Compare commits


123 Commits
3.0.10 ... main

Author SHA1 Message Date
BJ Dierkes
3527ade7b5
Merge pull request #766 from datafolklabs/feat/python-3.14
feat(dev): Python 3.14 default development target, drop 3.8 support
2025-11-02 20:51:58 -06:00
BJ Dierkes
dad85d287a feat(dev): Python 3.14 default development target, drop 3.8 support 2025-11-02 20:27:38 -06:00
BJ Dierkes
7c347abe43 fix(dev): add requests to dev dependencies
Resolves Issue #765
2025-11-02 18:45:21 -06:00
BJ Dierkes
bfb3b8c01b fix(ext.smtp): fix test related to mailpit api update 2025-11-02 18:42:22 -06:00
BJ Dierkes
3ee6b5157b feat(dev): update devbox 2025-11-02 18:41:48 -06:00
BJ Dierkes
cc857e70a7
Merge pull request #761 from datafolklabs/dep/update-pdm-lock
chore: Update pdm.lock
2025-11-02 17:49:34 -06:00
github-actions[bot]
ac410db146
chore: Update pdm.lock 2025-10-27 03:34:29 +00:00
BJ Dierkes
8b038170d8 feat: add direnv/devbox configurations and fix tests 2025-06-10 01:44:15 -05:00
BJ Dierkes
23b9b95d93 chore: add claude config 2025-06-09 23:32:38 -05:00
BJ Dierkes
9df6b3a3d3
Merge pull request #757 from datafolklabs/feat/github-actions
feat: setup github actions
2025-05-06 12:52:40 -05:00
BJ Dierkes
bd0d5eb878 ci: add minimal permissions for github actions 2025-05-06 12:29:15 -05:00
BJ Dierkes
80da0029ed ci: execute github actions on pull_request 2025-05-06 12:27:05 -05:00
BJ Dierkes
b46ce15833 feat: setup github actions 2025-05-06 12:23:13 -05:00
BJ Dierkes
41f2180976
Merge pull request #756 from sigma67/fix-all-exports
fix __all__
2025-05-06 09:16:11 -05:00
BJ Dierkes
8f5eaa817d chore: bump development version 2025-05-06 09:14:11 -05:00
sigma67
2bc559a30d fix __all__ 2025-05-06 10:07:03 +02:00
BJ Dierkes
c314892fb3 feat: bump version to 3.0.14 2025-05-05 11:30:13 -05:00
BJ Dierkes
822c22a1ff fix(ext_smtp): misc fixes and updates to better support content types
Ref: PR #742
2025-05-05 11:08:35 -05:00
BJ Dierkes
a7d004b82d resolve merge conflict 2025-05-05 08:24:49 -05:00
BJ Dierkes
ae763cf098 Merge branch 'pr753' 2025-05-05 08:07:41 -05:00
BJ Dierkes
2fb2940e60 chore: add contributors 2025-05-05 08:07:06 -05:00
BJ Dierkes
4669c7ad2e chore: update pdm 2025-05-05 08:05:34 -05:00
BJ Dierkes
12e4e62fe9 chore: resolve mypy errors 2025-05-05 08:05:34 -05:00
BJ Dierkes
9d51ed79b7 chore: update pdm 2025-05-05 07:58:39 -05:00
BJ Dierkes
32fe2685f0 chore: resolve mypy errors 2025-05-05 07:54:44 -05:00
blakejameson
ac887016c4 cleaning up some usage of "its" and "it's" 2025-04-14 11:40:44 -05:00
BJ Dierkes
a5a6a081f3 Merge branch 'main' of github.com:datafolklabs/cement 2025-04-13 10:28:09 -05:00
BJ Dierkes
aeb9715247 chore: update pdm deps 2025-04-13 10:27:52 -05:00
BJ Dierkes
67a1cd3030
Merge pull request #750 from sigma67/add-py-typed
add py.typed marker
2025-04-13 09:22:55 -05:00
Benedikt Putz
5314d21a5f add py.typed marker 2025-03-27 12:49:47 +01:00
BJ Dierkes
91953d07da fix(ext_jinja2): refactor hard-coded reference to jinja2 template handler
Issue: #749
2025-03-11 11:54:09 -05:00
BJ Dierkes
d8bd90b925 Bump version to 3.0.13 (dev) 2025-02-19 13:52:51 -06:00
BJ Dierkes
a4ce9e760f Tweak Docker py312 2025-01-21 12:50:29 -06:00
BJ Dierkes
659c783693 Fix RTFD 2024-11-10 01:43:18 -06:00
BJ Dierkes
da755b1539 Fix License - PyPi Didnt Like BSD-3-Clause 2024-11-10 01:33:48 -06:00
BJ Dierkes
02aa4f25eb Bump Version to 3.0.12 2024-11-10 01:08:13 -06:00
BJ Dierkes
44f2d1722a Add Notes for Windows/macOS Targeted Development 2024-11-10 01:00:57 -06:00
BJ Dierkes
82848eefb6 Fix CLI Smoke Tests / Make Python 3.13 Default 2024-11-09 23:54:20 -06:00
BJ Dierkes
2e30550863 Update Main Dockerfile for PDM (Remove Comments) 2024-11-09 22:58:41 -06:00
BJ Dierkes
8948247a80 Update Main Dockerfile for PDM 2024-11-09 22:58:18 -06:00
BJ Dierkes
97745773b4 Resolve MyPy/Ruff Linting Issue 2024-11-09 22:39:56 -06:00
BJ Dierkes
786b592de1 Add FrameworkError Message if Yaml/Jinja2 are Missing 2024-11-09 22:32:38 -06:00
BJ Dierkes
b617f7fb5b Resolve Merge Conflicts 2024-11-09 22:04:59 -06:00
BJ Dierkes
bee66a6712 Fix MyPy Linting 2024-11-09 22:00:40 -06:00
BJ Dierkes
b09a10f355 Update PDM Lock 2024-11-09 22:00:26 -06:00
BJ Dierkes
745790520f
Merge pull request #739 from sigma67/re-enable-313
re-enable 3.13 job
2024-11-09 21:47:10 -06:00
sigma67
f9dd4941fc
re-enable 3.13 job 2024-10-07 20:56:11 +02:00
BJ Dierkes
f8e9c42e77 Make Test Now Requires MyPy Compliance 2024-07-17 19:44:37 -05:00
BJ Dierkes
c1df8e5a72 Final Type Annotations Complete!!! Resolves PR #628 2024-07-17 19:36:18 -05:00
BJ Dierkes
32bf8acef3 Type Annotations: ext.yaml
- Resolves Issue #728
    - Related to PR #628
2024-07-17 19:32:55 -05:00
BJ Dierkes
f8f005d91b Type Annotations: ext.watchdog
- Resolves Issue #727
    - Related to PR #628
2024-07-17 19:23:53 -05:00
BJ Dierkes
d6862a4b4e Type Annotations: ext.tabulate
- Resolves Issue #726
    - Related to PR #628
2024-07-17 19:12:34 -05:00
BJ Dierkes
c719ea84a0 Type Annotations: ext.smtp
- Resolves Issue #725
    - Related to PR #628
2024-07-17 19:04:25 -05:00
BJ Dierkes
251bdcb3c1 Type Annotations: ext.scrub
- Resolves Issue #724
    - Related to PR #628
2024-07-17 17:13:53 -05:00
BJ Dierkes
b16b99a0cf Type Annotations: ext.redis
- Resolves Issue #723
    - Related to PR #628
2024-07-17 17:08:15 -05:00
BJ Dierkes
270343b0d9 Type Annotations: ext.print
- Resolves Issue #722
    - Related to PR #628
2024-07-17 16:56:30 -05:00
BJ Dierkes
8acdaf45ef Type Annotations: ext.plugin
- Resolves Issue #721
    - Related to PR #628
2024-07-17 16:39:13 -05:00
BJ Dierkes
b777433b9a Type Annotations: ext.mustache
- Resolves Issue #720
    - Related to PR #628
2024-07-17 16:21:26 -05:00
BJ Dierkes
4b8c2dc0cb Type Annotations: ext.memcached
- Resolves Issue #719
    - Related to PR #628
2024-07-17 16:09:52 -05:00
BJ Dierkes
d144f4db01 Type Annotations: ext.json
- Resolves Issue #717
    - Related to PR #628
2024-07-17 15:53:52 -05:00
BJ Dierkes
c733f671fc Type Annotations: ext.jinja2
- Resolves Issue #716
    - Related to PR #628
2024-07-17 15:34:31 -05:00
BJ Dierkes
fd655d898d Type Annotations: ext.generate
- Resolves Issue #715
    - Related to PR #628
2024-07-16 18:05:51 -05:00
BJ Dierkes
f6ccc8ee4c Type Annotations: ext.dummy
- Resolves Issue #714
    - Related to PR #628
2024-07-16 17:26:39 -05:00
BJ Dierkes
9d107507b2 Type Annotations: ext.daemon
- Resolves Issue #713
    - Related to PR #628
2024-07-16 16:56:09 -05:00
BJ Dierkes
a0e040d8ee Type Annotations: ext.configparser
- Resolves Issue #712
    - Related to PR #628
2024-07-16 16:25:10 -05:00
BJ Dierkes
68b371781e Type Annotations: ext.colorlog, ext.logging
- Resolves Issue #711
    - Resolves Issue #718
    - Related to PR #628
2024-07-16 15:43:20 -05:00
BJ Dierkes
06ebdd0821 Fix Ruff Error 2024-07-14 23:55:40 -05:00
BJ Dierkes
88ca56714d Update PDM Lock 2024-07-14 23:53:41 -05:00
BJ Dierkes
277c4391fe Fix Github Actions Checkout v4 2024-07-14 23:51:22 -05:00
BJ Dierkes
29eb84c96b Type Annotations: ext.argparse
- Resolves Issue #710
    - Related to PR #628
2024-07-14 23:46:10 -05:00
BJ Dierkes
5b86bc2286 Type Annotations: ext.alarm
- Resolves Issue #709
    - Related to PR #628
2024-07-14 22:24:39 -05:00
BJ Dierkes
54e58855ad Type Annotations: core.template
- Resolves Issue #708
    - Related to PR #628
2024-07-14 22:18:22 -05:00
BJ Dierkes
3de886dc62 Type Annotations: core.plugin
- Resolves Issue #707
    - Related to PR #628
2024-07-13 17:56:04 -05:00
BJ Dierkes
52e9ee20f9 Type Annotations: core.output
- Resolves Issue #706
    - Related to PR #628
2024-07-13 17:52:10 -05:00
BJ Dierkes
bc8d247a43 Type Annotations: core.mail
- Resolves Issue #704
    - Related to PR #628
2024-07-13 17:47:42 -05:00
BJ Dierkes
925c8c5d8b Type Annotations: core.log
- Resolves Issue #703
    - Related to PR #628
2024-07-13 17:43:37 -05:00
BJ Dierkes
358e29d66c Type Annotations: core.foundation, core.hook
- Resolves Issue #699
    - Resolves Issue #701
    - Related to PR #628
2024-07-13 17:34:47 -05:00
BJ Dierkes
0b0dbd28ce Fix CLI Smoke Tests - Resolves Issue #731 2024-07-13 02:59:55 -05:00
BJ Dierkes
ae5a9245cf Use Cement Version Utility for Packaging 2024-07-13 02:59:38 -05:00
BJ Dierkes
bda8d4817c Add Changelog for Issue 377 2024-07-13 02:19:45 -05:00
sigma67
5d2db8b839 use f-strings (#733) 2024-07-10 21:22:54 +02:00
BJ Dierkes
8b62e67252 Revert typing that used external typing_extensions
- Resolves Issue #732
2024-06-23 22:54:27 -05:00
BJ Dierkes
f75e810f7d Type Annotations: core.extension, core.handler
- Resolves Issue #698
- Resolves Issue #700
- Related to PR #628
2024-06-23 22:44:18 -05:00
BJ Dierkes
e502cab870 Type Annotations: core.deprecations
- Resolves Issue #696
- Related to PR #628
2024-06-23 21:16:55 -05:00
BJ Dierkes
3a636dbfbd Type Annotations: core.controller
- Resolves Issue #695
- Related to PR #628
2024-06-23 21:13:56 -05:00
BJ Dierkes
18d353eedc Minor Refactor for Typing 2024-06-23 20:56:50 -05:00
BJ Dierkes
3c750a16ce Type Annotations: core.config
- Resolves Issue #694
- Related to PR #628
2024-06-23 20:30:26 -05:00
BJ Dierkes
0376695cd8 Fix Issue Number 693 2024-06-23 20:19:34 -05:00
BJ Dierkes
01bcc70e0c Type Annotations: core.cache
- Resolves Issue #693
- Related to PR #628
2024-06-23 20:17:40 -05:00
BJ Dierkes
b5f579a499 Type Annotations: core.interface, core.arg
- Resolves Issue #692
- Resolves Issue #702
- Related to PR #628
2024-06-23 19:57:25 -05:00
BJ Dierkes
e50bb46469 Type Annotations: utils.version
- Resolves Issue #691
- Related to PR #628
2024-06-23 19:36:30 -05:00
BJ Dierkes
a390ecf16f Fix License Classifier 2024-06-22 21:44:21 -05:00
BJ Dierkes
795cfa51a6 Add Contributor Links 2024-06-22 20:37:02 -05:00
BJ Dierkes
7bd519dde9 Add Contributors 2024-06-22 20:35:44 -05:00
BJ Dierkes
767699326a Type Annotations
- Resolves Issue #690 -> utils.shell
- Resolves Issue #697 -> core.exc
- Resolves Issue #705 -> core.meta
2024-06-22 20:20:04 -05:00
BJ Dierkes
128e6665e9 Type Annotations: utils.misc
- Closes Issue #689
- Related to PR #628
2024-06-22 16:06:15 -05:00
BJ Dierkes
9b12f1a93b Adjust Changelog 2024-06-22 02:43:46 -05:00
BJ Dierkes
a46dfb86ad Adjust Changelog 2024-06-22 02:42:01 -05:00
BJ Dierkes
b49097de47 Type Annotations: utils.fs
- Resolves Issue #688
- Related to PR #628
2024-06-22 02:40:07 -05:00
BJ Dierkes
527cae8c23 Disable Travis Tests for 3.13 2024-06-22 01:34:48 -05:00
BJ Dierkes
44fd94966a Minor Tweaks for Dev Configs 2024-06-22 01:28:06 -05:00
BJ Dierkes
042c04c8c2 Merge branch 'issue-683' of github.com:sigma67/cement into issue-683 2024-06-22 00:50:16 -05:00
BJ Dierkes
5b242d0842 Resolves Issue #686 2024-06-22 00:44:32 -05:00
sigma67
f5575d8896
add travis 3.13 workaround 2024-06-13 21:23:47 +02:00
sigma67
fe164a0d3d
add 3.13 image and CI 2024-06-13 21:12:31 +02:00
sigma67
df93f386df
lint 2024-06-13 20:49:20 +02:00
sigma67
1701c49859
use recursive deps 2024-06-13 20:45:39 +02:00
sigma67
92cf147a64
remove cli/contrib 2024-06-13 20:41:26 +02:00
BJ Dierkes
5d1e32bbf6 Tweak README 2024-05-20 00:53:24 -04:00
BJ Dierkes
70992b2f6d Additional modifications to PR #681 2024-05-20 00:32:23 -04:00
sigma67
0866e525af
Update docker/Dockerfile.dev-py38 2024-03-15 20:52:11 +01:00
sigma67
60152bb78e
move coveragepy settings 2024-03-12 19:57:55 +01:00
sigma67
be23bae844
fix travis install 2024-03-12 19:46:50 +01:00
sigma67
e7e7fc35a1
fix travis 2024-03-12 19:41:41 +01:00
sigma67
28a5d1aa9a
fix make dist 2024-03-12 19:35:27 +01:00
sigma67
59aff0c640
.readthedocs.yaml fix requirements 2024-03-12 19:32:46 +01:00
sigma67
64d98fe4c7
fixup! use pyproject.toml 2024-03-11 19:46:56 +01:00
sigma67
a682491da8
use pyproject.toml 2024-03-11 13:19:08 +01:00
BJ Dierkes
ef4fa5b3d9 Fix RTD: PIP Install 2024-02-29 12:15:26 -06:00
BJ Dierkes
e5fc34640e Fix RTD -> Libmemcached 2024-02-29 12:06:20 -06:00
BJ Dierkes
595e404266 Fix Travis Tests for 3.11 Version Bump 2024-02-29 11:52:33 -06:00
BJ Dierkes
74e13524b1 Fix RTD 2024-02-29 11:50:00 -06:00
BJ Dierkes
a287f75a37 Remove duplicate RTD config 2024-02-29 11:44:51 -06:00
152 changed files with 5767 additions and 22568 deletions


@ -1,2 +0,0 @@
[run]
omit = cement/cli/contrib/*


@ -1,2 +1,2 @@
.vagrant
.git
.venv

14
.envrc Normal file

@ -0,0 +1,14 @@
# Automatically sets up your devbox environment whenever you cd into this
# directory via our direnv integration:
eval "$(devbox generate direnv --print-envrc)"
# check out https://www.jetpack.io/devbox/docs/ide_configuration/direnv/
# for more details
source_env_if_exists .envrc.local
export SMTP_HOST=localhost
export SMTP_PORT=1025
export MEMCACHED_HOST=localhost
export REDIS_HOST=localhost

98
.github/workflows/build_and_test.yml vendored Normal file

@ -0,0 +1,98 @@
name: Build & Test
permissions:
contents: read
pull-requests: write
on: [pull_request]
env:
SMTP_HOST: localhost
SMTP_PORT: 1025
MEMCACHED_HOST: localhost
REDIS_HOST: localhost
jobs:
comply:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: ConorMacBride/install-package@v1
with:
apt: libmemcached-dev
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.x"
architecture: "x64"
- name: Setup PDM
uses: pdm-project/setup-pdm@v4
- name: Install dependencies
run: pdm install
- name: Make Comply
run: make comply
test:
needs: comply
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: ConorMacBride/install-package@v1
with:
apt: libmemcached-dev
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.x"
architecture: "x64"
- uses: hoverkraft-tech/compose-action@v2.0.1
with:
compose-file: "./docker/compose-services-only.yml"
- name: Setup PDM
uses: pdm-project/setup-pdm@v4
- name: Install dependencies
run: pdm install
- name: Make Test
run: make test
test-all:
needs: test
runs-on: ${{ matrix.os }}
strategy:
matrix:
# FIXME ?
# os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-latest]
python-version: ["3.9", "3.10", "3.11", "3.12", "3.13", "3.14", "pypy3.10"]
steps:
- uses: actions/checkout@v4
- uses: ConorMacBride/install-package@v1
with:
apt: libmemcached-dev
- uses: hoverkraft-tech/compose-action@v2.0.1
with:
compose-file: "./docker/compose-services-only.yml"
- name: Setup PDM
uses: pdm-project/setup-pdm@v4
- name: Install dependencies
run: pdm install
- name: Make Test
run: make test
cli-smoke-test:
needs: test-all
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: hoverkraft-tech/compose-action@v2.0.1
with:
compose-file: "./docker-compose.yml"
- name: CLI Smoke Tests
run: ./scripts/cli-smoke-test.sh
- if: always()
name: Review Output
run: cat ./tmp/cli-smoke-test.out

14
.github/workflows/pdm.yml vendored Normal file

@ -0,0 +1,14 @@
name: Update dependencies
on:
schedule:
- cron: "5 3 * * 1"
jobs:
update-dependencies:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Update dependencies
uses: pdm-project/update-deps-action@main

10
.gitignore vendored

@ -52,7 +52,7 @@ pip-log.txt
# Documentation
doc/build
# Unit test / coverage reports
.coverage
.coverage*
htmlcov
coverage-report
.tox
@ -77,4 +77,10 @@ dump.rdb
.pytest_cache
# VS Code
.vscode/
# PDM
.venv
__pypackages__
.pdm.toml
.pdm-python


@ -9,15 +9,20 @@ version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
python: "3.12"
apt_packages:
- libmemcached-dev
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
configuration: docs/source/conf.py
# We recommend specifying your dependencies to enable reproducible builds:
# https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
# install:
# - requirements: docs/requirements.txt
python:
install:
- method: pip
path: .
extra_requirements:
- docs


@ -1,5 +1,9 @@
language: python
sudo: false
before_install:
- sudo apt-get -y install pipx python3-venv
- pipx ensurepath
- pipx install pdm
script: ./scripts/travis.sh
os:
- linux
@ -43,6 +47,18 @@ matrix:
- DOCKER_COMPOSE_VERSION=v2.17.3
- SMTP_HOST=localhost
- SMTP_PORT=1025
- python: "3.13"
dist: "jammy"
sudo: true
env:
- DOCKER_COMPOSE_VERSION=v2.17.3
- SMTP_HOST=localhost
- SMTP_PORT=1025
# below is a workaround due to invalid travis Python version
- PDM_IGNORE_ACTIVE_VENV=true
- PYTHON_VERSION=3.13
services:
- memcached
- redis-server


@ -1,8 +1,160 @@
# ChangeLog
## 3.0.15 - DEVELOPMENT (will be released as stable/3.0.16)
Bugs:
- None
Features:
- None
Refactoring:
- `[dev]` Python 3.14 Default Development Target
- `[dev]` Remove Support for Python 3.8 (EOL)
Misc:
- None
Deprecations:
- None
## 3.0.14 - May 5, 2025
Bugs:
- `[ext_jinja2]` Refactor hard-coded reference to `jinja2` template handler.
- [Issue #749](https://github.com/datafolklabs/cement/issues/749)
- `[ext_smtp]` Misc fixes and updates to better support content types.
- [PR #742](https://github.com/datafolklabs/cement/pull/742)
Features:
- None
Refactoring:
- None
Misc:
- None
Deprecations:
- None
## 3.0.12 - Nov 10, 2024
Bugs:
- None
Features:
- None
Refactoring:
- `[dev]` Refactor String Substitutions (`%s`) with F-Strings
- [Issue #733](https://github.com/datafolklabs/cement/issues/733)
- `[dev]` Allow line lengths up to 100 characters (previously 78)
- `[dev]` Modernize Packaging (pyproject.toml, PDM)
- [Issue #680](https://github.com/datafolklabs/cement/issues/680)
- [PR #681](https://github.com/datafolklabs/cement/pull/681)
- `[dev]` Implement Ruff for Code Compliance (replaces Flake8)
- [Issue #671](https://github.com/datafolklabs/cement/issues/671)
- [PR #681](https://github.com/datafolklabs/cement/pull/681)
- `[dev]` Remove Python 3.5, 3.6, 3.7 Docker Dev Targets
- `[dev]` Added Python 3.13 Dev Target
- `[dev]` Testing now requires typing compliance (`make test` -> `make comply-mypy`)
- `[dev]` Type Annotations (related: [PR #628](https://github.com/datafolklabs/cement/pull/628))
- `[core.arg]` [Issue #692](https://github.com/datafolklabs/cement/issues/692)
- `[core.cache]` [Issue #693](https://github.com/datafolklabs/cement/issues/693)
- `[core.config]` [Issue #694](https://github.com/datafolklabs/cement/issues/694)
- `[core.controller]` [Issue #695](https://github.com/datafolklabs/cement/issues/695)
- `[core.deprecations]` [Issue #696](https://github.com/datafolklabs/cement/issues/696)
- `[core.exc]` [Issue #697](https://github.com/datafolklabs/cement/issues/697)
- `[core.extension]` [Issue #698](https://github.com/datafolklabs/cement/issues/698)
- `[core.foundation]` [Issue #699](https://github.com/datafolklabs/cement/issues/699)
- `[core.handler]` [Issue #700](https://github.com/datafolklabs/cement/issues/700)
- `[core.hook]` [Issue #701](https://github.com/datafolklabs/cement/issues/701)
- `[core.interface]` [Issue #702](https://github.com/datafolklabs/cement/issues/702)
- `[core.log]` [Issue #703](https://github.com/datafolklabs/cement/issues/703)
- `[core.mail]` [Issue #704](https://github.com/datafolklabs/cement/issues/704)
- `[core.meta]` [Issue #705](https://github.com/datafolklabs/cement/issues/705)
- `[core.output]` [Issue #706](https://github.com/datafolklabs/cement/issues/706)
- `[core.plugin]` [Issue #707](https://github.com/datafolklabs/cement/issues/707)
- `[core.template]` [Issue #708](https://github.com/datafolklabs/cement/issues/708)
- `[ext.alarm]` [Issue #709](https://github.com/datafolklabs/cement/issues/709)
- `[ext.argparse]` [Issue #710](https://github.com/datafolklabs/cement/issues/710)
- `[ext.colorlog]` [Issue #711](https://github.com/datafolklabs/cement/issues/711)
- `[ext.configparser]` [Issue #712](https://github.com/datafolklabs/cement/issues/712)
- `[ext.daemon]` [Issue #713](https://github.com/datafolklabs/cement/issues/713)
- `[ext.dummy]` [Issue #714](https://github.com/datafolklabs/cement/issues/714)
- `[ext.generate]` [Issue #715](https://github.com/datafolklabs/cement/issues/715)
- `[ext.jinja2]` [Issue #716](https://github.com/datafolklabs/cement/issues/716)
- `[ext.json]` [Issue #717](https://github.com/datafolklabs/cement/issues/717)
- `[ext.logging]` [Issue #718](https://github.com/datafolklabs/cement/issues/718)
- `[ext.memcached]` [Issue #719](https://github.com/datafolklabs/cement/issues/719)
- `[ext.mustache]` [Issue #720](https://github.com/datafolklabs/cement/issues/720)
- `[ext.plugin]` [Issue #721](https://github.com/datafolklabs/cement/issues/721)
- `[ext.print]` [Issue #722](https://github.com/datafolklabs/cement/issues/722)
- `[ext.redis]` [Issue #723](https://github.com/datafolklabs/cement/issues/723)
- `[ext.scrub]` [Issue #724](https://github.com/datafolklabs/cement/issues/724)
- `[ext.smtp]` [Issue #725](https://github.com/datafolklabs/cement/issues/725)
- `[ext.tabulate]` [Issue #726](https://github.com/datafolklabs/cement/issues/726)
- `[ext.watchdog]` [Issue #727](https://github.com/datafolklabs/cement/issues/727)
- `[ext.yaml]` [Issue #728](https://github.com/datafolklabs/cement/issues/728)
- `[utils.fs]` [Issue #688](https://github.com/datafolklabs/cement/issues/688)
- `[utils.misc]` [Issue #689](https://github.com/datafolklabs/cement/issues/689)
- `[utils.shell]` [Issue #690](https://github.com/datafolklabs/cement/issues/690)
- `[utils.version]` [Issue #691](https://github.com/datafolklabs/cement/issues/691)
Misc:
- `[cli]` Move CLI dependencies to `cement[cli]` extras package, and remove included/nested `contrib` sources. See note on 'Potential Upgrade Incompatibility'
- [Issue #679](https://github.com/datafolklabs/cement/issues/679)
Deprecations:
- None
Special Recognitions:
Many thanks to [@sigma67](https://github.com/sigma67) for their contributions in modernizing the packaging system. Cement was started in 2009 and has some lingering technical debt that is now being addressed. Their contribution was a major help in moving off of setuptools and onto PDM and `pyproject.toml`, along with initial implementations of Ruff for a new generation of code compliance. I sincerely appreciate your help!
Many thanks to [@rednar](https://github.com/rednar) for their contributions toward adding type annotations in [PR #628](https://github.com/datafolklabs/cement/pull/628). The PR was too large to merge directly, but it is serving as a guide for finally adding type annotations to Cement. It was a massive effort, and having this work available to steer the process is very helpful even though it will not be merged directly.
Potential Upgrade Incompatibility:
This update removes included `contrib` libraries that are dependencies for the `cement` command line tool to function (PyYAML, and Jinja2). The dependencies are now included via the `cement[cli]` extras package.
This is not an upgrade incompatibility in the core Cement code, and it would not affect any applications that are built on Cement. That said, it does have the potential to break any automation or other uses of the `cement` command line tool.
Resolution:
```
pip install cement[cli]
```
## 3.0.10 - Feb 28, 2024
Bugs:
- `[ext.logging]` Support `logging.propagate` to avoid duplicate log entries
- [Issue #310](https://github.com/datafolklabs/cement/issues/310)
@ -24,7 +176,7 @@ Features:
- [PR #669](https://github.com/datafolklabs/cement/pull/669)
Refactoring:
- `[core.plugin]` Deprecate the use of `imp` in favor of `importlib`
- [Issue #386](https://github.com/datafolklabs/cement/issues/386)
@ -103,7 +255,7 @@ Bugs:
- `[ext.argparse]` Parser (`self._parser`) not accessible inside `_pre_argument_parsing` when `stacked_type = 'embedded'`
- [Issue #569](https://github.com/datafolklabs/cement/issues/569)
- `[ext.configparser]` Overriding config options with environment variables doesn't work correctly with surrounding underscore characters
- [Issue #590](https://github.com/datafolklabs/cement/issues/590)
- `[utils.fs]` Fix bug where trailing slash was not removed in `fs.backup()` of a directory.
- [Issue #610](https://github.com/datafolklabs/cement/issues/610)

78
CLAUDE.md Normal file

@ -0,0 +1,78 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
**Testing and Compliance:**
- `make test` - Run full test suite with coverage and PEP8 compliance
- `make test-core` - Run only core library tests
- `make comply` - Run both ruff and mypy compliance checks
- `make comply-ruff` - Run ruff linting
- `make comply-ruff-fix` - Auto-fix ruff issues
- `make comply-mypy` - Run mypy type checking
- `pdm run pytest --cov=cement tests/` - Direct pytest execution
- `pdm run pytest --cov=cement.core tests/core` - Test only core components
**Development Environment:**
- `pdm venv create && pdm install` - Set up local development environment
- `pdm run cement --help` - Run the cement CLI
**Documentation:**
- `make docs` - Build Sphinx documentation
**Build and Distribution:**
- `pdm build` - Build distribution packages
## Architecture Overview
Cement is a CLI application framework built around a handler/interface pattern with the following core concepts:
**Core Application (`cement.core.foundation.App`):**
- The main `App` class in `cement/core/foundation.py` is the central orchestrator
- Uses a Meta class pattern for configuration
- Manages lifecycle through setup(), run(), and close() methods
- Supports signal handling and application reloading
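The lifecycle described above can be sketched in plain Python. This is a simplified illustration of the pattern only (Meta-based configuration, `setup()`/`run()`/`close()`, context-manager support), not Cement's actual implementation:

```python
# Simplified sketch of the App lifecycle described above. This mirrors the
# shape of the pattern -- it is NOT Cement's actual implementation.

class App:
    class Meta:
        # declarative configuration, meant to be overridden by subclasses
        label = 'myapp'

    def __init__(self):
        self._setup_complete = False

    def setup(self):
        # in the real framework: load config, register handlers, run hooks
        self._setup_complete = True

    def run(self):
        if not self._setup_complete:
            raise RuntimeError('setup() must be called before run()')
        return f'{self.Meta.label} running'

    def close(self):
        # teardown: run close hooks, release resources
        self._setup_complete = False

    def __enter__(self):
        # entering the context manager drives setup() automatically
        self.setup()
        return self

    def __exit__(self, *exc):
        self.close()


with App() as app:
    print(app.run())  # myapp running
```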
**Handler System:**
- Interface/Handler pattern where interfaces define contracts and handlers provide implementations
- Core handlers: arg, config, log, output, cache, controller, extension, plugin, template
- Handlers are registered and resolved through `HandlerManager`
- Located in `cement/core/` with corresponding modules (arg.py, config.py, etc.)
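The registration/resolution flow above can be sketched roughly like this (class and method names are illustrative only, not Cement's actual `HandlerManager` API):

```python
# Minimal sketch of the interface/handler pattern described above.
# Names here are illustrative -- they are not Cement's actual API.
import json

class OutputInterface:
    """The interface defines the contract handlers must fulfill."""
    interface_name = 'output'

    def render(self, data):
        raise NotImplementedError

class JsonOutputHandler(OutputInterface):
    """A handler is one concrete implementation of an interface."""
    label = 'json'

    def render(self, data):
        return json.dumps(data)

class HandlerManager:
    """Registers handlers and resolves them by (interface, label)."""
    def __init__(self):
        self._handlers = {}

    def register(self, handler_class):
        key = (handler_class.interface_name, handler_class.label)
        self._handlers[key] = handler_class

    def resolve(self, interface_name, label):
        return self._handlers[(interface_name, label)]()

hm = HandlerManager()
hm.register(JsonOutputHandler)
handler = hm.resolve('output', 'json')
print(handler.render({'ok': True}))  # {"ok": true}
```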
**Extensions System:**
- Extensions in `cement/ext/` provide additional functionality
- Examples: ext_yaml.py, ext_jinja2.py, ext_argparse.py, etc.
- Optional dependencies managed through pyproject.toml extras
**CLI Structure:**
- Main CLI application in `cement/cli/main.py`
- Uses CementApp class that extends core App
- Includes code generation templates in `cement/cli/templates/`
**Controllers:**
- MVC-style controllers handle command routing
- Base controller pattern in controllers/base.py files
- Support nested sub-commands and argument parsing
## Key Development Practices
- 100% test coverage required (pytest with coverage reporting)
- 100% PEP8 compliance enforced via ruff
- Type annotation compliance via mypy
- PDM for dependency management
- Zero external dependencies for core framework (optional for extensions)
## Testing Notes
- Tests located in `tests/` directory mirroring source structure
- Core tests can run independently via `make test-core`
- Coverage reports generated in `coverage-report/` directory
## Extension Development
When working with extensions:
- Check `cement/ext/` for existing extension patterns
- Optional dependencies declared in pyproject.toml under `[project.optional-dependencies]`
- Extensions follow naming pattern `ext_<name>.py`
- Must implement proper interface contracts
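As a rough illustration of these conventions (hypothetical names throughout; `FakeApp` stands in for a live Cement application so the sketch is self-contained):

```python
# Hypothetical skeleton of an extension module (e.g. ext_hello.py), following
# the conventions listed above. All names are illustrative, not Cement's API.

class HelloOutputHandler:
    interface_name = 'output'
    label = 'hello'

    def render(self, data):
        return f"hello, {data.get('name', 'world')}"

def load(app):
    # extensions expose a load(app) entry point; the framework calls it
    # when the extension loads, giving it a chance to register handlers
    app.register(HelloOutputHandler)

class FakeApp:
    """Stand-in for the application object an extension would receive."""
    def __init__(self):
        self.registered = []

    def register(self, handler_class):
        self.registered.append(handler_class)

app = FakeApp()
load(app)
print(app.registered[0].label)  # hello
```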


@ -19,5 +19,7 @@ documentation, or testing:
- Stelios Tymvios (namedLambda)
- Spyros Vlachos (devspyrosv)
- Joe Roberts (jjroberts)
- Mudassir Chapra(muddi900)
- Mudassir Chapra (muddi900)
- Christian Hengl (rednar)
- sigma67
- Blake Jameson (blakejameson)


@ -1,9 +1,18 @@
FROM python:3.11-alpine
FROM python:3.14-alpine
LABEL MAINTAINER="BJ Dierkes <derks@datafolklabs.com>"
ENV PS1="\[\e[0;33m\]|> cement <| \[\e[1;35m\]\W\[\e[0m\] \[\e[0m\]# "
ENV PATH="${PATH}:/root/.local/bin"
WORKDIR /src
COPY . /src
RUN python setup.py install \
&& rm -rf /src
COPY docker/vimrc /root/.vimrc
COPY docker/bashrc /root/.bashrc
RUN apk update \
&& apk add pipx vim \
&& ln -sf /usr/bin/vim /usr/bin/vi \
&& pipx install pdm
RUN pdm build
RUN pip install `ls dist/cement-*.tar.gz`[cli]
WORKDIR /
ENTRYPOINT ["/usr/local/bin/cement"]


@ -1,5 +0,0 @@
recursive-include *.py
include setup.cfg
include README.md CHANGELOG.md LICENSE CONTRIBUTORS.md
include *.txt
recursive-include cement/cli/templates/generate *


@ -1,44 +1,41 @@
.PHONY: dev test test-core comply-fix docs clean dist dist-upload docker docker-push
dev:
docker-compose up -d
docker-compose exec cement pip install -r requirements-dev.txt
docker-compose exec cement python setup.py develop
docker-compose exec cement /bin/bash
docker compose up -d
docker compose exec cement pdm install
docker compose exec cement-py39 pdm install
docker compose exec cement-py310 pdm install
docker compose exec cement-py311 pdm install
docker compose exec cement-py312 pdm install
docker compose exec cement-py313 pdm install
docker compose exec cement /bin/bash
test: comply
python -m pytest -v --cov=cement --cov-report=term --cov-report=html:coverage-report --capture=sys tests/
pdm run pytest --cov=cement tests
test-core: comply
python -m pytest -v --cov=cement.core --cov-report=term --cov-report=html:coverage-report --capture=sys tests/core
pdm run pytest --cov=cement.core tests/core
virtualenv:
virtualenv --prompt '|> cement <| ' env
env/bin/pip install -r requirements-dev.txt
env/bin/python setup.py develop
pdm venv create
pdm install
@echo
@echo "VirtualENV Setup Complete. Now run: source env/bin/activate"
@echo "VirtualENV Setup Complete. Now run: eval $(pdm venv activate)"
@echo
virtualenv-windows:
virtualenv --prompt '|> cement <| ' env-windows
env-windows\\Scripts\\pip.exe install -r requirements-dev-windows.txt
env-windows\\Scripts\\python.exe setup.py develop
@echo
@echo "VirtualENV Setup Complete. Now run: .\env-windows\Scripts\activate.ps1"
@echo
comply: comply-ruff comply-mypy
comply:
flake8 cement/ tests/
comply-ruff:
pdm run ruff check cement/ tests/
comply-fix:
autopep8 -ri cement/ tests/
comply-ruff-fix:
pdm run ruff check --fix cement/ tests/
comply-typing:
mypy ./cement
comply-mypy:
pdm run mypy
docs:
python setup.py build_sphinx
cd docs; pdm run sphinx-build ./source ./build; cd ..
@echo
@echo DOC: "file://"$$(echo `pwd`/docs/build/html/index.html)
@echo
@ -47,13 +44,8 @@ clean:
find . -name '*.py[co]' -delete
rm -rf doc/build
dist: clean
rm -rf dist/*
python setup.py sdist
python setup.py bdist_wheel
dist-upload:
twine upload dist/*
dist:
pdm build
docker:
docker build -t datafolklabs/cement:latest .

167
README.md

@ -5,9 +5,22 @@
[![Continuous Integration Status](https://app.travis-ci.com/datafolklabs/cement.svg?branch=master)](https://app.travis-ci.com/github/datafolklabs/cement/)
Cement is an advanced Application Framework for Python, with a primary focus on Command Line Interfaces (CLI). Its goal is to introduce a standard, and feature-full platform for both simple and complex command line applications as well as support rapid development needs without sacrificing quality. Cement is flexible, and it's use cases span from the simplicity of a micro-framework to the complexity of a mega-framework. Whether it's a single file script, or a multi-tier application, Cement is the foundation you've been looking for.
Cement is an advanced Application Framework for Python, with a primary focus on Command Line Interfaces (CLI). Its goal is to introduce a standard and feature-full platform for both simple and complex command line applications as well as support rapid development needs without sacrificing quality. Cement is flexible, and its use cases span from the simplicity of a micro-framework to the complexity of a mega-framework. Whether it's a single file script or a multi-tier application, Cement is the foundation you've been looking for.
The first commit to Git was on Dec 4, 2009. Since then, the framework has seen several iterations in design and has continued to grow and improve since its inception. Cement is the most stable and complete framework for command line and backend application development.
## Installation
```
pip install cement
```
Optional CLI Extras (for development):
```
pip install cement[cli]
```
The first commit to Git was on Dec 4, 2009. Since then, the framework has seen several iterations in design, and has continued to grow and improve since it's inception. Cement is the most stable, and complete framework for command line and backend application development.
## Core Features
@@ -25,12 +38,18 @@ Cement core features include (but are not limited to):
- Controller handler supports sub-commands, and nested controllers
- Hook support adds a bit of magic to apps and also ties into framework
- Zero external dependencies* (not including optional extensions)
- 100% test coverage (`pytest`)
- 100% PEP8 compliant (`flake8`)
- 100% test coverage (`pytest`, `coverage`)
- 100% PEP8 compliance (`ruff`)
- Type annotation compliance (`mypy`)
- Extensive API Reference (`sphinx`)
- Tested on Python 3.8+
- Tested on Python 3.9+
*Some optional extensions that are shipped with the mainline Cement sources do require external dependencies. It is the responsibility of the application developer to include these dependencies along with their application, as Cement explicitly does not include them.*
## Optional Extensions
Some extensions that are shipped with the mainline Cement source do require external dependencies. It is the responsibility of the application developer to include these dependencies along with their application, as Cement explicitly does not include them. Dependencies can be installed via each extension's optional package (ex: `cement[colorlog]`, `cement[redis]`, etc.).
See: [https://docs.builtoncement.com/extensions](https://docs.builtoncement.com/extensions)
## More Information
@@ -46,11 +65,12 @@ Cement core features include (but are not limited to):
The Cement CLI Application Framework is Open Source and is distributed under the BSD License (three clause). Please see the LICENSE file included with this software.
## Development
### Docker
This project includes a `docker-compose` configuration that sets up all required services, and dependencies for development and testing. This is the recommended path for local development, and is the only fully supported option.
This project includes a Docker Compose configuration that sets up all required services and dependencies for development and testing. This is the recommended path for local development, and is the only fully supported option.
The following creates all required Docker containers, and launches a Bash shell within the `cement` dev container for development.
```
@@ -62,26 +82,30 @@ $ make dev
The above is the equivalent of running:
```
$ docker-compose up -d
$ docker compose up -d
$ docker-compose exec cement /bin/bash
$ docker compose exec cement /bin/bash
```
All execution is done *inside the docker containers*.
**Testing Alternative Versions of Python**
The latest stable version of Python 3 is the default, and target version accessible as the `cement` container within Docker Compose. For testing against alternative versions of python, additional containers are created (ex: `cement-py38`, `cement-py39`, etc). You can access these containers via:
The latest stable version of Python 3 is the default and target version, accessible as the `cement` container within Docker Compose. For testing against alternative versions of Python, additional containers are created (ex: `cement-py39`, `cement-py310`, etc.). You can access these containers via:
```
$ docker-compose ps
Name                           Command                          State   Ports
-----------------------------------------------------------------------------
cement_cement-py39_1           /bin/bash                        Up
cement_cement-py310_1          /bin/bash                        Up
cement_cement-py311_1          /bin/bash                        Up
cement_cement-py312_1          /bin/bash                        Up
cement_cement-py313_1          /bin/bash                        Up
cement_cement_1                /bin/bash                        Up
cement_memcached_1             docker-entrypoint.sh memcached   Up      11211/tcp
cement_redis_1                 docker-entrypoint.sh redis ...   Up      6379/tcp
$ docker-compose exec cement-py39 /bin/bash
@@ -90,78 +114,85 @@ $ docker-compose exec cement-py39 /bin/bash
```
### VirtualENV

A traditional VirtualENV helper is available:

```
$ make virtualenv
$ source env/bin/activate
|> cement <| $
```

### Vagrant

An alternative option is included to run Vagrant for development. This is partially supported, primarily for the purpose of developing/testing on Windows as well as testing specific issues on target operating systems.

To see a list of configured systems:

```
$ vagrant status
```

#### Linux

```
$ vagrant up linux
$ vagrant ssh linux
vagrant@linux $ cd /vagrant
vagrant@linux $ bash scripts/vagrant/bootstrap.sh
vagrant@linux $ make virtualenv
vagrant@linux $ source env/bin/activate
|> cement >| $
```

#### Windows

*Windows development and support is not 100% complete. Cement is known to run and work on Windows, however it is not a primary target for development and as such the setup is not as streamlined and currently has several known errors.*

The following assumes you're running these two initial commands from a unix based system:

```
$ make clean
$ vagrant up windows
```

RDP or Login to Desktop/Console, and open a PowerShell terminal:

```
C:\> cd C:\Vagrant
C:\Vagrant> powershell.exe scripts\vagrant\bootstrap.ps1
C:\Vagrant> make virtualenv-windows
C:\Vagrant> .\env-windows\Scripts\activate.ps1
C:\Vagrant> make test-core
```

### Windows Targeted Development

*Windows development and support is not 100% complete. Applications built on Cement are known to run and work well on Windows, however it is not a primary target for development and as such the setup is not as streamlined and currently has several known issues.*

If you are developing on Windows, the recommended path is still Docker. However, if you are specifically targeting development *for* Windows you will want to run Python/Cement natively, which will require setting up a development environment on the Windows host.

This is very rough (future doc coming), however the following will be required:

- Python 3.x (latest stable preferred)
- pip
- pipx
- pdm
- Visual C++ 14.0 or Greater Build Tools
  - Including: CMake

Assuming Python/pip are installed, the following will install PDM:

```
pip install pipx
pipx install pdm
```

Assuming the C++ Build Tools are installed, the following will create a development virtual env:

```
pdm venv create
pdm install --without memcached
```

You can then run the core tests:

```
pdm run pytest --cov=cement.core tests/core
```
*Note that only the core library is fully tested on Windows.*
Please explore the Makefile for helpers that may or may not work. For example, the following will run the same tests as the above `pdm run pytest` command:
```
make test-core
```
And, you can run Cement CLI via:
```
pdm run cement --help
```
### macOS Targeted Development
Similar to the above, if you are developing on macOS the recommended path is still Docker. However, if you are specifically targeting development *for* macOS you will want to run Python/Cement natively, which will require setting up a development environment on the macOS host.
This is less nuanced than Windows, however it still requires some dependencies that will not be fully covered here (example: memcached). The following will get you set up to run the core library tests.
```
pip install pipx
pipx install pdm
pdm venv create
pdm install --without memcached
make test-core
```
And, you can run Cement CLI via:
```
pdm run cement --help
```
### Running Tests and Compliance
Cement has a strict policy that all code and tests meet PEP8 guidelines, therefore `flake8` is called before any unit tests run. All code submissions require 100% test coverage and PEP8 compliance:
Cement has a strict policy that all code and tests meet PEP8 guidelines; therefore, `ruff` is called before any unit tests run. All code submissions require 100% test coverage and PEP8 compliance:
Execute the following to run all compliance and unit tests:


@@ -10,3 +10,21 @@ from .ext.ext_argparse import expose as ex
from .utils.misc import init_defaults, minimal_logger
from .utils import misc, fs, shell
from .utils.version import get_version
__all__ = [
"App",
"TestApp",
"Interface",
"Handler",
"FrameworkError",
"InterfaceError",
"CaughtSignal",
"Controller",
"ex",
"init_defaults",
"minimal_logger",
"misc",
"fs",
"shell",
"get_version",
]
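The `__all__` list above controls which names `from cement import *` exposes. A minimal, self-contained sketch of that behavior, using a throwaway stand-in module (the module name and its contents here are illustrative, not the real `cement` package):

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway module with an __all__ list, mirroring the export
# list added to cement/__init__.py (the names here are stand-ins).
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "demo_exports.py"), "w") as f:
    f.write(textwrap.dedent("""
        __all__ = ["App", "get_version"]
        App = object
        def get_version():
            return "3.0.14"
        _internal = "not exported"
    """))
sys.path.insert(0, tmpdir)

ns = {}
exec("from demo_exports import *", ns)  # star-import honors __all__
print(sorted(k for k in ns if not k.startswith("__")))
# ['App', 'get_version']  -- _internal is excluded
```

Without `__all__`, a star-import would instead pull in every public top-level name, including re-exported submodules.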


@@ -1,28 +0,0 @@
Copyright 2007 Pallets
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,37 +0,0 @@
"""Jinja is a template engine written in pure Python. It provides a
non-XML syntax that supports inline expressions and an optional
sandboxed environment.
"""
from .bccache import BytecodeCache as BytecodeCache
from .bccache import FileSystemBytecodeCache as FileSystemBytecodeCache
from .bccache import MemcachedBytecodeCache as MemcachedBytecodeCache
from .environment import Environment as Environment
from .environment import Template as Template
from .exceptions import TemplateAssertionError as TemplateAssertionError
from .exceptions import TemplateError as TemplateError
from .exceptions import TemplateNotFound as TemplateNotFound
from .exceptions import TemplateRuntimeError as TemplateRuntimeError
from .exceptions import TemplatesNotFound as TemplatesNotFound
from .exceptions import TemplateSyntaxError as TemplateSyntaxError
from .exceptions import UndefinedError as UndefinedError
from .loaders import BaseLoader as BaseLoader
from .loaders import ChoiceLoader as ChoiceLoader
from .loaders import DictLoader as DictLoader
from .loaders import FileSystemLoader as FileSystemLoader
from .loaders import FunctionLoader as FunctionLoader
from .loaders import ModuleLoader as ModuleLoader
from .loaders import PackageLoader as PackageLoader
from .loaders import PrefixLoader as PrefixLoader
from .runtime import ChainableUndefined as ChainableUndefined
from .runtime import DebugUndefined as DebugUndefined
from .runtime import make_logging_undefined as make_logging_undefined
from .runtime import StrictUndefined as StrictUndefined
from .runtime import Undefined as Undefined
from .utils import clear_caches as clear_caches
from .utils import is_undefined as is_undefined
from .utils import pass_context as pass_context
from .utils import pass_environment as pass_environment
from .utils import pass_eval_context as pass_eval_context
from .utils import select_autoescape as select_autoescape
__version__ = "3.1.2"


@@ -1,6 +0,0 @@
import re
# generated by scripts/generate_identifier_pattern.py
pattern = re.compile(
r"[\w·̀-ͯ·҃-֑҇-ׇֽֿׁׂׅׄؐ-ًؚ-ٰٟۖ-ۜ۟-۪ۤۧۨ-ܑۭܰ-݊ަ-ް߫-߽߳ࠖ-࠙ࠛ-ࠣࠥ-ࠧࠩ-࡙࠭-࡛࣓-ࣣ࣡-ःऺ-़ा-ॏ॑-ॗॢॣঁ-ঃ়া-ৄেৈো-্ৗৢৣ৾ਁ-ਃ਼ਾ-ੂੇੈੋ-੍ੑੰੱੵઁ-ઃ઼ા-ૅે-ૉો-્ૢૣૺ-૿ଁ-ଃ଼ା-ୄେୈୋ-୍ୖୗୢୣஂா-ூெ-ைொ-்ௗఀ-ఄా-ౄె-ైొ-్ౕౖౢౣಁ-ಃ಼ಾ-ೄೆ-ೈೊ-್ೕೖೢೣഀ-ഃ഻഼ാ-ൄെ-ൈൊ-്ൗൢൣංඃ්ා-ුූෘ-ෟෲෳัิ-ฺ็-๎ັິ-ູົຼ່-ໍ༹༘༙༵༷༾༿ཱ-྄྆྇ྍ-ྗྙ-ྼ࿆ါ-ှၖ-ၙၞ-ၠၢ-ၤၧ-ၭၱ-ၴႂ-ႍႏႚ-ႝ፝-፟ᜒ-᜔ᜲ-᜴ᝒᝓᝲᝳ឴-៓៝᠋-᠍ᢅᢆᢩᤠ-ᤫᤰ-᤻ᨗ-ᨛᩕ-ᩞ᩠-᩿᩼᪰-᪽ᬀ-ᬄ᬴-᭄᭫-᭳ᮀ-ᮂᮡ-ᮭ᯦-᯳ᰤ-᰷᳐-᳔᳒-᳨᳭ᳲ-᳴᳷-᳹᷀-᷹᷻-᷿‿⁀⁔⃐-⃥⃜⃡-⃰℘℮⳯-⵿⳱ⷠ-〪ⷿ-゙゚〯꙯ꙴ-꙽ꚞꚟ꛰꛱ꠂ꠆ꠋꠣ-ꠧꢀꢁꢴ-ꣅ꣠-꣱ꣿꤦ-꤭ꥇ-꥓ꦀ-ꦃ꦳-꧀ꧥꨩ-ꨶꩃꩌꩍꩻ-ꩽꪰꪲ-ꪴꪷꪸꪾ꪿꫁ꫫ-ꫯꫵ꫶ꯣ-ꯪ꯬꯭ﬞ︀-️︠-︯︳︴﹍-﹏_𐇽𐋠𐍶-𐍺𐨁-𐨃𐨅𐨆𐨌-𐨏𐨸-𐨿𐨺𐫦𐫥𐴤-𐽆𐴧-𐽐𑀀-𑀂𑀸-𑁆𑁿-𑂂𑂰-𑂺𑄀-𑄂𑄧-𑄴𑅅𑅆𑅳𑆀-𑆂𑆳-𑇀𑇉-𑇌𑈬-𑈷𑈾𑋟-𑋪𑌀-𑌃𑌻𑌼𑌾-𑍄𑍇𑍈𑍋-𑍍𑍗𑍢𑍣𑍦-𑍬𑍰-𑍴𑐵-𑑆𑑞𑒰-𑓃𑖯-𑖵𑖸-𑗀𑗜𑗝𑘰-𑙀𑚫-𑚷𑜝-𑜫𑠬-𑠺𑨁-𑨊𑨳-𑨹𑨻-𑨾𑩇𑩑-𑩛𑪊-𑪙𑰯-𑰶𑰸-𑰿𑲒-𑲧𑲩-𑲶𑴱-𑴶𑴺𑴼𑴽𑴿-𑵅𑵇𑶊-𑶎𑶐𑶑𑶓-𑶗𑻳-𑻶𖫰-𖫴𖬰-𖬶𖽑-𖽾𖾏-𖾒𛲝𛲞𝅥-𝅩𝅭-𝅲𝅻-𝆂𝆅-𝆋𝆪-𝆭𝉂-𝉄𝨀-𝨶𝨻-𝩬𝩵𝪄𝪛-𝪟𝪡-𝪯𞀀-𞀆𞀈-𞀘𞀛-𞀡𞀣𞀤𞀦-𞣐𞀪-𞣖𞥄-𞥊󠄀-󠇯]+" # noqa: B950
)


@@ -1,84 +0,0 @@
import inspect
import typing as t
from functools import WRAPPER_ASSIGNMENTS
from functools import wraps
from .utils import _PassArg
from .utils import pass_eval_context
V = t.TypeVar("V")
def async_variant(normal_func): # type: ignore
def decorator(async_func): # type: ignore
pass_arg = _PassArg.from_obj(normal_func)
need_eval_context = pass_arg is None
if pass_arg is _PassArg.environment:
def is_async(args: t.Any) -> bool:
return t.cast(bool, args[0].is_async)
else:
def is_async(args: t.Any) -> bool:
return t.cast(bool, args[0].environment.is_async)
# Take the doc and annotations from the sync function, but the
# name from the async function. Pallets-Sphinx-Themes
# build_function_directive expects __wrapped__ to point to the
# sync function.
async_func_attrs = ("__module__", "__name__", "__qualname__")
normal_func_attrs = tuple(set(WRAPPER_ASSIGNMENTS).difference(async_func_attrs))
@wraps(normal_func, assigned=normal_func_attrs)
@wraps(async_func, assigned=async_func_attrs, updated=())
def wrapper(*args, **kwargs): # type: ignore
b = is_async(args)
if need_eval_context:
args = args[1:]
if b:
return async_func(*args, **kwargs)
return normal_func(*args, **kwargs)
if need_eval_context:
wrapper = pass_eval_context(wrapper)
wrapper.jinja_async_variant = True
return wrapper
return decorator
_common_primitives = {int, float, bool, str, list, dict, tuple, type(None)}
async def auto_await(value: t.Union[t.Awaitable["V"], "V"]) -> "V":
# Avoid a costly call to isawaitable
if type(value) in _common_primitives:
return t.cast("V", value)
if inspect.isawaitable(value):
return await t.cast("t.Awaitable[V]", value)
return t.cast("V", value)
async def auto_aiter(
iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
) -> "t.AsyncIterator[V]":
if hasattr(iterable, "__aiter__"):
async for item in t.cast("t.AsyncIterable[V]", iterable):
yield item
else:
for item in t.cast("t.Iterable[V]", iterable):
yield item
async def auto_to_list(
value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]",
) -> t.List["V"]:
return [x async for x in auto_aiter(value)]
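The `auto_aiter` adapter above lets the runtime iterate uniformly over sync and async iterables. A standalone sketch of the same logic (the helper is reproduced here so the example runs without Jinja installed):

```python
import asyncio

async def auto_aiter(iterable):
    # same adapter logic as above: accept sync *or* async iterables
    if hasattr(iterable, "__aiter__"):
        async for item in iterable:
            yield item
    else:
        for item in iterable:
            yield item

async def agen():
    yield 1
    yield 2

async def main():
    a = [x async for x in auto_aiter([1, 2, 3])]  # plain list
    b = [x async for x in auto_aiter(agen())]     # async generator
    return a, b

print(asyncio.run(main()))  # ([1, 2, 3], [1, 2])
```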


@@ -1,406 +0,0 @@
"""The optional bytecode cache system. This is useful if you have very
complex template situations and the compilation of all those templates
slows down your application too much.
Situations where this is useful are often forking web applications that
are initialized on the first request.
"""
import errno
import fnmatch
import marshal
import os
import pickle
import stat
import sys
import tempfile
import typing as t
from hashlib import sha1
from io import BytesIO
from types import CodeType
if t.TYPE_CHECKING:
import typing_extensions as te
from .environment import Environment
class _MemcachedClient(te.Protocol):
def get(self, key: str) -> bytes:
...
def set(self, key: str, value: bytes, timeout: t.Optional[int] = None) -> None:
...
bc_version = 5
# Magic bytes to identify Jinja bytecode cache files. Contains the
# Python major and minor version to avoid loading incompatible bytecode
# if a project upgrades its Python version.
bc_magic = (
b"j2"
+ pickle.dumps(bc_version, 2)
+ pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1], 2)
)
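The packing scheme in the comment above shifts the Python major version into the high byte so one integer captures both components. A quick sketch of the round trip:

```python
import pickle
import sys

bc_version = 5
# same construction as above: "j2" tag + cache-format version + python version
bc_magic = (
    b"j2"
    + pickle.dumps(bc_version, 2)
    + pickle.dumps((sys.version_info[0] << 24) | sys.version_info[1], 2)
)

# unpacking recovers the interpreter version exactly
packed = (sys.version_info[0] << 24) | sys.version_info[1]
major, minor = packed >> 24, packed & 0xFFFFFF
print(bc_magic.startswith(b"j2"), (major, minor) == sys.version_info[:2])
# True True
```

Because the magic bytes change whenever `bc_version` or the interpreter version changes, stale cache files fail the header check in `Bucket.load_bytecode` and are silently discarded.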
class Bucket:
"""Buckets are used to store the bytecode for one template. It's created
and initialized by the bytecode cache and passed to the loading functions.
The buckets get an internal checksum from the cache assigned and use this
to automatically reject outdated cache material. Individual bytecode
cache subclasses don't have to care about cache invalidation.
"""
def __init__(self, environment: "Environment", key: str, checksum: str) -> None:
self.environment = environment
self.key = key
self.checksum = checksum
self.reset()
def reset(self) -> None:
"""Resets the bucket (unloads the bytecode)."""
self.code: t.Optional[CodeType] = None
def load_bytecode(self, f: t.BinaryIO) -> None:
"""Loads bytecode from a file or file like object."""
# make sure the magic header is correct
magic = f.read(len(bc_magic))
if magic != bc_magic:
self.reset()
return
# the source code of the file changed, we need to reload
checksum = pickle.load(f)
if self.checksum != checksum:
self.reset()
return
# if marshal_load fails then we need to reload
try:
self.code = marshal.load(f)
except (EOFError, ValueError, TypeError):
self.reset()
return
def write_bytecode(self, f: t.IO[bytes]) -> None:
"""Dump the bytecode into the file or file like object passed."""
if self.code is None:
raise TypeError("can't write empty bucket")
f.write(bc_magic)
pickle.dump(self.checksum, f, 2)
marshal.dump(self.code, f)
def bytecode_from_string(self, string: bytes) -> None:
"""Load bytecode from bytes."""
self.load_bytecode(BytesIO(string))
def bytecode_to_string(self) -> bytes:
"""Return the bytecode as bytes."""
out = BytesIO()
self.write_bytecode(out)
return out.getvalue()
class BytecodeCache:
"""To implement your own bytecode cache you have to subclass this class
and override :meth:`load_bytecode` and :meth:`dump_bytecode`. Both of
these methods are passed a :class:`~jinja2.bccache.Bucket`.
A very basic bytecode cache that saves the bytecode on the file system::
from os import path
class MyCache(BytecodeCache):
def __init__(self, directory):
self.directory = directory
def load_bytecode(self, bucket):
filename = path.join(self.directory, bucket.key)
if path.exists(filename):
with open(filename, 'rb') as f:
bucket.load_bytecode(f)
def dump_bytecode(self, bucket):
filename = path.join(self.directory, bucket.key)
with open(filename, 'wb') as f:
bucket.write_bytecode(f)
A more advanced version of a filesystem based bytecode cache is part of
Jinja.
"""
def load_bytecode(self, bucket: Bucket) -> None:
"""Subclasses have to override this method to load bytecode into a
bucket. If they are not able to find code in the cache for the
bucket, it must not do anything.
"""
raise NotImplementedError()
def dump_bytecode(self, bucket: Bucket) -> None:
"""Subclasses have to override this method to write the bytecode
from a bucket back to the cache. If it unable to do so it must not
fail silently but raise an exception.
"""
raise NotImplementedError()
def clear(self) -> None:
"""Clears the cache. This method is not used by Jinja but should be
implemented to allow applications to clear the bytecode cache used
by a particular environment.
"""
def get_cache_key(
self, name: str, filename: t.Optional[t.Union[str]] = None
) -> str:
"""Returns the unique hash key for this template name."""
hash = sha1(name.encode("utf-8"))
if filename is not None:
hash.update(f"|{filename}".encode())
return hash.hexdigest()
def get_source_checksum(self, source: str) -> str:
"""Returns a checksum for the source."""
return sha1(source.encode("utf-8")).hexdigest()
def get_bucket(
self,
environment: "Environment",
name: str,
filename: t.Optional[str],
source: str,
) -> Bucket:
"""Return a cache bucket for the given template. All arguments are
mandatory but filename may be `None`.
"""
key = self.get_cache_key(name, filename)
checksum = self.get_source_checksum(source)
bucket = Bucket(environment, key, checksum)
self.load_bytecode(bucket)
return bucket
def set_bucket(self, bucket: Bucket) -> None:
"""Put the bucket into the cache."""
self.dump_bytecode(bucket)
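The `get_cache_key` derivation above hashes the template name, mixing in the filename when one is given, so the same template name loaded from different locations gets distinct cache entries. A standalone sketch of that derivation:

```python
from hashlib import sha1

def cache_key(name, filename=None):
    # mirrors BytecodeCache.get_cache_key above
    h = sha1(name.encode("utf-8"))
    if filename is not None:
        h.update(f"|{filename}".encode())
    return h.hexdigest()

# same name, different filesystem locations -> different keys
print(cache_key("index.html"))
print(cache_key("index.html", "/templates/index.html"))
```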
class FileSystemBytecodeCache(BytecodeCache):
"""A bytecode cache that stores bytecode on the filesystem. It accepts
two arguments: The directory where the cache items are stored and a
pattern string that is used to build the filename.
If no directory is specified a default cache directory is selected. On
Windows the user's temp directory is used, on UNIX systems a directory
is created for the user in the system temp directory.
The pattern can be used to have multiple separate caches operate on the
same directory. The default pattern is ``'__jinja2_%s.cache'``. ``%s``
is replaced with the cache key.
>>> bcc = FileSystemBytecodeCache('/tmp/jinja_cache', '%s.cache')
This bytecode cache supports clearing of the cache using the clear method.
"""
def __init__(
self, directory: t.Optional[str] = None, pattern: str = "__jinja2_%s.cache"
) -> None:
if directory is None:
directory = self._get_default_cache_dir()
self.directory = directory
self.pattern = pattern
def _get_default_cache_dir(self) -> str:
def _unsafe_dir() -> "te.NoReturn":
raise RuntimeError(
"Cannot determine safe temp directory. You "
"need to explicitly provide one."
)
tmpdir = tempfile.gettempdir()
# On windows the temporary directory is used specific unless
# explicitly forced otherwise. We can just use that.
if os.name == "nt":
return tmpdir
if not hasattr(os, "getuid"):
_unsafe_dir()
dirname = f"_jinja2-cache-{os.getuid()}"
actual_dir = os.path.join(tmpdir, dirname)
try:
os.mkdir(actual_dir, stat.S_IRWXU)
except OSError as e:
if e.errno != errno.EEXIST:
raise
try:
os.chmod(actual_dir, stat.S_IRWXU)
actual_dir_stat = os.lstat(actual_dir)
if (
actual_dir_stat.st_uid != os.getuid()
or not stat.S_ISDIR(actual_dir_stat.st_mode)
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
):
_unsafe_dir()
except OSError as e:
if e.errno != errno.EEXIST:
raise
actual_dir_stat = os.lstat(actual_dir)
if (
actual_dir_stat.st_uid != os.getuid()
or not stat.S_ISDIR(actual_dir_stat.st_mode)
or stat.S_IMODE(actual_dir_stat.st_mode) != stat.S_IRWXU
):
_unsafe_dir()
return actual_dir
def _get_cache_filename(self, bucket: Bucket) -> str:
return os.path.join(self.directory, self.pattern % (bucket.key,))
def load_bytecode(self, bucket: Bucket) -> None:
filename = self._get_cache_filename(bucket)
# Don't test for existence before opening the file, since the
# file could disappear after the test before the open.
try:
f = open(filename, "rb")
except (FileNotFoundError, IsADirectoryError, PermissionError):
# PermissionError can occur on Windows when an operation is
# in progress, such as calling clear().
return
with f:
bucket.load_bytecode(f)
def dump_bytecode(self, bucket: Bucket) -> None:
# Write to a temporary file, then rename to the real name after
# writing. This avoids another process reading the file before
# it is fully written.
name = self._get_cache_filename(bucket)
f = tempfile.NamedTemporaryFile(
mode="wb",
dir=os.path.dirname(name),
prefix=os.path.basename(name),
suffix=".tmp",
delete=False,
)
def remove_silent() -> None:
try:
os.remove(f.name)
except OSError:
# Another process may have called clear(). On Windows,
# another program may be holding the file open.
pass
try:
with f:
bucket.write_bytecode(f)
except BaseException:
remove_silent()
raise
try:
os.replace(f.name, name)
except OSError:
# Another process may have called clear(). On Windows,
# another program may be holding the file open.
remove_silent()
except BaseException:
remove_silent()
raise
def clear(self) -> None:
# imported lazily here because google app-engine doesn't support
# write access on the file system and the function does not exist
# normally.
from os import remove
files = fnmatch.filter(os.listdir(self.directory), self.pattern % ("*",))
for filename in files:
try:
remove(os.path.join(self.directory, filename))
except OSError:
pass
class MemcachedBytecodeCache(BytecodeCache):
"""This class implements a bytecode cache that uses a memcache cache for
storing the information. It does not enforce a specific memcache library
(tummy's memcache or cmemcache) but will accept any class that provides
the minimal interface required.
Libraries compatible with this class:
- `cachelib <https://github.com/pallets/cachelib>`_
- `python-memcached <https://pypi.org/project/python-memcached/>`_
(Unfortunately the django cache interface is not compatible because it
does not support storing binary data, only text. You can however pass
the underlying cache client to the bytecode cache which is available
as `django.core.cache.cache._client`.)
The minimal interface for the client passed to the constructor is this:
.. class:: MinimalClientInterface
.. method:: set(key, value[, timeout])
Stores the bytecode in the cache. `value` is a string and
`timeout` the timeout of the key. If timeout is not provided
a default timeout or no timeout should be assumed, if it's
provided it's an integer with the number of seconds the cache
item should exist.
.. method:: get(key)
Returns the value for the cache key. If the item does not
exist in the cache the return value must be `None`.
The other arguments to the constructor are the prefix for all keys that
is added before the actual cache key and the timeout for the bytecode in
the cache system. We recommend a high (or no) timeout.
This bytecode cache does not support clearing of used items in the cache.
The clear method is a no-operation function.
.. versionadded:: 2.7
Added support for ignoring memcache errors through the
`ignore_memcache_errors` parameter.
"""
def __init__(
self,
client: "_MemcachedClient",
prefix: str = "jinja2/bytecode/",
timeout: t.Optional[int] = None,
ignore_memcache_errors: bool = True,
):
self.client = client
self.prefix = prefix
self.timeout = timeout
self.ignore_memcache_errors = ignore_memcache_errors
def load_bytecode(self, bucket: Bucket) -> None:
try:
code = self.client.get(self.prefix + bucket.key)
except Exception:
if not self.ignore_memcache_errors:
raise
else:
bucket.bytecode_from_string(code)
def dump_bytecode(self, bucket: Bucket) -> None:
key = self.prefix + bucket.key
value = bucket.bytecode_to_string()
try:
if self.timeout is not None:
self.client.set(key, value, self.timeout)
else:
self.client.set(key, value)
except Exception:
if not self.ignore_memcache_errors:
raise
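The docstring above only requires `get` and `set`; any object with that shape satisfies the protocol. A minimal in-memory stand-in (a sketch for illustration, not a real memcached client):

```python
import typing as t

class DictMemcacheClient:
    """In-memory stand-in satisfying the minimal client interface the
    docstring above requires: get(key) and set(key, value[, timeout])."""

    def __init__(self) -> None:
        self._store: t.Dict[str, bytes] = {}

    def get(self, key: str) -> t.Optional[bytes]:
        # must return None when the key is absent
        return self._store.get(key)

    def set(self, key: str, value: bytes, timeout: t.Optional[int] = None) -> None:
        # timeout is accepted but ignored in this sketch
        self._store[key] = value

client = DictMemcacheClient()
client.set("jinja2/bytecode/abc", b"\x00fake-bytecode")
print(client.get("jinja2/bytecode/abc"))
print(client.get("missing"))  # None
```

An instance like this could be passed straight to `MemcachedBytecodeCache(client)`, which only ever calls these two methods.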

File diff suppressed because it is too large.


@@ -1,20 +0,0 @@
#: list of lorem ipsum words used by the lipsum() helper function
LOREM_IPSUM_WORDS = """\
a ac accumsan ad adipiscing aenean aliquam aliquet amet ante aptent arcu at
auctor augue bibendum blandit class commodo condimentum congue consectetuer
consequat conubia convallis cras cubilia cum curabitur curae cursus dapibus
diam dictum dictumst dignissim dis dolor donec dui duis egestas eget eleifend
elementum elit enim erat eros est et etiam eu euismod facilisi facilisis fames
faucibus felis fermentum feugiat fringilla fusce gravida habitant habitasse hac
hendrerit hymenaeos iaculis id imperdiet in inceptos integer interdum ipsum
justo lacinia lacus laoreet lectus leo libero ligula litora lobortis lorem
luctus maecenas magna magnis malesuada massa mattis mauris metus mi molestie
mollis montes morbi mus nam nascetur natoque nec neque netus nibh nisi nisl non
nonummy nostra nulla nullam nunc odio orci ornare parturient pede pellentesque
penatibus per pharetra phasellus placerat platea porta porttitor posuere
potenti praesent pretium primis proin pulvinar purus quam quis quisque rhoncus
ridiculus risus rutrum sagittis sapien scelerisque sed sem semper senectus sit
sociis sociosqu sodales sollicitudin suscipit suspendisse taciti tellus tempor
tempus tincidunt torquent tortor tristique turpis ullamcorper ultrices
ultricies urna ut varius vehicula vel velit venenatis vestibulum vitae vivamus
viverra volutpat vulputate"""


@@ -1,191 +0,0 @@
import sys
import typing as t
from types import CodeType
from types import TracebackType
from .exceptions import TemplateSyntaxError
from .utils import internal_code
from .utils import missing
if t.TYPE_CHECKING:
from .runtime import Context
def rewrite_traceback_stack(source: t.Optional[str] = None) -> BaseException:
"""Rewrite the current exception to replace any tracebacks from
within compiled template code with tracebacks that look like they
came from the template source.
This must be called within an ``except`` block.
:param source: For ``TemplateSyntaxError``, the original source if
known.
:return: The original exception with the rewritten traceback.
"""
_, exc_value, tb = sys.exc_info()
exc_value = t.cast(BaseException, exc_value)
tb = t.cast(TracebackType, tb)
if isinstance(exc_value, TemplateSyntaxError) and not exc_value.translated:
exc_value.translated = True
exc_value.source = source
# Remove the old traceback, otherwise the frames from the
# compiler still show up.
exc_value.with_traceback(None)
# Outside of runtime, so the frame isn't executing template
# code, but it still needs to point at the template.
tb = fake_traceback(
exc_value, None, exc_value.filename or "<unknown>", exc_value.lineno
)
else:
# Skip the frame for the render function.
tb = tb.tb_next
stack = []
# Build the stack of traceback object, replacing any in template
# code with the source file and line information.
while tb is not None:
# Skip frames decorated with @internalcode. These are internal
# calls that aren't useful in template debugging output.
if tb.tb_frame.f_code in internal_code:
tb = tb.tb_next
continue
template = tb.tb_frame.f_globals.get("__jinja_template__")
if template is not None:
lineno = template.get_corresponding_lineno(tb.tb_lineno)
fake_tb = fake_traceback(exc_value, tb, template.filename, lineno)
stack.append(fake_tb)
else:
stack.append(tb)
tb = tb.tb_next
tb_next = None
# Assign tb_next in reverse to avoid circular references.
for tb in reversed(stack):
tb.tb_next = tb_next
tb_next = tb
return exc_value.with_traceback(tb_next)
def fake_traceback( # type: ignore
exc_value: BaseException, tb: t.Optional[TracebackType], filename: str, lineno: int
) -> TracebackType:
"""Produce a new traceback object that looks like it came from the
template source instead of the compiled code. The filename, line
number, and location name will point to the template, and the local
variables will be the current template context.
:param exc_value: The original exception to be re-raised to create
the new traceback.
:param tb: The original traceback to get the local variables and
code info from.
:param filename: The template filename.
:param lineno: The line number in the template source.
"""
if tb is not None:
# Replace the real locals with the context that would be
# available at that point in the template.
locals = get_template_locals(tb.tb_frame.f_locals)
locals.pop("__jinja_exception__", None)
else:
locals = {}
globals = {
"__name__": filename,
"__file__": filename,
"__jinja_exception__": exc_value,
}
# Raise an exception at the correct line number.
code: CodeType = compile(
"\n" * (lineno - 1) + "raise __jinja_exception__", filename, "exec"
)
# Build a new code object that points to the template file and
# replaces the location with a block name.
location = "template"
if tb is not None:
function = tb.tb_frame.f_code.co_name
if function == "root":
location = "top-level template code"
elif function.startswith("block_"):
location = f"block {function[6:]!r}"
if sys.version_info >= (3, 8):
code = code.replace(co_name=location)
else:
code = CodeType(
code.co_argcount,
code.co_kwonlyargcount,
code.co_nlocals,
code.co_stacksize,
code.co_flags,
code.co_code,
code.co_consts,
code.co_names,
code.co_varnames,
code.co_filename,
location,
code.co_firstlineno,
code.co_lnotab,
code.co_freevars,
code.co_cellvars,
)
# Execute the new code, which is guaranteed to raise, and return
# the new traceback without this frame.
try:
exec(code, globals, locals)
except BaseException:
return sys.exc_info()[2].tb_next # type: ignore
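The `compile` call in `fake_traceback` above relies on newline padding to make the synthetic `raise` land on the template's line number. A standalone sketch of that trick:

```python
import sys

# prepend newlines so the synthetic raise sits on line 5 of "template.html"
code = compile("\n" * 4 + "raise ValueError('boom')", "template.html", "exec")
try:
    exec(code, {})
except ValueError:
    tb = sys.exc_info()[2].tb_next  # skip the frame of the exec call itself
print(tb.tb_lineno, tb.tb_frame.f_code.co_filename)
```

The resulting traceback frame reports line 5 of `template.html`, even though no such file exists; this is exactly how the rewritten traceback points at template source instead of compiled code.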
def get_template_locals(real_locals: t.Mapping[str, t.Any]) -> t.Dict[str, t.Any]:
"""Based on the runtime locals, get the context that would be
available at that point in the template.
"""
# Start with the current template context.
ctx: "t.Optional[Context]" = real_locals.get("context")
if ctx is not None:
data: t.Dict[str, t.Any] = ctx.get_all().copy()
else:
data = {}
# Might be in a derived context that only sets local variables
# rather than pushing a context. Local variables follow the scheme
# l_depth_name. Find the highest-depth local that has a value for
# each name.
local_overrides: t.Dict[str, t.Tuple[int, t.Any]] = {}
for name, value in real_locals.items():
if not name.startswith("l_") or value is missing:
# Not a template variable, or no longer relevant.
continue
try:
_, depth_str, name = name.split("_", 2)
depth = int(depth_str)
except ValueError:
continue
cur_depth = local_overrides.get(name, (-1,))[0]
if cur_depth < depth:
local_overrides[name] = (depth, value)
# Modify the context with any derived context.
for name, (_, value) in local_overrides.items():
if value is missing:
data.pop(name, None)
else:
data[name] = value
return data
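The `l_<depth>_<name>` resolution above can be exercised in isolation. A minimal standalone sketch (variable names hypothetical; `missing` replaced by a local sentinel standing in for `jinja2.runtime.missing`, and without the final `missing`-removal pass of the real function):

```python
# Sketch of get_template_locals' depth resolution, assuming a stand-in
# sentinel for jinja2.runtime's `missing` marker.
missing = object()

def resolve_template_locals(real_locals):
    """Pick the highest-depth value for each l_<depth>_<name> local."""
    best = {}
    for name, value in real_locals.items():
        if not name.startswith("l_") or value is missing:
            continue
        try:
            _, depth_str, name = name.split("_", 2)
            depth = int(depth_str)
        except ValueError:
            continue
        if best.get(name, (-1,))[0] < depth:
            best[name] = (depth, value)
    return {name: value for name, (_, value) in best.items()}

print(resolve_template_locals({"l_0_x": 1, "l_1_x": 2, "l_0_y": 3, "other": 9}))
# {'x': 2, 'y': 3}
```

The deeper `l_1_x` wins over `l_0_x`, and non-template locals are ignored.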


@@ -1,48 +0,0 @@
import typing as t
from .filters import FILTERS as DEFAULT_FILTERS # noqa: F401
from .tests import TESTS as DEFAULT_TESTS # noqa: F401
from .utils import Cycler
from .utils import generate_lorem_ipsum
from .utils import Joiner
from .utils import Namespace
if t.TYPE_CHECKING:
import typing_extensions as te
# defaults for the parser / lexer
BLOCK_START_STRING = "{%"
BLOCK_END_STRING = "%}"
VARIABLE_START_STRING = "{{"
VARIABLE_END_STRING = "}}"
COMMENT_START_STRING = "{#"
COMMENT_END_STRING = "#}"
LINE_STATEMENT_PREFIX: t.Optional[str] = None
LINE_COMMENT_PREFIX: t.Optional[str] = None
TRIM_BLOCKS = False
LSTRIP_BLOCKS = False
NEWLINE_SEQUENCE: "te.Literal['\\n', '\\r\\n', '\\r']" = "\n"
KEEP_TRAILING_NEWLINE = False
# default filters, tests and namespace
DEFAULT_NAMESPACE = {
"range": range,
"dict": dict,
"lipsum": generate_lorem_ipsum,
"cycler": Cycler,
"joiner": Joiner,
"namespace": Namespace,
}
# default policies
DEFAULT_POLICIES: t.Dict[str, t.Any] = {
"compiler.ascii_str": True,
"urlize.rel": "noopener",
"urlize.target": None,
"urlize.extra_schemes": None,
"truncate.leeway": 5,
"json.dumps_function": None,
"json.dumps_kwargs": {"sort_keys": True},
"ext.i18n.trimmed": False,
}
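The `json.dumps_kwargs` policy above defaults to `{"sort_keys": True}` so that JSON emitted into templates is deterministic regardless of dict insertion order. A quick stdlib illustration of that effect:

```python
import json

# The default policy passes sort_keys=True to json.dumps, making key
# order (and therefore template output) independent of insertion order.
policy_kwargs = {"sort_keys": True}
a = json.dumps({"b": 1, "a": 2}, **policy_kwargs)
b = json.dumps({"a": 2, "b": 1}, **policy_kwargs)
print(a)  # {"a": 2, "b": 1}
assert a == b
```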

File diff suppressed because it is too large.


@@ -1,166 +0,0 @@
import typing as t
if t.TYPE_CHECKING:
from .runtime import Undefined
class TemplateError(Exception):
"""Baseclass for all template errors."""
def __init__(self, message: t.Optional[str] = None) -> None:
super().__init__(message)
@property
def message(self) -> t.Optional[str]:
return self.args[0] if self.args else None
class TemplateNotFound(IOError, LookupError, TemplateError):
"""Raised if a template does not exist.
.. versionchanged:: 2.11
If the given name is :class:`Undefined` and no message was
provided, an :exc:`UndefinedError` is raised.
"""
# Silence the Python warning about message being deprecated since
# it's not valid here.
message: t.Optional[str] = None
def __init__(
self,
name: t.Optional[t.Union[str, "Undefined"]],
message: t.Optional[str] = None,
) -> None:
IOError.__init__(self, name)
if message is None:
from .runtime import Undefined
if isinstance(name, Undefined):
name._fail_with_undefined_error()
message = name
self.message = message
self.name = name
self.templates = [name]
def __str__(self) -> str:
return str(self.message)
class TemplatesNotFound(TemplateNotFound):
"""Like :class:`TemplateNotFound` but raised if multiple templates
are selected. This is a subclass of :class:`TemplateNotFound`
exception, so just catching the base exception will catch both.
.. versionchanged:: 2.11
If a name in the list of names is :class:`Undefined`, a message
about it being undefined is shown rather than the empty string.
.. versionadded:: 2.2
"""
def __init__(
self,
names: t.Sequence[t.Union[str, "Undefined"]] = (),
message: t.Optional[str] = None,
) -> None:
if message is None:
from .runtime import Undefined
parts = []
for name in names:
if isinstance(name, Undefined):
parts.append(name._undefined_message)
else:
parts.append(name)
parts_str = ", ".join(map(str, parts))
message = f"none of the templates given were found: {parts_str}"
super().__init__(names[-1] if names else None, message)
self.templates = list(names)
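Because `TemplatesNotFound` subclasses `TemplateNotFound` (which itself mixes in `IOError` and `LookupError`), a single `except TemplateNotFound` handler covers both cases, as the docstring above notes. A minimal standalone replica of that hierarchy:

```python
# Standalone replica of the catch behavior; class bodies are elided,
# only the inheritance relationships matter here.
class TemplateError(Exception): pass
class TemplateNotFound(IOError, LookupError, TemplateError): pass
class TemplatesNotFound(TemplateNotFound): pass

caught = []
for exc in (TemplateNotFound("a.html"), TemplatesNotFound("a.html")):
    try:
        raise exc
    except TemplateNotFound as e:
        caught.append(type(e).__name__)
print(caught)  # ['TemplateNotFound', 'TemplatesNotFound']
```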
class TemplateSyntaxError(TemplateError):
"""Raised to tell the user that there is a problem with the template."""
def __init__(
self,
message: str,
lineno: int,
name: t.Optional[str] = None,
filename: t.Optional[str] = None,
) -> None:
super().__init__(message)
self.lineno = lineno
self.name = name
self.filename = filename
self.source: t.Optional[str] = None
# this is set to True if the debug.translate_syntax_error
# function translated the syntax error into a new traceback
self.translated = False
def __str__(self) -> str:
# for translated errors we only return the message
if self.translated:
return t.cast(str, self.message)
# otherwise attach some stuff
location = f"line {self.lineno}"
name = self.filename or self.name
if name:
location = f'File "{name}", {location}'
lines = [t.cast(str, self.message), " " + location]
# if the source is set, add the line to the output
if self.source is not None:
try:
line = self.source.splitlines()[self.lineno - 1]
except IndexError:
pass
else:
lines.append(" " + line.strip())
return "\n".join(lines)
def __reduce__(self): # type: ignore
# https://bugs.python.org/issue1692335 Exceptions that take
# multiple required arguments have problems with pickling.
# Without this, raises TypeError: __init__() missing 1 required
# positional argument: 'lineno'
return self.__class__, (self.message, self.lineno, self.name, self.filename)
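The `__reduce__` override above works around bpo-1692335: by default, pickle reconstructs an exception by calling its class with `self.args`, which fails when `__init__` takes additional required positional arguments. A minimal reproduction with a hypothetical `TwoArgError` mirroring the `(message, lineno)` signature:

```python
import pickle

# Hypothetical exception with a second required argument, mirroring
# TemplateSyntaxError's (message, lineno) signature.
class TwoArgError(Exception):
    def __init__(self, message, lineno):
        super().__init__(message)
        self.lineno = lineno

    def __reduce__(self):
        # Re-supply both constructor arguments, as the original does;
        # without this, unpickling raises TypeError (missing 'lineno').
        return self.__class__, (self.args[0], self.lineno)

err = pickle.loads(pickle.dumps(TwoArgError("bad tag", 7)))
print(err.args[0], err.lineno)  # bad tag 7
```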
class TemplateAssertionError(TemplateSyntaxError):
"""Like a template syntax error, but covers cases where something in the
template caused an error at compile time that wasn't necessarily caused
by a syntax error. However, it's a direct subclass of
:exc:`TemplateSyntaxError` and has the same attributes.
"""
class TemplateRuntimeError(TemplateError):
"""A generic runtime error in the template engine. Under some situations
Jinja may raise this exception.
"""
class UndefinedError(TemplateRuntimeError):
"""Raised if a template tries to operate on :class:`Undefined`."""
class SecurityError(TemplateRuntimeError):
"""Raised if a template tries to do something insecure if the
sandbox is enabled.
"""
class FilterArgumentError(TemplateRuntimeError):
"""This error is raised if a filter was called with inappropriate
arguments.
"""


@@ -1,859 +0,0 @@
"""Extension API for adding custom tags and behavior."""
import pprint
import re
import typing as t
from markupsafe import Markup
from . import defaults
from . import nodes
from .environment import Environment
from .exceptions import TemplateAssertionError
from .exceptions import TemplateSyntaxError
from .runtime import concat # type: ignore
from .runtime import Context
from .runtime import Undefined
from .utils import import_string
from .utils import pass_context
if t.TYPE_CHECKING:
import typing_extensions as te
from .lexer import Token
from .lexer import TokenStream
from .parser import Parser
class _TranslationsBasic(te.Protocol):
def gettext(self, message: str) -> str:
...
def ngettext(self, singular: str, plural: str, n: int) -> str:
...
class _TranslationsContext(_TranslationsBasic):
def pgettext(self, context: str, message: str) -> str:
...
def npgettext(self, context: str, singular: str, plural: str, n: int) -> str:
...
_SupportedTranslations = t.Union[_TranslationsBasic, _TranslationsContext]
# I18N functions available in Jinja templates. If the I18N library
# provides ugettext, it will be assigned to gettext.
GETTEXT_FUNCTIONS: t.Tuple[str, ...] = (
"_",
"gettext",
"ngettext",
"pgettext",
"npgettext",
)
_ws_re = re.compile(r"\s*\n\s*")
class Extension:
"""Extensions can be used to add extra functionality to the Jinja template
system at the parser level. Custom extensions are bound to an environment
but may not store environment specific data on `self`. The reason for
this is that an extension can be bound to another environment (for
overlays) by creating a copy and reassigning the `environment` attribute.
As extensions are created by the environment they cannot accept any
arguments for configuration. One may want to work around that by using
a factory function, but that is not possible as extensions are identified
by their import name. The correct way to configure the extension is
storing the configuration values on the environment. Because the
environment then acts as central configuration storage, attribute names
may clash, so extensions have to ensure that the names they choose for
configuration are not too generic. ``prefix``, for example, is a terrible
name; ``fragment_cache_prefix``, on the other hand, is a good name as it
includes the name of the extension (fragment cache).
"""
identifier: t.ClassVar[str]
def __init_subclass__(cls) -> None:
cls.identifier = f"{cls.__module__}.{cls.__name__}"
#: if this extension parses, this is the list of tags it listens for.
tags: t.Set[str] = set()
#: the priority of that extension. This is especially useful for
#: extensions that preprocess values. A lower value means higher
#: priority.
#:
#: .. versionadded:: 2.4
priority = 100
def __init__(self, environment: Environment) -> None:
self.environment = environment
def bind(self, environment: Environment) -> "Extension":
"""Create a copy of this extension bound to another environment."""
rv = object.__new__(self.__class__)
rv.__dict__.update(self.__dict__)
rv.environment = environment
return rv
def preprocess(
self, source: str, name: t.Optional[str], filename: t.Optional[str] = None
) -> str:
"""This method is called before the actual lexing and can be used to
preprocess the source. The `filename` is optional. The return value
must be the preprocessed source.
"""
return source
def filter_stream(
self, stream: "TokenStream"
) -> t.Union["TokenStream", t.Iterable["Token"]]:
"""It's passed a :class:`~jinja2.lexer.TokenStream` that can be used
to filter tokens returned. This method has to return an iterable of
:class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
:class:`~jinja2.lexer.TokenStream`.
"""
return stream
def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]:
"""If any of the :attr:`tags` matched this method is called with the
parser as first argument. The token the parser stream is pointing at
is the name token that matched. This method has to return one node or
a list of nodes.
"""
raise NotImplementedError()
def attr(
self, name: str, lineno: t.Optional[int] = None
) -> nodes.ExtensionAttribute:
"""Return an attribute node for the current extension. This is useful
to pass constants on extensions to generated template code.
::
self.attr('_my_attribute', lineno=lineno)
"""
return nodes.ExtensionAttribute(self.identifier, name, lineno=lineno)
def call_method(
self,
name: str,
args: t.Optional[t.List[nodes.Expr]] = None,
kwargs: t.Optional[t.List[nodes.Keyword]] = None,
dyn_args: t.Optional[nodes.Expr] = None,
dyn_kwargs: t.Optional[nodes.Expr] = None,
lineno: t.Optional[int] = None,
) -> nodes.Call:
"""Call a method of the extension. This is a shortcut for
:meth:`attr` + :class:`jinja2.nodes.Call`.
"""
if args is None:
args = []
if kwargs is None:
kwargs = []
return nodes.Call(
self.attr(name, lineno=lineno),
args,
kwargs,
dyn_args,
dyn_kwargs,
lineno=lineno,
)
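The `bind` pattern above clones an extension without re-running `__init__`, then points the clone at a different environment, which is how overlays rebind extensions. A standalone sketch of that idiom with hypothetical classes:

```python
# Sketch of the bind() idiom: allocate without __init__, copy state,
# then point the clone at a different environment.
class Env:
    def __init__(self, name):
        self.name = name

class Ext:
    def __init__(self, environment):
        self.environment = environment
        self.counter = 42  # some per-extension state worth preserving

    def bind(self, environment):
        rv = object.__new__(self.__class__)  # skip __init__
        rv.__dict__.update(self.__dict__)    # shallow-copy all state
        rv.environment = environment         # rebind only the environment
        return rv

ext = Ext(Env("base"))
clone = ext.bind(Env("overlay"))
print(clone.counter, clone.environment.name)  # 42 overlay
print(ext.environment.name)                   # base (original untouched)
```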
@pass_context
def _gettext_alias(
__context: Context, *args: t.Any, **kwargs: t.Any
) -> t.Union[t.Any, Undefined]:
return __context.call(__context.resolve("gettext"), *args, **kwargs)
def _make_new_gettext(func: t.Callable[[str], str]) -> t.Callable[..., str]:
@pass_context
def gettext(__context: Context, __string: str, **variables: t.Any) -> str:
rv = __context.call(func, __string)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, even if there are no
# variables. This makes translation strings more consistent
# and predictable. This requires escaping literal percent signs as %%.
return rv % variables # type: ignore
return gettext
def _make_new_ngettext(func: t.Callable[[str, str, int], str]) -> t.Callable[..., str]:
@pass_context
def ngettext(
__context: Context,
__singular: str,
__plural: str,
__num: int,
**variables: t.Any,
) -> str:
variables.setdefault("num", __num)
rv = __context.call(func, __singular, __plural, __num)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, see gettext comment above.
return rv % variables # type: ignore
return ngettext
def _make_new_pgettext(func: t.Callable[[str, str], str]) -> t.Callable[..., str]:
@pass_context
def pgettext(
__context: Context, __string_ctx: str, __string: str, **variables: t.Any
) -> str:
variables.setdefault("context", __string_ctx)
rv = __context.call(func, __string_ctx, __string)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, see gettext comment above.
return rv % variables # type: ignore
return pgettext
def _make_new_npgettext(
func: t.Callable[[str, str, str, int], str]
) -> t.Callable[..., str]:
@pass_context
def npgettext(
__context: Context,
__string_ctx: str,
__singular: str,
__plural: str,
__num: int,
**variables: t.Any,
) -> str:
variables.setdefault("context", __string_ctx)
variables.setdefault("num", __num)
rv = __context.call(func, __string_ctx, __singular, __plural, __num)
if __context.eval_ctx.autoescape:
rv = Markup(rv)
# Always treat as a format string, see gettext comment above.
return rv % variables # type: ignore
return npgettext
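All four newstyle wrappers above funnel the translated string through `rv % variables`, i.e. old-style named %-formatting with the call's keyword arguments as the mapping. The mechanics in isolation:

```python
# The newstyle gettext wrappers treat every translation as a %-format
# string with named placeholders filled from keyword arguments.
rv = "%(num)d apple(s) for %(name)s"
variables = {"num": 3, "name": "Alice"}
print(rv % variables)  # 3 apple(s) for Alice

# A string with no placeholders still goes through the same path,
# which is why literal percent signs must arrive escaped as %%.
print("100%% done" % {})  # 100% done
```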
class InternationalizationExtension(Extension):
"""This extension adds gettext support to Jinja."""
tags = {"trans"}
# TODO: the i18n extension is currently reevaluating values in a few
# situations. Take this example:
# {% trans count=something() %}{{ count }} foo{% pluralize
# %}{{ count }} fooss{% endtrans %}
# something is called twice here. One time for the gettext value and
# the other time for the n-parameter of the ngettext function.
def __init__(self, environment: Environment) -> None:
super().__init__(environment)
environment.globals["_"] = _gettext_alias
environment.extend(
install_gettext_translations=self._install,
install_null_translations=self._install_null,
install_gettext_callables=self._install_callables,
uninstall_gettext_translations=self._uninstall,
extract_translations=self._extract,
newstyle_gettext=False,
)
def _install(
self, translations: "_SupportedTranslations", newstyle: t.Optional[bool] = None
) -> None:
# ugettext and ungettext are preferred in case the I18N library
# is providing compatibility with older Python versions.
gettext = getattr(translations, "ugettext", None)
if gettext is None:
gettext = translations.gettext
ngettext = getattr(translations, "ungettext", None)
if ngettext is None:
ngettext = translations.ngettext
pgettext = getattr(translations, "pgettext", None)
npgettext = getattr(translations, "npgettext", None)
self._install_callables(
gettext, ngettext, newstyle=newstyle, pgettext=pgettext, npgettext=npgettext
)
def _install_null(self, newstyle: t.Optional[bool] = None) -> None:
import gettext
translations = gettext.NullTranslations()
if hasattr(translations, "pgettext"):
# Python >= 3.8 provides pgettext on NullTranslations
pgettext = translations.pgettext # type: ignore
else:
def pgettext(c: str, s: str) -> str:
return s
if hasattr(translations, "npgettext"):
npgettext = translations.npgettext # type: ignore
else:
def npgettext(c: str, s: str, p: str, n: int) -> str:
return s if n == 1 else p
self._install_callables(
gettext=translations.gettext,
ngettext=translations.ngettext,
newstyle=newstyle,
pgettext=pgettext,
npgettext=npgettext,
)
def _install_callables(
self,
gettext: t.Callable[[str], str],
ngettext: t.Callable[[str, str, int], str],
newstyle: t.Optional[bool] = None,
pgettext: t.Optional[t.Callable[[str, str], str]] = None,
npgettext: t.Optional[t.Callable[[str, str, str, int], str]] = None,
) -> None:
if newstyle is not None:
self.environment.newstyle_gettext = newstyle # type: ignore
if self.environment.newstyle_gettext: # type: ignore
gettext = _make_new_gettext(gettext)
ngettext = _make_new_ngettext(ngettext)
if pgettext is not None:
pgettext = _make_new_pgettext(pgettext)
if npgettext is not None:
npgettext = _make_new_npgettext(npgettext)
self.environment.globals.update(
gettext=gettext, ngettext=ngettext, pgettext=pgettext, npgettext=npgettext
)
def _uninstall(self, translations: "_SupportedTranslations") -> None:
for key in ("gettext", "ngettext", "pgettext", "npgettext"):
self.environment.globals.pop(key, None)
def _extract(
self,
source: t.Union[str, nodes.Template],
gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS,
) -> t.Iterator[
t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]]
]:
if isinstance(source, str):
source = self.environment.parse(source)
return extract_from_ast(source, gettext_functions)
def parse(self, parser: "Parser") -> t.Union[nodes.Node, t.List[nodes.Node]]:
"""Parse a translatable tag."""
lineno = next(parser.stream).lineno
context = None
context_token = parser.stream.next_if("string")
if context_token is not None:
context = context_token.value
# find all the variables referenced. Additionally a variable can be
# defined in the body of the trans block too, but this is checked at
# a later state.
plural_expr: t.Optional[nodes.Expr] = None
plural_expr_assignment: t.Optional[nodes.Assign] = None
num_called_num = False
variables: t.Dict[str, nodes.Expr] = {}
trimmed = None
while parser.stream.current.type != "block_end":
if variables:
parser.stream.expect("comma")
# skip colon for python compatibility
if parser.stream.skip_if("colon"):
break
token = parser.stream.expect("name")
if token.value in variables:
parser.fail(
f"translatable variable {token.value!r} defined twice.",
token.lineno,
exc=TemplateAssertionError,
)
# expressions
if parser.stream.current.type == "assign":
next(parser.stream)
variables[token.value] = var = parser.parse_expression()
elif trimmed is None and token.value in ("trimmed", "notrimmed"):
trimmed = token.value == "trimmed"
continue
else:
variables[token.value] = var = nodes.Name(token.value, "load")
if plural_expr is None:
if isinstance(var, nodes.Call):
plural_expr = nodes.Name("_trans", "load")
variables[token.value] = plural_expr
plural_expr_assignment = nodes.Assign(
nodes.Name("_trans", "store"), var
)
else:
plural_expr = var
num_called_num = token.value == "num"
parser.stream.expect("block_end")
plural = None
have_plural = False
referenced = set()
# now parse until endtrans or pluralize
singular_names, singular = self._parse_block(parser, True)
if singular_names:
referenced.update(singular_names)
if plural_expr is None:
plural_expr = nodes.Name(singular_names[0], "load")
num_called_num = singular_names[0] == "num"
# if we have a pluralize block, we parse that too
if parser.stream.current.test("name:pluralize"):
have_plural = True
next(parser.stream)
if parser.stream.current.type != "block_end":
token = parser.stream.expect("name")
if token.value not in variables:
parser.fail(
f"unknown variable {token.value!r} for pluralization",
token.lineno,
exc=TemplateAssertionError,
)
plural_expr = variables[token.value]
num_called_num = token.value == "num"
parser.stream.expect("block_end")
plural_names, plural = self._parse_block(parser, False)
next(parser.stream)
referenced.update(plural_names)
else:
next(parser.stream)
# register free names as simple name expressions
for name in referenced:
if name not in variables:
variables[name] = nodes.Name(name, "load")
if not have_plural:
plural_expr = None
elif plural_expr is None:
parser.fail("pluralize without variables", lineno)
if trimmed is None:
trimmed = self.environment.policies["ext.i18n.trimmed"]
if trimmed:
singular = self._trim_whitespace(singular)
if plural:
plural = self._trim_whitespace(plural)
node = self._make_node(
singular,
plural,
context,
variables,
plural_expr,
bool(referenced),
num_called_num and have_plural,
)
node.set_lineno(lineno)
if plural_expr_assignment is not None:
return [plural_expr_assignment, node]
else:
return node
def _trim_whitespace(self, string: str, _ws_re: t.Pattern[str] = _ws_re) -> str:
return _ws_re.sub(" ", string.strip())
def _parse_block(
self, parser: "Parser", allow_pluralize: bool
) -> t.Tuple[t.List[str], str]:
"""Parse until the next block tag with a given name."""
referenced = []
buf = []
while True:
if parser.stream.current.type == "data":
buf.append(parser.stream.current.value.replace("%", "%%"))
next(parser.stream)
elif parser.stream.current.type == "variable_begin":
next(parser.stream)
name = parser.stream.expect("name").value
referenced.append(name)
buf.append(f"%({name})s")
parser.stream.expect("variable_end")
elif parser.stream.current.type == "block_begin":
next(parser.stream)
if parser.stream.current.test("name:endtrans"):
break
elif parser.stream.current.test("name:pluralize"):
if allow_pluralize:
break
parser.fail(
"a translatable section can have only one pluralize section"
)
parser.fail(
"control structures in translatable sections are not allowed"
)
elif parser.stream.eos:
parser.fail("unclosed translation block")
else:
raise RuntimeError("internal parser error")
return referenced, concat(buf)
def _make_node(
self,
singular: str,
plural: t.Optional[str],
context: t.Optional[str],
variables: t.Dict[str, nodes.Expr],
plural_expr: t.Optional[nodes.Expr],
vars_referenced: bool,
num_called_num: bool,
) -> nodes.Output:
"""Generates a useful node from the data provided."""
newstyle = self.environment.newstyle_gettext # type: ignore
node: nodes.Expr
# No variables referenced and not newstyle: the %-escaping added
# while parsing is unnecessary for old style gettext, so undo it.
if not vars_referenced and not newstyle:
singular = singular.replace("%%", "%")
if plural:
plural = plural.replace("%%", "%")
func_name = "gettext"
func_args: t.List[nodes.Expr] = [nodes.Const(singular)]
if context is not None:
func_args.insert(0, nodes.Const(context))
func_name = f"p{func_name}"
if plural_expr is not None:
func_name = f"n{func_name}"
func_args.extend((nodes.Const(plural), plural_expr))
node = nodes.Call(nodes.Name(func_name, "load"), func_args, [], None, None)
# in case newstyle gettext is used, the method is powerful
# enough to handle the variable expansion and autoescape
# handling itself
if newstyle:
for key, value in variables.items():
# the function adds that later anyways in case num was
# called num, so just skip it.
if num_called_num and key == "num":
continue
node.kwargs.append(nodes.Keyword(key, value))
# otherwise do that here
else:
# mark the return value as safe if we are in an
# environment with autoescaping turned on
node = nodes.MarkSafeIfAutoescape(node)
if variables:
node = nodes.Mod(
node,
nodes.Dict(
[
nodes.Pair(nodes.Const(key), value)
for key, value in variables.items()
]
),
)
return nodes.Output([node])
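In `_parse_block` above, literal template data has `%` doubled to `%%` and each `{{ name }}` becomes `%(name)s`, so the buffered singular/plural strings are valid %-format strings; `_make_node` later undoes the doubling when no variables are referenced and old style gettext is used. The round-trip in miniature (hypothetical data):

```python
# Miniature of the escaping done while buffering a {% trans %} body:
# literal '%' is doubled, variables become named placeholders.
data = "Progress: 50% of "
buf = [data.replace("%", "%%"), "%(total)s", " items"]
fmt = "".join(buf)
print(fmt)                  # Progress: 50%% of %(total)s items
print(fmt % {"total": 10})  # Progress: 50% of 10 items
```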
class ExprStmtExtension(Extension):
"""Adds a `do` tag to Jinja that works like the print statement just
that it doesn't print the return value.
"""
tags = {"do"}
def parse(self, parser: "Parser") -> nodes.ExprStmt:
node = nodes.ExprStmt(lineno=next(parser.stream).lineno)
node.node = parser.parse_tuple()
return node
class LoopControlExtension(Extension):
"""Adds break and continue to the template engine."""
tags = {"break", "continue"}
def parse(self, parser: "Parser") -> t.Union[nodes.Break, nodes.Continue]:
token = next(parser.stream)
if token.value == "break":
return nodes.Break(lineno=token.lineno)
return nodes.Continue(lineno=token.lineno)
class DebugExtension(Extension):
"""A ``{% debug %}`` tag that dumps the available variables,
filters, and tests.
.. code-block:: html+jinja
<pre>{% debug %}</pre>
.. code-block:: text
{'context': {'cycler': <class 'jinja2.utils.Cycler'>,
...,
'namespace': <class 'jinja2.utils.Namespace'>},
'filters': ['abs', 'attr', 'batch', 'capitalize', 'center', 'count', 'd',
..., 'urlencode', 'urlize', 'wordcount', 'wordwrap', 'xmlattr'],
'tests': ['!=', '<', '<=', '==', '>', '>=', 'callable', 'defined',
..., 'odd', 'sameas', 'sequence', 'string', 'undefined', 'upper']}
.. versionadded:: 2.11.0
"""
tags = {"debug"}
def parse(self, parser: "Parser") -> nodes.Output:
lineno = parser.stream.expect("name:debug").lineno
context = nodes.ContextReference()
result = self.call_method("_render", [context], lineno=lineno)
return nodes.Output([result], lineno=lineno)
def _render(self, context: Context) -> str:
result = {
"context": context.get_all(),
"filters": sorted(self.environment.filters.keys()),
"tests": sorted(self.environment.tests.keys()),
}
# Set the depth since the intent is to show the top few names.
return pprint.pformat(result, depth=3, compact=True)
def extract_from_ast(
ast: nodes.Template,
gettext_functions: t.Sequence[str] = GETTEXT_FUNCTIONS,
babel_style: bool = True,
) -> t.Iterator[
t.Tuple[int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]]
]:
"""Extract localizable strings from the given template node. Per
default this function returns matches in babel style that means non string
parameters as well as keyword arguments are returned as `None`. This
allows Babel to figure out what you really meant if you are using
gettext functions that allow keyword arguments for placeholder expansion.
If you don't want that behavior set the `babel_style` parameter to `False`
which causes only strings to be returned and parameters are always stored
in tuples. As a consequence invalid gettext calls (calls without a single
string parameter or string parameters after non-string parameters) are
skipped.
This example explains the behavior:
>>> from jinja2 import Environment
>>> env = Environment()
>>> node = env.parse('{{ (_("foo"), _(), ngettext("foo", "bar", 42)) }}')
>>> list(extract_from_ast(node))
[(1, '_', 'foo'), (1, '_', ()), (1, 'ngettext', ('foo', 'bar', None))]
>>> list(extract_from_ast(node, babel_style=False))
[(1, '_', ('foo',)), (1, 'ngettext', ('foo', 'bar'))]
For every string found this function yields a ``(lineno, function,
message)`` tuple, where:
* ``lineno`` is the number of the line on which the string was found,
* ``function`` is the name of the ``gettext`` function used (if the
string was extracted from embedded Python code), and
* ``message`` is the string, or a tuple of strings for functions
with multiple string arguments.
This extraction function operates on the AST and is therefore unable
to extract any comments. For comment support you have to use the Babel
extraction interface or extract comments yourself.
"""
out: t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]]
for node in ast.find_all(nodes.Call):
if (
not isinstance(node.node, nodes.Name)
or node.node.name not in gettext_functions
):
continue
strings: t.List[t.Optional[str]] = []
for arg in node.args:
if isinstance(arg, nodes.Const) and isinstance(arg.value, str):
strings.append(arg.value)
else:
strings.append(None)
for _ in node.kwargs:
strings.append(None)
if node.dyn_args is not None:
strings.append(None)
if node.dyn_kwargs is not None:
strings.append(None)
if not babel_style:
out = tuple(x for x in strings if x is not None)
if not out:
continue
else:
if len(strings) == 1:
out = strings[0]
else:
out = tuple(strings)
yield node.lineno, node.node.name, out
class _CommentFinder:
"""Helper class to find comments in a token stream. Can only
find comments for gettext calls forwards. Once the comment
from line 4 is found, a comment for line 1 will not return a
usable value.
"""
def __init__(
self, tokens: t.Sequence[t.Tuple[int, str, str]], comment_tags: t.Sequence[str]
) -> None:
self.tokens = tokens
self.comment_tags = comment_tags
self.offset = 0
self.last_lineno = 0
def find_backwards(self, offset: int) -> t.List[str]:
try:
for _, token_type, token_value in reversed(
self.tokens[self.offset : offset]
):
if token_type in ("comment", "linecomment"):
try:
prefix, comment = token_value.split(None, 1)
except ValueError:
continue
if prefix in self.comment_tags:
return [comment.rstrip()]
return []
finally:
self.offset = offset
def find_comments(self, lineno: int) -> t.List[str]:
if not self.comment_tags or self.last_lineno > lineno:
return []
for idx, (token_lineno, _, _) in enumerate(self.tokens[self.offset :]):
if token_lineno > lineno:
return self.find_backwards(self.offset + idx)
return self.find_backwards(len(self.tokens))
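`_CommentFinder` scans forward to the first token past the gettext call's line, then walks backwards through that slice looking for a comment token whose first word is a recognized tag. A compressed standalone replica of that scan on hypothetical token data (using `partition` where the original uses `split(None, 1)`, and ignoring the offset bookkeeping):

```python
# Compressed replica of _CommentFinder's scan over
# (lineno, token_type, token_value) triples.
def find_comment(tokens, lineno, comment_tags):
    # Take everything up to the first token past the target line...
    upto = next((i for i, (ln, _, _) in enumerate(tokens) if ln > lineno),
                len(tokens))
    # ...then walk backwards for the nearest tagged comment.
    for _, token_type, token_value in reversed(tokens[:upto]):
        if token_type in ("comment", "linecomment"):
            prefix, _, comment = token_value.partition(" ")
            if prefix in comment_tags:
                return comment.rstrip()
    return None

tokens = [
    (1, "comment", "NOTE: translators, keep it short"),
    (2, "name", "gettext"),
    (3, "data", "..."),
]
print(find_comment(tokens, 2, ["NOTE:"]))  # translators, keep it short
```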
def babel_extract(
fileobj: t.BinaryIO,
keywords: t.Sequence[str],
comment_tags: t.Sequence[str],
options: t.Dict[str, t.Any],
) -> t.Iterator[
t.Tuple[
int, str, t.Union[t.Optional[str], t.Tuple[t.Optional[str], ...]], t.List[str]
]
]:
"""Babel extraction method for Jinja templates.
.. versionchanged:: 2.3
Basic support for translation comments was added. If `comment_tags`
is now set to a list of keywords for extraction, the extractor will
try to find the best preceding comment that begins with one of the
keywords. For best results, make sure to not have more than one
gettext call in one line of code and the matching comment in the
same line or the line before.
.. versionchanged:: 2.5.1
The `newstyle_gettext` flag can be set to `True` to enable newstyle
gettext calls.
.. versionchanged:: 2.7
A `silent` option can now be provided. If set to `False` template
syntax errors are propagated instead of being ignored.
:param fileobj: the file-like object the messages should be extracted from
:param keywords: a list of keywords (i.e. function names) that should be
recognized as translation functions
:param comment_tags: a list of translator tags to search for and include
in the results.
:param options: a dictionary of additional options (optional)
:return: an iterator over ``(lineno, funcname, message, comments)`` tuples.
(comments will be empty currently)
"""
extensions: t.Dict[t.Type[Extension], None] = {}
for extension_name in options.get("extensions", "").split(","):
extension_name = extension_name.strip()
if not extension_name:
continue
extensions[import_string(extension_name)] = None
if InternationalizationExtension not in extensions:
extensions[InternationalizationExtension] = None
def getbool(options: t.Mapping[str, str], key: str, default: bool = False) -> bool:
return options.get(key, str(default)).lower() in {"1", "on", "yes", "true"}
silent = getbool(options, "silent", True)
environment = Environment(
options.get("block_start_string", defaults.BLOCK_START_STRING),
options.get("block_end_string", defaults.BLOCK_END_STRING),
options.get("variable_start_string", defaults.VARIABLE_START_STRING),
options.get("variable_end_string", defaults.VARIABLE_END_STRING),
options.get("comment_start_string", defaults.COMMENT_START_STRING),
options.get("comment_end_string", defaults.COMMENT_END_STRING),
options.get("line_statement_prefix") or defaults.LINE_STATEMENT_PREFIX,
options.get("line_comment_prefix") or defaults.LINE_COMMENT_PREFIX,
getbool(options, "trim_blocks", defaults.TRIM_BLOCKS),
getbool(options, "lstrip_blocks", defaults.LSTRIP_BLOCKS),
defaults.NEWLINE_SEQUENCE,
getbool(options, "keep_trailing_newline", defaults.KEEP_TRAILING_NEWLINE),
tuple(extensions),
cache_size=0,
auto_reload=False,
)
if getbool(options, "trimmed"):
environment.policies["ext.i18n.trimmed"] = True
if getbool(options, "newstyle_gettext"):
environment.newstyle_gettext = True # type: ignore
source = fileobj.read().decode(options.get("encoding", "utf-8"))
try:
node = environment.parse(source)
tokens = list(environment.lex(environment.preprocess(source)))
except TemplateSyntaxError:
if not silent:
raise
# skip templates with syntax errors
return
finder = _CommentFinder(tokens, comment_tags)
for lineno, func, message in extract_from_ast(node, keywords):
yield lineno, func, message, finder.find_comments(lineno)
#: nicer import names
i18n = InternationalizationExtension
do = ExprStmtExtension
loopcontrols = LoopControlExtension
debug = DebugExtension

File diff suppressed because it is too large.


@@ -1,318 +0,0 @@
import typing as t
from . import nodes
from .visitor import NodeVisitor
VAR_LOAD_PARAMETER = "param"
VAR_LOAD_RESOLVE = "resolve"
VAR_LOAD_ALIAS = "alias"
VAR_LOAD_UNDEFINED = "undefined"
def find_symbols(
nodes: t.Iterable[nodes.Node], parent_symbols: t.Optional["Symbols"] = None
) -> "Symbols":
sym = Symbols(parent=parent_symbols)
visitor = FrameSymbolVisitor(sym)
for node in nodes:
visitor.visit(node)
return sym
def symbols_for_node(
node: nodes.Node, parent_symbols: t.Optional["Symbols"] = None
) -> "Symbols":
sym = Symbols(parent=parent_symbols)
sym.analyze_node(node)
return sym
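Each `Symbols` frame below namespaces its references as `l_<level>_<name>`, so nested scopes can shadow a template name without colliding in the generated module. The naming scheme in isolation (hypothetical variable name):

```python
# The compiler-facing identifier for a template variable encodes the
# scope depth, mirroring Symbols._define_ref below.
def define_ref(level, name):
    return f"l_{level}_{name}"

print(define_ref(0, "item"))  # l_0_item
print(define_ref(1, "item"))  # l_1_item  (inner scope shadows, no clash)
```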
class Symbols:
def __init__(
self, parent: t.Optional["Symbols"] = None, level: t.Optional[int] = None
) -> None:
if level is None:
if parent is None:
level = 0
else:
level = parent.level + 1
self.level: int = level
self.parent = parent
self.refs: t.Dict[str, str] = {}
self.loads: t.Dict[str, t.Any] = {}
self.stores: t.Set[str] = set()
def analyze_node(self, node: nodes.Node, **kwargs: t.Any) -> None:
visitor = RootVisitor(self)
visitor.visit(node, **kwargs)
def _define_ref(
self, name: str, load: t.Optional[t.Tuple[str, t.Optional[str]]] = None
) -> str:
ident = f"l_{self.level}_{name}"
self.refs[name] = ident
if load is not None:
self.loads[ident] = load
return ident
def find_load(self, target: str) -> t.Optional[t.Any]:
if target in self.loads:
return self.loads[target]
if self.parent is not None:
return self.parent.find_load(target)
return None
def find_ref(self, name: str) -> t.Optional[str]:
if name in self.refs:
return self.refs[name]
if self.parent is not None:
return self.parent.find_ref(name)
return None
def ref(self, name: str) -> str:
rv = self.find_ref(name)
if rv is None:
raise AssertionError(
"Tried to resolve a name to a reference that was"
f" unknown to the frame ({name!r})"
)
return rv
def copy(self) -> "Symbols":
rv = object.__new__(self.__class__)
rv.__dict__.update(self.__dict__)
rv.refs = self.refs.copy()
rv.loads = self.loads.copy()
rv.stores = self.stores.copy()
return rv
def store(self, name: str) -> None:
self.stores.add(name)
# If we have not seen the name referenced yet, we need to figure
# out what to set it to.
if name not in self.refs:
# If there is a parent scope we check if the name has a
# reference there. If it does it means we might have to alias
# to a variable there.
if self.parent is not None:
outer_ref = self.parent.find_ref(name)
if outer_ref is not None:
self._define_ref(name, load=(VAR_LOAD_ALIAS, outer_ref))
return
# Otherwise we can just set it to undefined.
self._define_ref(name, load=(VAR_LOAD_UNDEFINED, None))
def declare_parameter(self, name: str) -> str:
self.stores.add(name)
return self._define_ref(name, load=(VAR_LOAD_PARAMETER, None))
def load(self, name: str) -> None:
if self.find_ref(name) is None:
self._define_ref(name, load=(VAR_LOAD_RESOLVE, name))
def branch_update(self, branch_symbols: t.Sequence["Symbols"]) -> None:
stores: t.Dict[str, int] = {}
for branch in branch_symbols:
for target in branch.stores:
if target in self.stores:
continue
stores[target] = stores.get(target, 0) + 1
for sym in branch_symbols:
self.refs.update(sym.refs)
self.loads.update(sym.loads)
self.stores.update(sym.stores)
for name, branch_count in stores.items():
if branch_count == len(branch_symbols):
continue
target = self.find_ref(name) # type: ignore
assert target is not None, "should not happen"
if self.parent is not None:
outer_target = self.parent.find_ref(name)
if outer_target is not None:
self.loads[target] = (VAR_LOAD_ALIAS, outer_target)
continue
self.loads[target] = (VAR_LOAD_RESOLVE, name)
def dump_stores(self) -> t.Dict[str, str]:
rv: t.Dict[str, str] = {}
node: t.Optional["Symbols"] = self
while node is not None:
for name in sorted(node.stores):
if name not in rv:
rv[name] = self.find_ref(name) # type: ignore
node = node.parent
return rv
def dump_param_targets(self) -> t.Set[str]:
rv = set()
node: t.Optional["Symbols"] = self
while node is not None:
for target, (instr, _) in self.loads.items():
if instr == VAR_LOAD_PARAMETER:
rv.add(target)
node = node.parent
return rv
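The scoping scheme above renames each variable to `l_{level}_{name}` so inner scopes can shadow outer ones without collisions, with `find_ref` walking up the parent chain. A toy sketch of just that naming and lookup behavior (it mirrors `_define_ref`/`find_ref`, not the full `Symbols` class):

```python
class MiniSymbols:
    """Toy sketch of the level-based identifier scheme, for illustration."""

    def __init__(self, parent=None):
        self.parent = parent
        self.level = 0 if parent is None else parent.level + 1
        self.refs = {}

    def define(self, name):
        # Each scope level gets its own identifier for the name.
        ident = f"l_{self.level}_{name}"
        self.refs[name] = ident
        return ident

    def find_ref(self, name):
        # Look locally first, then walk up through parent scopes.
        if name in self.refs:
            return self.refs[name]
        return self.parent.find_ref(name) if self.parent else None


root = MiniSymbols()
root.define("user")
child = MiniSymbols(parent=root)
child.define("item")

print(child.find_ref("item"))  # -> l_1_item
print(child.find_ref("user"))  # -> l_0_user (falls back to the parent scope)
```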
class RootVisitor(NodeVisitor):
def __init__(self, symbols: "Symbols") -> None:
self.sym_visitor = FrameSymbolVisitor(symbols)
def _simple_visit(self, node: nodes.Node, **kwargs: t.Any) -> None:
for child in node.iter_child_nodes():
self.sym_visitor.visit(child)
visit_Template = _simple_visit
visit_Block = _simple_visit
visit_Macro = _simple_visit
visit_FilterBlock = _simple_visit
visit_Scope = _simple_visit
visit_If = _simple_visit
visit_ScopedEvalContextModifier = _simple_visit
def visit_AssignBlock(self, node: nodes.AssignBlock, **kwargs: t.Any) -> None:
for child in node.body:
self.sym_visitor.visit(child)
def visit_CallBlock(self, node: nodes.CallBlock, **kwargs: t.Any) -> None:
for child in node.iter_child_nodes(exclude=("call",)):
self.sym_visitor.visit(child)
def visit_OverlayScope(self, node: nodes.OverlayScope, **kwargs: t.Any) -> None:
for child in node.body:
self.sym_visitor.visit(child)
def visit_For(
self, node: nodes.For, for_branch: str = "body", **kwargs: t.Any
) -> None:
if for_branch == "body":
self.sym_visitor.visit(node.target, store_as_param=True)
branch = node.body
elif for_branch == "else":
branch = node.else_
elif for_branch == "test":
self.sym_visitor.visit(node.target, store_as_param=True)
if node.test is not None:
self.sym_visitor.visit(node.test)
return
else:
raise RuntimeError("Unknown for branch")
if branch:
for item in branch:
self.sym_visitor.visit(item)
def visit_With(self, node: nodes.With, **kwargs: t.Any) -> None:
for target in node.targets:
self.sym_visitor.visit(target)
for child in node.body:
self.sym_visitor.visit(child)
def generic_visit(self, node: nodes.Node, *args: t.Any, **kwargs: t.Any) -> None:
raise NotImplementedError(f"Cannot find symbols for {type(node).__name__!r}")
class FrameSymbolVisitor(NodeVisitor):
"""A visitor for `Frame.inspect`."""
def __init__(self, symbols: "Symbols") -> None:
self.symbols = symbols
def visit_Name(
self, node: nodes.Name, store_as_param: bool = False, **kwargs: t.Any
) -> None:
"""All assignments to names go through this function."""
if store_as_param or node.ctx == "param":
self.symbols.declare_parameter(node.name)
elif node.ctx == "store":
self.symbols.store(node.name)
elif node.ctx == "load":
self.symbols.load(node.name)
def visit_NSRef(self, node: nodes.NSRef, **kwargs: t.Any) -> None:
self.symbols.load(node.name)
def visit_If(self, node: nodes.If, **kwargs: t.Any) -> None:
self.visit(node.test, **kwargs)
original_symbols = self.symbols
def inner_visit(nodes: t.Iterable[nodes.Node]) -> "Symbols":
self.symbols = rv = original_symbols.copy()
for subnode in nodes:
self.visit(subnode, **kwargs)
self.symbols = original_symbols
return rv
body_symbols = inner_visit(node.body)
elif_symbols = inner_visit(node.elif_)
else_symbols = inner_visit(node.else_ or ())
self.symbols.branch_update([body_symbols, elif_symbols, else_symbols])
def visit_Macro(self, node: nodes.Macro, **kwargs: t.Any) -> None:
self.symbols.store(node.name)
def visit_Import(self, node: nodes.Import, **kwargs: t.Any) -> None:
self.generic_visit(node, **kwargs)
self.symbols.store(node.target)
def visit_FromImport(self, node: nodes.FromImport, **kwargs: t.Any) -> None:
self.generic_visit(node, **kwargs)
for name in node.names:
if isinstance(name, tuple):
self.symbols.store(name[1])
else:
self.symbols.store(name)
def visit_Assign(self, node: nodes.Assign, **kwargs: t.Any) -> None:
"""Visit assignments in the correct order."""
self.visit(node.node, **kwargs)
self.visit(node.target, **kwargs)
def visit_For(self, node: nodes.For, **kwargs: t.Any) -> None:
"""Visiting stops at for blocks. However the block sequence
is visited as part of the outer scope.
"""
self.visit(node.iter, **kwargs)
def visit_CallBlock(self, node: nodes.CallBlock, **kwargs: t.Any) -> None:
self.visit(node.call, **kwargs)
def visit_FilterBlock(self, node: nodes.FilterBlock, **kwargs: t.Any) -> None:
self.visit(node.filter, **kwargs)
def visit_With(self, node: nodes.With, **kwargs: t.Any) -> None:
for target in node.values:
self.visit(target)
def visit_AssignBlock(self, node: nodes.AssignBlock, **kwargs: t.Any) -> None:
"""Stop visiting at block assigns."""
self.visit(node.target, **kwargs)
def visit_Scope(self, node: nodes.Scope, **kwargs: t.Any) -> None:
"""Stop visiting at scopes."""
def visit_Block(self, node: nodes.Block, **kwargs: t.Any) -> None:
"""Stop visiting at blocks."""
def visit_OverlayScope(self, node: nodes.OverlayScope, **kwargs: t.Any) -> None:
"""Do not visit into overlay scopes."""


@@ -1,866 +0,0 @@
"""Implements a Jinja / Python combination lexer. The ``Lexer`` class
is used to do some preprocessing. It filters out invalid operators like
the bitshift operators we don't allow in templates. It separates
template code and python code in expressions.
"""
import re
import typing as t
from ast import literal_eval
from collections import deque
from sys import intern
from ._identifier import pattern as name_re
from .exceptions import TemplateSyntaxError
from .utils import LRUCache
if t.TYPE_CHECKING:
import typing_extensions as te
from .environment import Environment
# cache for the lexers. Exists in order to be able to have multiple
# environments with the same lexer
_lexer_cache: t.MutableMapping[t.Tuple, "Lexer"] = LRUCache(50) # type: ignore
# static regular expressions
whitespace_re = re.compile(r"\s+")
newline_re = re.compile(r"(\r\n|\r|\n)")
string_re = re.compile(
r"('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S
)
integer_re = re.compile(
r"""
(
0b(_?[0-1])+ # binary
|
0o(_?[0-7])+ # octal
|
0x(_?[\da-f])+ # hex
|
[1-9](_?\d)* # decimal
|
0(_?0)* # decimal zero
)
""",
re.IGNORECASE | re.VERBOSE,
)
float_re = re.compile(
r"""
(?<!\.) # doesn't start with a .
(\d+_)*\d+ # digits, possibly _ separated
(
(\.(\d+_)*\d+)? # optional fractional part
e[+\-]?(\d+_)*\d+ # exponent part
|
\.(\d+_)*\d+ # required fractional part
)
""",
re.IGNORECASE | re.VERBOSE,
)
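The literal patterns above can be exercised directly. Below are verbatim copies of the two compiled regexes with a few sanity checks; note that underscores are permitted as digit separators and that a bare integer deliberately does not match the float pattern:

```python
import re

# Copies of the integer and float literal patterns above.
integer_re = re.compile(
    r"""
    (
        0b(_?[0-1])+ # binary
    |
        0o(_?[0-7])+ # octal
    |
        0x(_?[\da-f])+ # hex
    |
        [1-9](_?\d)* # decimal
    |
        0(_?0)* # decimal zero
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)
float_re = re.compile(
    r"""
    (?<!\.) # doesn't start with a .
    (\d+_)*\d+ # digits, possibly _ separated
    (
        (\.(\d+_)*\d+)? # optional fractional part
        e[+\-]?(\d+_)*\d+ # exponent part
    |
        \.(\d+_)*\d+ # required fractional part
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)

assert integer_re.fullmatch("1_000")
assert integer_re.fullmatch("0x_ff")
assert not integer_re.fullmatch("0123")   # leading zero only allowed for 0
assert float_re.fullmatch("42.5e-3")
assert float_re.fullmatch("1.5")
assert not float_re.fullmatch("42")       # plain integers are not floats
```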
# intern the tokens and keep references to them
TOKEN_ADD = intern("add")
TOKEN_ASSIGN = intern("assign")
TOKEN_COLON = intern("colon")
TOKEN_COMMA = intern("comma")
TOKEN_DIV = intern("div")
TOKEN_DOT = intern("dot")
TOKEN_EQ = intern("eq")
TOKEN_FLOORDIV = intern("floordiv")
TOKEN_GT = intern("gt")
TOKEN_GTEQ = intern("gteq")
TOKEN_LBRACE = intern("lbrace")
TOKEN_LBRACKET = intern("lbracket")
TOKEN_LPAREN = intern("lparen")
TOKEN_LT = intern("lt")
TOKEN_LTEQ = intern("lteq")
TOKEN_MOD = intern("mod")
TOKEN_MUL = intern("mul")
TOKEN_NE = intern("ne")
TOKEN_PIPE = intern("pipe")
TOKEN_POW = intern("pow")
TOKEN_RBRACE = intern("rbrace")
TOKEN_RBRACKET = intern("rbracket")
TOKEN_RPAREN = intern("rparen")
TOKEN_SEMICOLON = intern("semicolon")
TOKEN_SUB = intern("sub")
TOKEN_TILDE = intern("tilde")
TOKEN_WHITESPACE = intern("whitespace")
TOKEN_FLOAT = intern("float")
TOKEN_INTEGER = intern("integer")
TOKEN_NAME = intern("name")
TOKEN_STRING = intern("string")
TOKEN_OPERATOR = intern("operator")
TOKEN_BLOCK_BEGIN = intern("block_begin")
TOKEN_BLOCK_END = intern("block_end")
TOKEN_VARIABLE_BEGIN = intern("variable_begin")
TOKEN_VARIABLE_END = intern("variable_end")
TOKEN_RAW_BEGIN = intern("raw_begin")
TOKEN_RAW_END = intern("raw_end")
TOKEN_COMMENT_BEGIN = intern("comment_begin")
TOKEN_COMMENT_END = intern("comment_end")
TOKEN_COMMENT = intern("comment")
TOKEN_LINESTATEMENT_BEGIN = intern("linestatement_begin")
TOKEN_LINESTATEMENT_END = intern("linestatement_end")
TOKEN_LINECOMMENT_BEGIN = intern("linecomment_begin")
TOKEN_LINECOMMENT_END = intern("linecomment_end")
TOKEN_LINECOMMENT = intern("linecomment")
TOKEN_DATA = intern("data")
TOKEN_INITIAL = intern("initial")
TOKEN_EOF = intern("eof")
# bind operators to token types
operators = {
"+": TOKEN_ADD,
"-": TOKEN_SUB,
"/": TOKEN_DIV,
"//": TOKEN_FLOORDIV,
"*": TOKEN_MUL,
"%": TOKEN_MOD,
"**": TOKEN_POW,
"~": TOKEN_TILDE,
"[": TOKEN_LBRACKET,
"]": TOKEN_RBRACKET,
"(": TOKEN_LPAREN,
")": TOKEN_RPAREN,
"{": TOKEN_LBRACE,
"}": TOKEN_RBRACE,
"==": TOKEN_EQ,
"!=": TOKEN_NE,
">": TOKEN_GT,
">=": TOKEN_GTEQ,
"<": TOKEN_LT,
"<=": TOKEN_LTEQ,
"=": TOKEN_ASSIGN,
".": TOKEN_DOT,
":": TOKEN_COLON,
"|": TOKEN_PIPE,
",": TOKEN_COMMA,
";": TOKEN_SEMICOLON,
}
reverse_operators = {v: k for k, v in operators.items()}
assert len(operators) == len(reverse_operators), "operators dropped"
operator_re = re.compile(
f"({'|'.join(re.escape(x) for x in sorted(operators, key=lambda x: -len(x)))})"
)
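Sorting the operators by descending length before joining them into the alternation is what makes multi-character operators win over their prefixes. A small self-contained demonstration of the same construction:

```python
import re

# Longer operators must come first in the alternation, otherwise
# "**" would be matched as two separate "*" tokens.
ops = ["+", "*", "**", "//", "/", "==", "=", "<=", "<"]
op_re = re.compile(
    "|".join(re.escape(x) for x in sorted(ops, key=lambda x: -len(x)))
)

print(op_re.match("**").group())  # -> ** (not *)
print(op_re.match("<=").group())  # -> <=
print(op_re.match("=").group())   # -> =
```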
ignored_tokens = frozenset(
[
TOKEN_COMMENT_BEGIN,
TOKEN_COMMENT,
TOKEN_COMMENT_END,
TOKEN_WHITESPACE,
TOKEN_LINECOMMENT_BEGIN,
TOKEN_LINECOMMENT_END,
TOKEN_LINECOMMENT,
]
)
ignore_if_empty = frozenset(
[TOKEN_WHITESPACE, TOKEN_DATA, TOKEN_COMMENT, TOKEN_LINECOMMENT]
)
def _describe_token_type(token_type: str) -> str:
if token_type in reverse_operators:
return reverse_operators[token_type]
return {
TOKEN_COMMENT_BEGIN: "begin of comment",
TOKEN_COMMENT_END: "end of comment",
TOKEN_COMMENT: "comment",
TOKEN_LINECOMMENT: "comment",
TOKEN_BLOCK_BEGIN: "begin of statement block",
TOKEN_BLOCK_END: "end of statement block",
TOKEN_VARIABLE_BEGIN: "begin of print statement",
TOKEN_VARIABLE_END: "end of print statement",
TOKEN_LINESTATEMENT_BEGIN: "begin of line statement",
TOKEN_LINESTATEMENT_END: "end of line statement",
TOKEN_DATA: "template data / text",
TOKEN_EOF: "end of template",
}.get(token_type, token_type)
def describe_token(token: "Token") -> str:
"""Returns a description of the token."""
if token.type == TOKEN_NAME:
return token.value
return _describe_token_type(token.type)
def describe_token_expr(expr: str) -> str:
"""Like `describe_token` but for token expressions."""
if ":" in expr:
type, value = expr.split(":", 1)
if type == TOKEN_NAME:
return value
else:
type = expr
return _describe_token_type(type)
def count_newlines(value: str) -> int:
"""Count the number of newline characters in the string. This is
useful for extensions that filter a stream.
"""
return len(newline_re.findall(value))
def compile_rules(environment: "Environment") -> t.List[t.Tuple[str, str]]:
"""Compiles all the rules from the environment into a list of rules."""
e = re.escape
rules = [
(
len(environment.comment_start_string),
TOKEN_COMMENT_BEGIN,
e(environment.comment_start_string),
),
(
len(environment.block_start_string),
TOKEN_BLOCK_BEGIN,
e(environment.block_start_string),
),
(
len(environment.variable_start_string),
TOKEN_VARIABLE_BEGIN,
e(environment.variable_start_string),
),
]
if environment.line_statement_prefix is not None:
rules.append(
(
len(environment.line_statement_prefix),
TOKEN_LINESTATEMENT_BEGIN,
r"^[ \t\v]*" + e(environment.line_statement_prefix),
)
)
if environment.line_comment_prefix is not None:
rules.append(
(
len(environment.line_comment_prefix),
TOKEN_LINECOMMENT_BEGIN,
r"(?:^|(?<=\S))[^\S\r\n]*" + e(environment.line_comment_prefix),
)
)
return [x[1:] for x in sorted(rules, reverse=True)]
class Failure:
"""Class that raises a `TemplateSyntaxError` if called.
Used by the `Lexer` to specify known errors.
"""
def __init__(
self, message: str, cls: t.Type[TemplateSyntaxError] = TemplateSyntaxError
) -> None:
self.message = message
self.error_class = cls
def __call__(self, lineno: int, filename: str) -> "te.NoReturn":
raise self.error_class(self.message, lineno, filename)
class Token(t.NamedTuple):
lineno: int
type: str
value: str
def __str__(self) -> str:
return describe_token(self)
def test(self, expr: str) -> bool:
"""Test a token against a token expression. This can either be a
token type or ``'token_type:token_value'``. This can only test
against string values and types.
"""
# here we do a regular string equality check as test_any is usually
# passed an iterable of strings that are not interned.
if self.type == expr:
return True
if ":" in expr:
return expr.split(":", 1) == [self.type, self.value]
return False
def test_any(self, *iterable: str) -> bool:
"""Test against multiple token expressions."""
return any(self.test(expr) for expr in iterable)
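The token-expression test accepts either a bare token type or a `"type:value"` pair. A minimal standalone copy of the `Token` tuple, reproduced here purely for illustration, shows both forms:

```python
import typing as t


class Token(t.NamedTuple):
    """Minimal copy of the Token tuple above, for illustration."""

    lineno: int
    type: str
    value: str

    def test(self, expr: str) -> bool:
        # Either a bare token type, or "type:value".
        if self.type == expr:
            return True
        if ":" in expr:
            return expr.split(":", 1) == [self.type, self.value]
        return False

    def test_any(self, *iterable: str) -> bool:
        return any(self.test(expr) for expr in iterable)


tok = Token(1, "name", "for")
print(tok.test("name"))                      # -> True
print(tok.test("name:for"))                  # -> True
print(tok.test("name:if"))                   # -> False
print(tok.test_any("integer", "name:for"))   # -> True
```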
class TokenStreamIterator:
"""The iterator for tokenstreams. Iterate over the stream
until the eof token is reached.
"""
def __init__(self, stream: "TokenStream") -> None:
self.stream = stream
def __iter__(self) -> "TokenStreamIterator":
return self
def __next__(self) -> Token:
token = self.stream.current
if token.type is TOKEN_EOF:
self.stream.close()
raise StopIteration
next(self.stream)
return token
class TokenStream:
"""A token stream is an iterable that yields :class:`Token`\\s. The
parser however does not iterate over it but calls :meth:`next` to go
one token ahead. The current active token is stored as :attr:`current`.
"""
def __init__(
self,
generator: t.Iterable[Token],
name: t.Optional[str],
filename: t.Optional[str],
):
self._iter = iter(generator)
self._pushed: "te.Deque[Token]" = deque()
self.name = name
self.filename = filename
self.closed = False
self.current = Token(1, TOKEN_INITIAL, "")
next(self)
def __iter__(self) -> TokenStreamIterator:
return TokenStreamIterator(self)
def __bool__(self) -> bool:
return bool(self._pushed) or self.current.type is not TOKEN_EOF
@property
def eos(self) -> bool:
"""Are we at the end of the stream?"""
return not self
def push(self, token: Token) -> None:
"""Push a token back to the stream."""
self._pushed.append(token)
def look(self) -> Token:
"""Look at the next token."""
old_token = next(self)
result = self.current
self.push(result)
self.current = old_token
return result
def skip(self, n: int = 1) -> None:
"""Got n tokens ahead."""
for _ in range(n):
next(self)
def next_if(self, expr: str) -> t.Optional[Token]:
"""Perform the token test and return the token if it matched.
Otherwise the return value is `None`.
"""
if self.current.test(expr):
return next(self)
return None
def skip_if(self, expr: str) -> bool:
"""Like :meth:`next_if` but only returns `True` or `False`."""
return self.next_if(expr) is not None
def __next__(self) -> Token:
"""Go one token ahead and return the old one.
Use the built-in :func:`next` instead of calling this directly.
"""
rv = self.current
if self._pushed:
self.current = self._pushed.popleft()
elif self.current.type is not TOKEN_EOF:
try:
self.current = next(self._iter)
except StopIteration:
self.close()
return rv
def close(self) -> None:
"""Close the stream."""
self.current = Token(self.current.lineno, TOKEN_EOF, "")
self._iter = iter(())
self.closed = True
def expect(self, expr: str) -> Token:
"""Expect a given token type and return it. This accepts the same
argument as :meth:`jinja2.lexer.Token.test`.
"""
if not self.current.test(expr):
expr = describe_token_expr(expr)
if self.current.type is TOKEN_EOF:
raise TemplateSyntaxError(
f"unexpected end of template, expected {expr!r}.",
self.current.lineno,
self.name,
self.filename,
)
raise TemplateSyntaxError(
f"expected token {expr!r}, got {describe_token(self.current)!r}",
self.current.lineno,
self.name,
self.filename,
)
return next(self)
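The stream keeps one `current` token plus a deque of pushed-back tokens, which is what makes the one-token `look()` ahead possible. A stripped-down sketch of just those mechanics (not the full `TokenStream` API):

```python
from collections import deque


class MiniStream:
    """Sketch of the TokenStream pushback mechanics, for illustration."""

    def __init__(self, tokens):
        self._iter = iter(tokens)
        self._pushed = deque()
        self.current = next(self._iter)

    def __next__(self):
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        else:
            self.current = next(self._iter, "eof")
        return rv

    def push(self, token):
        self._pushed.append(token)

    def look(self):
        old = next(self)       # advance past the current token
        result = self.current  # peek at the next one
        self.push(result)      # put it back for later
        self.current = old     # restore the old current token
        return result


s = MiniStream(["a", "b", "c"])
print(s.look())   # -> b (peeked without consuming)
print(s.current)  # -> a (still the current token)
print(next(s))    # -> a
print(next(s))    # -> b
```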
def get_lexer(environment: "Environment") -> "Lexer":
"""Return a lexer which is probably cached."""
key = (
environment.block_start_string,
environment.block_end_string,
environment.variable_start_string,
environment.variable_end_string,
environment.comment_start_string,
environment.comment_end_string,
environment.line_statement_prefix,
environment.line_comment_prefix,
environment.trim_blocks,
environment.lstrip_blocks,
environment.newline_sequence,
environment.keep_trailing_newline,
)
lexer = _lexer_cache.get(key)
if lexer is None:
_lexer_cache[key] = lexer = Lexer(environment)
return lexer
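The cache key is the tuple of every setting that affects lexing, so environments with identical delimiter configuration share one lexer. A sketch of the idea with a plain dict standing in for the `LRUCache`, and a hypothetical `FakeLexer` in place of the real class:

```python
# Plain dict stands in for the LRUCache used in the real module.
_cache = {}


class FakeLexer:
    """Hypothetical stand-in for Lexer, for illustration only."""

    def __init__(self, key):
        self.key = key


def get_lexer(block_start, block_end, trim_blocks):
    # Only a few of the real key fields are shown here.
    key = (block_start, block_end, trim_blocks)
    lexer = _cache.get(key)
    if lexer is None:
        _cache[key] = lexer = FakeLexer(key)
    return lexer


a = get_lexer("{%", "%}", False)
b = get_lexer("{%", "%}", False)
c = get_lexer("<%", "%>", False)
print(a is b)  # -> True, same settings share one lexer
print(a is c)  # -> False, different delimiters get their own
```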
class OptionalLStrip(tuple):
"""A special tuple for marking a point in the state that can have
lstrip applied.
"""
__slots__ = ()
# Even though it looks like a no-op, creating instances fails
# without this.
def __new__(cls, *members, **kwargs): # type: ignore
return super().__new__(cls, members)
class _Rule(t.NamedTuple):
pattern: t.Pattern[str]
tokens: t.Union[str, t.Tuple[str, ...], t.Tuple[Failure]]
command: t.Optional[str]
class Lexer:
"""Class that implements a lexer for a given environment. Automatically
created by the environment class, usually you don't have to do that.
Note that the lexer is not automatically bound to an environment.
Multiple environments can share the same lexer.
"""
def __init__(self, environment: "Environment") -> None:
# shortcuts
e = re.escape
def c(x: str) -> t.Pattern[str]:
return re.compile(x, re.M | re.S)
# lexing rules for tags
tag_rules: t.List[_Rule] = [
_Rule(whitespace_re, TOKEN_WHITESPACE, None),
_Rule(float_re, TOKEN_FLOAT, None),
_Rule(integer_re, TOKEN_INTEGER, None),
_Rule(name_re, TOKEN_NAME, None),
_Rule(string_re, TOKEN_STRING, None),
_Rule(operator_re, TOKEN_OPERATOR, None),
]
# assemble the root lexing rule. because "|" is ungreedy
# we have to sort by length so that the lexer continues working
# as expected when we have parsing rules like <% for block and
# <%= for variables. (if someone wants asp like syntax)
# variables are just part of the rules if variable processing
# is required.
root_tag_rules = compile_rules(environment)
block_start_re = e(environment.block_start_string)
block_end_re = e(environment.block_end_string)
comment_end_re = e(environment.comment_end_string)
variable_end_re = e(environment.variable_end_string)
# block suffix if trimming is enabled
block_suffix_re = "\\n?" if environment.trim_blocks else ""
self.lstrip_blocks = environment.lstrip_blocks
self.newline_sequence = environment.newline_sequence
self.keep_trailing_newline = environment.keep_trailing_newline
root_raw_re = (
rf"(?P<raw_begin>{block_start_re}(\-|\+|)\s*raw\s*"
rf"(?:\-{block_end_re}\s*|{block_end_re}))"
)
root_parts_re = "|".join(
[root_raw_re] + [rf"(?P<{n}>{r}(\-|\+|))" for n, r in root_tag_rules]
)
# global lexing rules
self.rules: t.Dict[str, t.List[_Rule]] = {
"root": [
# directives
_Rule(
c(rf"(.*?)(?:{root_parts_re})"),
OptionalLStrip(TOKEN_DATA, "#bygroup"), # type: ignore
"#bygroup",
),
# data
_Rule(c(".+"), TOKEN_DATA, None),
],
# comments
TOKEN_COMMENT_BEGIN: [
_Rule(
c(
rf"(.*?)((?:\+{comment_end_re}|\-{comment_end_re}\s*"
rf"|{comment_end_re}{block_suffix_re}))"
),
(TOKEN_COMMENT, TOKEN_COMMENT_END),
"#pop",
),
_Rule(c(r"(.)"), (Failure("Missing end of comment tag"),), None),
],
# blocks
TOKEN_BLOCK_BEGIN: [
_Rule(
c(
rf"(?:\+{block_end_re}|\-{block_end_re}\s*"
rf"|{block_end_re}{block_suffix_re})"
),
TOKEN_BLOCK_END,
"#pop",
),
]
+ tag_rules,
# variables
TOKEN_VARIABLE_BEGIN: [
_Rule(
c(rf"\-{variable_end_re}\s*|{variable_end_re}"),
TOKEN_VARIABLE_END,
"#pop",
)
]
+ tag_rules,
# raw block
TOKEN_RAW_BEGIN: [
_Rule(
c(
rf"(.*?)((?:{block_start_re}(\-|\+|))\s*endraw\s*"
rf"(?:\+{block_end_re}|\-{block_end_re}\s*"
rf"|{block_end_re}{block_suffix_re}))"
),
OptionalLStrip(TOKEN_DATA, TOKEN_RAW_END), # type: ignore
"#pop",
),
_Rule(c(r"(.)"), (Failure("Missing end of raw directive"),), None),
],
# line statements
TOKEN_LINESTATEMENT_BEGIN: [
_Rule(c(r"\s*(\n|$)"), TOKEN_LINESTATEMENT_END, "#pop")
]
+ tag_rules,
# line comments
TOKEN_LINECOMMENT_BEGIN: [
_Rule(
c(r"(.*?)()(?=\n|$)"),
(TOKEN_LINECOMMENT, TOKEN_LINECOMMENT_END),
"#pop",
)
],
}
def _normalize_newlines(self, value: str) -> str:
"""Replace all newlines with the configured sequence in strings
and template data.
"""
return newline_re.sub(self.newline_sequence, value)
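The normalization is a single substitution over the `newline_re` pattern defined near the top of the module. A standalone version, assuming the same pattern:

```python
import re

newline_re = re.compile(r"(\r\n|\r|\n)")  # same pattern as above


def normalize_newlines(value, newline_sequence="\n"):
    # Replace every \r\n, \r, or \n with the configured sequence.
    return newline_re.sub(newline_sequence, value)


print(repr(normalize_newlines("a\r\nb\rc\nd")))   # all become '\n'
print(repr(normalize_newlines("a\nb", "\r\n")))   # or '\r\n' if configured
```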
def tokenize(
self,
source: str,
name: t.Optional[str] = None,
filename: t.Optional[str] = None,
state: t.Optional[str] = None,
) -> TokenStream:
"""Calls tokeniter + tokenize and wraps it in a token stream."""
stream = self.tokeniter(source, name, filename, state)
return TokenStream(self.wrap(stream, name, filename), name, filename)
def wrap(
self,
stream: t.Iterable[t.Tuple[int, str, str]],
name: t.Optional[str] = None,
filename: t.Optional[str] = None,
) -> t.Iterator[Token]:
"""This is called with the stream as returned by `tokenize` and wraps
every token in a :class:`Token` and converts the value.
"""
for lineno, token, value_str in stream:
if token in ignored_tokens:
continue
value: t.Any = value_str
if token == TOKEN_LINESTATEMENT_BEGIN:
token = TOKEN_BLOCK_BEGIN
elif token == TOKEN_LINESTATEMENT_END:
token = TOKEN_BLOCK_END
# we are not interested in those tokens in the parser
elif token in (TOKEN_RAW_BEGIN, TOKEN_RAW_END):
continue
elif token == TOKEN_DATA:
value = self._normalize_newlines(value_str)
elif token == "keyword":
token = value_str
elif token == TOKEN_NAME:
value = value_str
if not value.isidentifier():
raise TemplateSyntaxError(
"Invalid character in identifier", lineno, name, filename
)
elif token == TOKEN_STRING:
# try to unescape string
try:
value = (
self._normalize_newlines(value_str[1:-1])
.encode("ascii", "backslashreplace")
.decode("unicode-escape")
)
except Exception as e:
msg = str(e).split(":")[-1].strip()
raise TemplateSyntaxError(msg, lineno, name, filename) from e
elif token == TOKEN_INTEGER:
value = int(value_str.replace("_", ""), 0)
elif token == TOKEN_FLOAT:
# remove all "_" first to support more Python versions
value = literal_eval(value_str.replace("_", ""))
elif token == TOKEN_OPERATOR:
token = operators[value_str]
yield Token(lineno, token, value)
def tokeniter(
self,
source: str,
name: t.Optional[str],
filename: t.Optional[str] = None,
state: t.Optional[str] = None,
) -> t.Iterator[t.Tuple[int, str, str]]:
"""This method tokenizes the text and returns the tokens in a
generator. Use this method if you just want to tokenize a template.
.. versionchanged:: 3.0
Only ``\\n``, ``\\r\\n`` and ``\\r`` are treated as line
breaks.
"""
lines = newline_re.split(source)[::2]
if not self.keep_trailing_newline and lines[-1] == "":
del lines[-1]
source = "\n".join(lines)
pos = 0
lineno = 1
stack = ["root"]
if state is not None and state != "root":
assert state in ("variable", "block"), "invalid state"
stack.append(state + "_begin")
statetokens = self.rules[stack[-1]]
source_length = len(source)
balancing_stack: t.List[str] = []
newlines_stripped = 0
line_starting = True
while True:
# tokenizer loop
for regex, tokens, new_state in statetokens:
m = regex.match(source, pos)
# if no match we try again with the next rule
if m is None:
continue
# we only match block and variable ends if braces / parentheses
# are balanced; otherwise we fall through to the lower rule
# (the operator rule), which is safe because these end tags
# look like operators
if balancing_stack and tokens in (
TOKEN_VARIABLE_END,
TOKEN_BLOCK_END,
TOKEN_LINESTATEMENT_END,
):
continue
# tuples support more options
if isinstance(tokens, tuple):
groups: t.Sequence[str] = m.groups()
if isinstance(tokens, OptionalLStrip):
# Rule supports lstrip. Match will look like
# text, block type, whitespace control, type, control, ...
text = groups[0]
# Skipping the text and first type, every other group is the
# whitespace control for each type. One of the groups will be
# -, +, or empty string instead of None.
strip_sign = next(g for g in groups[2::2] if g is not None)
if strip_sign == "-":
# Strip all whitespace between the text and the tag.
stripped = text.rstrip()
newlines_stripped = text[len(stripped) :].count("\n")
groups = [stripped, *groups[1:]]
elif (
# Not marked for preserving whitespace.
strip_sign != "+"
# lstrip is enabled.
and self.lstrip_blocks
# Not a variable expression.
and not m.groupdict().get(TOKEN_VARIABLE_BEGIN)
):
# The start of text between the last newline and the tag.
l_pos = text.rfind("\n") + 1
if l_pos > 0 or line_starting:
# If there's only whitespace between the newline and the
# tag, strip it.
if whitespace_re.fullmatch(text, l_pos):
groups = [text[:l_pos], *groups[1:]]
for idx, token in enumerate(tokens):
# failure group
if token.__class__ is Failure:
raise token(lineno, filename)
# '#bygroup' is a bit more complex: for the current token
# we yield the first named group that matched
elif token == "#bygroup":
for key, value in m.groupdict().items():
if value is not None:
yield lineno, key, value
lineno += value.count("\n")
break
else:
raise RuntimeError(
f"{regex!r} wanted to resolve the token dynamically"
" but no group matched"
)
# normal group
else:
data = groups[idx]
if data or token not in ignore_if_empty:
yield lineno, token, data
lineno += data.count("\n") + newlines_stripped
newlines_stripped = 0
# plain string token types are just yielded as-is.
else:
data = m.group()
# update brace/parentheses balance
if tokens == TOKEN_OPERATOR:
if data == "{":
balancing_stack.append("}")
elif data == "(":
balancing_stack.append(")")
elif data == "[":
balancing_stack.append("]")
elif data in ("}", ")", "]"):
if not balancing_stack:
raise TemplateSyntaxError(
f"unexpected '{data}'", lineno, name, filename
)
expected_op = balancing_stack.pop()
if expected_op != data:
raise TemplateSyntaxError(
f"unexpected '{data}', expected '{expected_op}'",
lineno,
name,
filename,
)
# yield items
if data or tokens not in ignore_if_empty:
yield lineno, tokens, data
lineno += data.count("\n")
line_starting = m.group()[-1:] == "\n"
# fetch new position into new variable so that we can check
# if there is an internal parsing error which would result
# in an infinite loop
pos2 = m.end()
# handle state changes
if new_state is not None:
# remove the uppermost state
if new_state == "#pop":
stack.pop()
# resolve the new state by group checking
elif new_state == "#bygroup":
for key, value in m.groupdict().items():
if value is not None:
stack.append(key)
break
else:
raise RuntimeError(
f"{regex!r} wanted to resolve the new state dynamically"
f" but no group matched"
)
# direct state name given
else:
stack.append(new_state)
statetokens = self.rules[stack[-1]]
# we are still at the same position and there was no state
# change; that would be an infinite loop, so raise an error
elif pos2 == pos:
raise RuntimeError(
f"{regex!r} yielded empty string without stack change"
)
# publish the new position and start again
pos = pos2
break
# the for loop terminated without a break, so no rule matched;
# either we are at the end of the source or we have a problem
else:
# end of text
if pos >= source_length:
return
# something went wrong
raise TemplateSyntaxError(
f"unexpected char {source[pos]!r} at {pos}", lineno, name, filename
)
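The brace/parenthesis balancing in `tokeniter` above pushes the expected closing character for each opener and checks closers against the top of the stack. The same logic, extracted into a small standalone sketch:

```python
def check_balanced(ops):
    """Sketch of the balancing-stack logic in tokeniter: openers push
    their expected closer; each closer must match the stack top.
    Returns None if balanced, else an error message.
    """
    pairs = {"{": "}", "(": ")", "[": "]"}
    stack = []
    for data in ops:
        if data in pairs:
            stack.append(pairs[data])
        elif data in pairs.values():
            if not stack:
                return f"unexpected '{data}'"
            expected = stack.pop()
            if expected != data:
                return f"unexpected '{data}', expected '{expected}'"
    return None


print(check_balanced("([])"))  # -> None (balanced)
print(check_balanced("(]"))    # -> unexpected ']', expected ')'
print(check_balanced(")"))     # -> unexpected ')'
```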


@@ -1,661 +0,0 @@
"""API and implementations for loading templates from different data
sources.
"""
import importlib.util
import os
import posixpath
import sys
import typing as t
import weakref
import zipimport
from collections import abc
from hashlib import sha1
from importlib import import_module
from types import ModuleType
from .exceptions import TemplateNotFound
from .utils import internalcode
from .utils import open_if_exists
if t.TYPE_CHECKING:
from .environment import Environment
from .environment import Template
def split_template_path(template: str) -> t.List[str]:
"""Split a path into segments and perform a sanity check. If it detects
'..' in the path it will raise a `TemplateNotFound` error.
"""
pieces = []
for piece in template.split("/"):
if (
os.path.sep in piece
or (os.path.altsep and os.path.altsep in piece)
or piece == os.path.pardir
):
raise TemplateNotFound(template)
elif piece and piece != ".":
pieces.append(piece)
return pieces
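The path splitting rejects any segment containing the OS path separator or equal to `..`, and silently drops empty and `.` segments. A standalone copy for illustration, with `ValueError` substituted for `TemplateNotFound` so it runs without the package:

```python
import os


def split_template_path(template):
    """Standalone copy of the function above, for illustration."""
    pieces = []
    for piece in template.split("/"):
        if (
            os.path.sep in piece
            or (os.path.altsep and os.path.altsep in piece)
            or piece == os.path.pardir
        ):
            # TemplateNotFound in the real module
            raise ValueError(template)
        elif piece and piece != ".":
            pieces.append(piece)
    return pieces


print(split_template_path("a/./b//c"))  # -> ['a', 'b', 'c']
try:
    split_template_path("../secret")
except ValueError:
    print("rejected")  # '..' segments are refused
```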
class BaseLoader:
"""Baseclass for all loaders. Subclass this and override `get_source` to
implement a custom loading mechanism. The environment provides a
`get_template` method that calls the loader's `load` method to get the
:class:`Template` object.
A very basic example for a loader that looks up templates on the file
system could look like this::
from jinja2 import BaseLoader, TemplateNotFound
from os.path import join, exists, getmtime
class MyLoader(BaseLoader):
def __init__(self, path):
self.path = path
def get_source(self, environment, template):
path = join(self.path, template)
if not exists(path):
raise TemplateNotFound(template)
mtime = getmtime(path)
with open(path) as f:
source = f.read()
return source, path, lambda: mtime == getmtime(path)
"""
#: if set to `False` it indicates that the loader cannot provide access
#: to the source of templates.
#:
#: .. versionadded:: 2.4
has_source_access = True
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]:
"""Get the template source, filename and reload helper for a template.
It's passed the environment and template name and has to return a
tuple in the form ``(source, filename, uptodate)`` or raise a
`TemplateNotFound` error if it can't locate the template.
The source part of the returned tuple must be the source of the
template as a string. The filename should be the name of the
file on the filesystem if it was loaded from there, otherwise
``None``. The filename is used by Python for the tracebacks
if no loader extension is used.
The last item in the tuple is the `uptodate` function. If auto
reloading is enabled it's always called to check if the template
changed. No arguments are passed so the function must store the
old state somewhere (for example in a closure). If it returns `False`
the template will be reloaded.
"""
if not self.has_source_access:
raise RuntimeError(
f"{type(self).__name__} cannot provide access to the source"
)
raise TemplateNotFound(template)
def list_templates(self) -> t.List[str]:
"""Iterates over all templates. If the loader does not support that
it should raise a :exc:`TypeError` which is the default behavior.
"""
raise TypeError("this loader cannot iterate over all templates")
@internalcode
def load(
self,
environment: "Environment",
name: str,
globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
) -> "Template":
"""Loads a template. This method looks up the template in the cache
or loads one by calling :meth:`get_source`. Subclasses should not
override this method as loaders working on collections of other
loaders (such as :class:`PrefixLoader` or :class:`ChoiceLoader`)
will not call this method but `get_source` directly.
"""
code = None
if globals is None:
globals = {}
# first we try to get the source for this template together
# with the filename and the uptodate function.
source, filename, uptodate = self.get_source(environment, name)
# try to load the code from the bytecode cache if there is a
# bytecode cache configured.
bcc = environment.bytecode_cache
if bcc is not None:
bucket = bcc.get_bucket(environment, name, filename, source)
code = bucket.code
# if we don't have code so far (not cached, no longer up to
# date) etc. we compile the template
if code is None:
code = environment.compile(source, name, filename)
# if the bytecode cache is available and the bucket doesn't
# have a code so far, we give the bucket the new code and put
# it back to the bytecode cache.
if bcc is not None and bucket.code is None:
bucket.code = code
bcc.set_bucket(bucket)
return environment.template_class.from_code(
environment, code, globals, uptodate
)
class FileSystemLoader(BaseLoader):
"""Load templates from a directory in the file system.
The path can be relative or absolute. Relative paths are relative to
the current working directory.
.. code-block:: python
loader = FileSystemLoader("templates")
A list of paths can be given. The directories will be searched in
order, stopping at the first matching template.
.. code-block:: python
loader = FileSystemLoader(["/override/templates", "/default/templates"])
:param searchpath: A path, or list of paths, to the directory that
contains the templates.
:param encoding: Use this encoding to read the text from template
files.
:param followlinks: Follow symbolic links in the path.
.. versionchanged:: 2.8
Added the ``followlinks`` parameter.
"""
def __init__(
self,
searchpath: t.Union[str, os.PathLike, t.Sequence[t.Union[str, os.PathLike]]],
encoding: str = "utf-8",
followlinks: bool = False,
) -> None:
if not isinstance(searchpath, abc.Iterable) or isinstance(searchpath, str):
searchpath = [searchpath]
self.searchpath = [os.fspath(p) for p in searchpath]
self.encoding = encoding
self.followlinks = followlinks
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, str, t.Callable[[], bool]]:
pieces = split_template_path(template)
for searchpath in self.searchpath:
# Use posixpath even on Windows to avoid "drive:" or UNC
# segments breaking out of the search directory.
filename = posixpath.join(searchpath, *pieces)
f = open_if_exists(filename)
if f is None:
continue
try:
contents = f.read().decode(self.encoding)
finally:
f.close()
mtime = os.path.getmtime(filename)
def uptodate() -> bool:
try:
return os.path.getmtime(filename) == mtime
except OSError:
return False
# Use normpath to convert Windows altsep to sep.
return contents, os.path.normpath(filename), uptodate
raise TemplateNotFound(template)
def list_templates(self) -> t.List[str]:
found = set()
for searchpath in self.searchpath:
walk_dir = os.walk(searchpath, followlinks=self.followlinks)
for dirpath, _, filenames in walk_dir:
for filename in filenames:
template = (
os.path.join(dirpath, filename)[len(searchpath) :]
.strip(os.path.sep)
.replace(os.path.sep, "/")
)
if template[:2] == "./":
template = template[2:]
if template not in found:
found.add(template)
return sorted(found)
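The first-match search over multiple directories can be sketched without importing Jinja. This is a stdlib-only mirror of the lookup order (`find_template` is a hypothetical helper, not part of the API):

```python
import os
import tempfile

def find_template(searchpath, name):
    # Try each directory in order; the first match wins, exactly like
    # FileSystemLoader.get_source iterating self.searchpath.
    for root in searchpath:
        candidate = os.path.join(root, name)
        if os.path.isfile(candidate):
            with open(candidate, encoding="utf-8") as f:
                return f.read(), candidate
    raise FileNotFoundError(name)

# Two directories with the same template name: the earlier path overrides.
override = tempfile.mkdtemp()
default = tempfile.mkdtemp()
with open(os.path.join(default, "page.html"), "w", encoding="utf-8") as f:
    f.write("default")
with open(os.path.join(override, "page.html"), "w", encoding="utf-8") as f:
    f.write("override")

source, path = find_template([override, default], "page.html")
```

Putting override directories first in the list is what makes the `["/override/templates", "/default/templates"]` example in the docstring work.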
class PackageLoader(BaseLoader):
"""Load templates from a directory in a Python package.
:param package_name: Import name of the package that contains the
template directory.
:param package_path: Directory within the imported package that
contains the templates.
:param encoding: Encoding of template files.
The following example looks up templates in the ``pages`` directory
within the ``project.ui`` package.
.. code-block:: python
loader = PackageLoader("project.ui", "pages")
Only packages installed as directories (standard pip behavior) or
zip/egg files (less common) are supported. The Python API for
introspecting data in packages is too limited to support other
installation methods the way this loader requires.
There is limited support for :pep:`420` namespace packages. The
template directory is assumed to only be in one namespace
contributor. Zip files contributing to a namespace are not
supported.
.. versionchanged:: 3.0
No longer uses ``setuptools`` as a dependency.
.. versionchanged:: 3.0
Limited PEP 420 namespace package support.
"""
def __init__(
self,
package_name: str,
package_path: "str" = "templates",
encoding: str = "utf-8",
) -> None:
package_path = os.path.normpath(package_path).rstrip(os.path.sep)
# normpath preserves ".", which isn't valid in zip paths.
if package_path == os.path.curdir:
package_path = ""
elif package_path[:2] == os.path.curdir + os.path.sep:
package_path = package_path[2:]
self.package_path = package_path
self.package_name = package_name
self.encoding = encoding
# Make sure the package exists. This also makes namespace
# packages work, otherwise get_loader returns None.
import_module(package_name)
spec = importlib.util.find_spec(package_name)
assert spec is not None, "An import spec was not found for the package."
loader = spec.loader
assert loader is not None, "A loader was not found for the package."
self._loader = loader
self._archive = None
template_root = None
if isinstance(loader, zipimport.zipimporter):
self._archive = loader.archive
pkgdir = next(iter(spec.submodule_search_locations)) # type: ignore
template_root = os.path.join(pkgdir, package_path).rstrip(os.path.sep)
else:
roots: t.List[str] = []
# One element for regular packages, multiple for namespace
# packages, or None for single module file.
if spec.submodule_search_locations:
roots.extend(spec.submodule_search_locations)
# A single module file, use the parent directory instead.
elif spec.origin is not None:
roots.append(os.path.dirname(spec.origin))
for root in roots:
root = os.path.join(root, package_path)
if os.path.isdir(root):
template_root = root
break
if template_root is None:
raise ValueError(
f"The {package_name!r} package was not installed in a"
" way that PackageLoader understands."
)
self._template_root = template_root
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, str, t.Optional[t.Callable[[], bool]]]:
# Use posixpath even on Windows to avoid "drive:" or UNC
# segments breaking out of the search directory. Use normpath to
# convert Windows altsep to sep.
p = os.path.normpath(
posixpath.join(self._template_root, *split_template_path(template))
)
up_to_date: t.Optional[t.Callable[[], bool]]
if self._archive is None:
# Package is a directory.
if not os.path.isfile(p):
raise TemplateNotFound(template)
with open(p, "rb") as f:
source = f.read()
mtime = os.path.getmtime(p)
def up_to_date() -> bool:
return os.path.isfile(p) and os.path.getmtime(p) == mtime
else:
# Package is a zip file.
try:
source = self._loader.get_data(p) # type: ignore
except OSError as e:
raise TemplateNotFound(template) from e
# Could use the zip's mtime for all template mtimes, but
# would need to safely reload the module if it's out of
# date, so just report it as always current.
up_to_date = None
return source.decode(self.encoding), p, up_to_date
def list_templates(self) -> t.List[str]:
results: t.List[str] = []
if self._archive is None:
# Package is a directory.
offset = len(self._template_root)
for dirpath, _, filenames in os.walk(self._template_root):
dirpath = dirpath[offset:].lstrip(os.path.sep)
results.extend(
os.path.join(dirpath, name).replace(os.path.sep, "/")
for name in filenames
)
else:
if not hasattr(self._loader, "_files"):
raise TypeError(
"This zip import does not have the required"
" metadata to list templates."
)
# Package is a zip file.
prefix = (
self._template_root[len(self._archive) :].lstrip(os.path.sep)
+ os.path.sep
)
offset = len(prefix)
for name in self._loader._files.keys(): # type: ignore
# Find names under the templates directory that aren't directories.
if name.startswith(prefix) and name[-1] != os.path.sep:
results.append(name[offset:].replace(os.path.sep, "/"))
results.sort()
return results
class DictLoader(BaseLoader):
"""Loads a template from a Python dict mapping template names to
template source. This loader is useful for unittesting:
>>> loader = DictLoader({'index.html': 'source here'})
    Because auto reloading is rarely useful for in-memory templates, it is disabled by default.
"""
def __init__(self, mapping: t.Mapping[str, str]) -> None:
self.mapping = mapping
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, None, t.Callable[[], bool]]:
if template in self.mapping:
source = self.mapping[template]
return source, None, lambda: source == self.mapping.get(template)
raise TemplateNotFound(template)
def list_templates(self) -> t.List[str]:
return sorted(self.mapping)
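The interesting detail in `DictLoader.get_source` is the `uptodate` closure, which captures the source at load time and reports stale as soon as the mapping entry changes. A standalone mirror of that pattern (hypothetical name `dict_get_source`):

```python
def dict_get_source(mapping, template):
    # Mirrors DictLoader.get_source: no filename (None), and an uptodate
    # closure that compares the captured source against the live mapping.
    if template not in mapping:
        raise KeyError(template)  # Jinja raises TemplateNotFound here
    source = mapping[template]
    return source, None, lambda: source == mapping.get(template)

templates = {"index.html": "Hello"}
src, filename, uptodate = dict_get_source(templates, "index.html")
```

Mutating the dict after loading flips `uptodate()` to `False`, which is what triggers a recompile when auto reloading is enabled.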
class FunctionLoader(BaseLoader):
"""A loader that is passed a function which does the loading. The
function receives the name of the template and has to return either
a string with the template source, a tuple in the form ``(source,
filename, uptodatefunc)`` or `None` if the template does not exist.
>>> def load_template(name):
... if name == 'index.html':
... return '...'
...
>>> loader = FunctionLoader(load_template)
The `uptodatefunc` is a function that is called if autoreload is enabled
and has to return `True` if the template is still up to date. For more
details have a look at :meth:`BaseLoader.get_source` which has the same
return value.
"""
def __init__(
self,
load_func: t.Callable[
[str],
t.Optional[
t.Union[
str, t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]
]
],
],
) -> None:
self.load_func = load_func
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]:
rv = self.load_func(template)
if rv is None:
raise TemplateNotFound(template)
if isinstance(rv, str):
return rv, None, None
return rv
class PrefixLoader(BaseLoader):
"""A loader that is passed a dict of loaders where each loader is bound
    to a prefix. The prefix is delimited from the template name by a slash by
    default; this can be changed by setting the `delimiter` argument to
something else::
loader = PrefixLoader({
'app1': PackageLoader('mypackage.app1'),
'app2': PackageLoader('mypackage.app2')
})
By loading ``'app1/index.html'`` the file from the app1 package is loaded,
by loading ``'app2/index.html'`` the file from the second.
"""
def __init__(
self, mapping: t.Mapping[str, BaseLoader], delimiter: str = "/"
) -> None:
self.mapping = mapping
self.delimiter = delimiter
def get_loader(self, template: str) -> t.Tuple[BaseLoader, str]:
try:
prefix, name = template.split(self.delimiter, 1)
loader = self.mapping[prefix]
except (ValueError, KeyError) as e:
raise TemplateNotFound(template) from e
return loader, name
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]:
loader, name = self.get_loader(template)
try:
return loader.get_source(environment, name)
except TemplateNotFound as e:
# re-raise the exception with the correct filename here.
# (the one that includes the prefix)
raise TemplateNotFound(template) from e
@internalcode
def load(
self,
environment: "Environment",
name: str,
globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
) -> "Template":
loader, local_name = self.get_loader(name)
try:
return loader.load(environment, local_name, globals)
except TemplateNotFound as e:
# re-raise the exception with the correct filename here.
# (the one that includes the prefix)
raise TemplateNotFound(name) from e
def list_templates(self) -> t.List[str]:
result = []
for prefix, loader in self.mapping.items():
for template in loader.list_templates():
result.append(prefix + self.delimiter + template)
return result
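The delegation rule in `get_loader` is a single `split(delimiter, 1)`: the first delimiter separates the prefix (which selects a loader) from the rest of the name. A stdlib-only mirror (hypothetical name `split_prefix`, with plain strings standing in for loaders):

```python
def split_prefix(mapping, template, delimiter="/"):
    # Mirrors PrefixLoader.get_loader: both a missing delimiter
    # (ValueError from unpacking) and an unknown prefix (KeyError)
    # collapse into a single "not found" error.
    try:
        prefix, name = template.split(delimiter, 1)
        target = mapping[prefix]
    except (ValueError, KeyError) as e:
        raise LookupError(template) from e
    return target, name

apps = {"app1": "loader-one", "app2": "loader-two"}
```

Only the first delimiter is consumed, so nested names like `app1/pages/index.html` keep their remaining slashes for the delegated loader.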
class ChoiceLoader(BaseLoader):
"""This loader works like the `PrefixLoader` just that no prefix is
specified. If a template could not be found by one loader the next one
is tried.
>>> loader = ChoiceLoader([
... FileSystemLoader('/path/to/user/templates'),
... FileSystemLoader('/path/to/system/templates')
... ])
This is useful if you want to allow users to override builtin templates
from a different location.
"""
def __init__(self, loaders: t.Sequence[BaseLoader]) -> None:
self.loaders = loaders
def get_source(
self, environment: "Environment", template: str
) -> t.Tuple[str, t.Optional[str], t.Optional[t.Callable[[], bool]]]:
for loader in self.loaders:
try:
return loader.get_source(environment, template)
except TemplateNotFound:
pass
raise TemplateNotFound(template)
@internalcode
def load(
self,
environment: "Environment",
name: str,
globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
) -> "Template":
for loader in self.loaders:
try:
return loader.load(environment, name, globals)
except TemplateNotFound:
pass
raise TemplateNotFound(name)
def list_templates(self) -> t.List[str]:
found = set()
for loader in self.loaders:
found.update(loader.list_templates())
return sorted(found)
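The fallback loop above swallows every miss and only fails after all loaders have been tried. A stdlib-only mirror of that control flow (hypothetical name `first_match`, with dicts standing in for loaders):

```python
def first_match(loaders, template):
    # Mirrors ChoiceLoader: ask each loader in order, swallow misses,
    # and raise only after every loader has failed.
    for mapping in loaders:
        try:
            return mapping[template]
        except KeyError:
            pass
    raise KeyError(template)

user = {"base.html": "user version"}
system = {"base.html": "system version", "extra.html": "system only"}
```

Ordering matters: the user "loader" shadows the system one for names they share, while names only the system knows still resolve.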
class _TemplateModule(ModuleType):
"""Like a normal module but with support for weak references"""
class ModuleLoader(BaseLoader):
"""This loader loads templates from precompiled templates.
Example usage:
>>> loader = ChoiceLoader([
... ModuleLoader('/path/to/compiled/templates'),
... FileSystemLoader('/path/to/templates')
... ])
Templates can be precompiled with :meth:`Environment.compile_templates`.
"""
has_source_access = False
def __init__(
self, path: t.Union[str, os.PathLike, t.Sequence[t.Union[str, os.PathLike]]]
) -> None:
package_name = f"_jinja2_module_templates_{id(self):x}"
# create a fake module that looks for the templates in the
# path given.
mod = _TemplateModule(package_name)
if not isinstance(path, abc.Iterable) or isinstance(path, str):
path = [path]
mod.__path__ = [os.fspath(p) for p in path]
sys.modules[package_name] = weakref.proxy(
mod, lambda x: sys.modules.pop(package_name, None)
)
# the only strong reference, the sys.modules entry is weak
# so that the garbage collector can remove it once the
# loader that created it goes out of business.
self.module = mod
self.package_name = package_name
@staticmethod
def get_template_key(name: str) -> str:
return "tmpl_" + sha1(name.encode("utf-8")).hexdigest()
@staticmethod
def get_module_filename(name: str) -> str:
return ModuleLoader.get_template_key(name) + ".py"
@internalcode
def load(
self,
environment: "Environment",
name: str,
globals: t.Optional[t.MutableMapping[str, t.Any]] = None,
) -> "Template":
key = self.get_template_key(name)
module = f"{self.package_name}.{key}"
mod = getattr(self.module, module, None)
if mod is None:
try:
mod = __import__(module, None, None, ["root"])
except ImportError as e:
raise TemplateNotFound(name) from e
# remove the entry from sys.modules, we only want the attribute
# on the module object we have stored on the loader.
sys.modules.pop(module, None)
if globals is None:
globals = {}
return environment.template_class.from_module_dict(
environment, mod.__dict__, globals
)

"""Functions that expose information about templates that might be
interesting for introspection.
"""
import typing as t
from . import nodes
from .compiler import CodeGenerator
from .compiler import Frame
if t.TYPE_CHECKING:
from .environment import Environment
class TrackingCodeGenerator(CodeGenerator):
"""We abuse the code generator for introspection."""
def __init__(self, environment: "Environment") -> None:
super().__init__(environment, "<introspection>", "<introspection>")
self.undeclared_identifiers: t.Set[str] = set()
def write(self, x: str) -> None:
"""Don't write."""
def enter_frame(self, frame: Frame) -> None:
"""Remember all undeclared identifiers."""
super().enter_frame(frame)
for _, (action, param) in frame.symbols.loads.items():
if action == "resolve" and param not in self.environment.globals:
self.undeclared_identifiers.add(param)
def find_undeclared_variables(ast: nodes.Template) -> t.Set[str]:
"""Returns a set of all variables in the AST that will be looked up from
the context at runtime. Because at compile time it's not known which
variables will be used depending on the path the execution takes at
runtime, all variables are returned.
>>> from jinja2 import Environment, meta
>>> env = Environment()
>>> ast = env.parse('{% set foo = 42 %}{{ bar + foo }}')
>>> meta.find_undeclared_variables(ast) == {'bar'}
True
.. admonition:: Implementation
Internally the code generator is used for finding undeclared variables.
        This is good to know because the code generator might raise a
        :exc:`TemplateAssertionError` during compilation, which means this
        function can currently raise that exception as well.
"""
codegen = TrackingCodeGenerator(ast.environment) # type: ignore
codegen.visit(ast)
return codegen.undeclared_identifiers
_ref_types = (nodes.Extends, nodes.FromImport, nodes.Import, nodes.Include)
_RefType = t.Union[nodes.Extends, nodes.FromImport, nodes.Import, nodes.Include]
def find_referenced_templates(ast: nodes.Template) -> t.Iterator[t.Optional[str]]:
"""Finds all the referenced templates from the AST. This will return an
iterator over all the hardcoded template extensions, inclusions and
imports. If dynamic inheritance or inclusion is used, `None` will be
yielded.
>>> from jinja2 import Environment, meta
>>> env = Environment()
>>> ast = env.parse('{% extends "layout.html" %}{% include helper %}')
>>> list(meta.find_referenced_templates(ast))
['layout.html', None]
This function is useful for dependency tracking. For example if you want
to rebuild parts of the website after a layout template has changed.
"""
template_name: t.Any
for node in ast.find_all(_ref_types):
template: nodes.Expr = node.template # type: ignore
if not isinstance(template, nodes.Const):
# a tuple with some non consts in there
if isinstance(template, (nodes.Tuple, nodes.List)):
for template_name in template.items:
# something const, only yield the strings and ignore
# non-string consts that really just make no sense
if isinstance(template_name, nodes.Const):
if isinstance(template_name.value, str):
yield template_name.value
# something dynamic in there
else:
yield None
# something dynamic we don't know about here
else:
yield None
continue
# constant is a basestring, direct template name
if isinstance(template.value, str):
yield template.value
# a tuple or list (latter *should* not happen) made of consts,
# yield the consts that are strings. We could warn here for
# non string values
elif isinstance(node, nodes.Include) and isinstance(
template.value, (tuple, list)
):
for template_name in template.value:
if isinstance(template_name, str):
yield template_name
# something else we don't care about, we could warn here
else:
yield None

import typing as t
from ast import literal_eval
from ast import parse
from itertools import chain
from itertools import islice
from types import GeneratorType
from . import nodes
from .compiler import CodeGenerator
from .compiler import Frame
from .compiler import has_safe_repr
from .environment import Environment
from .environment import Template
def native_concat(values: t.Iterable[t.Any]) -> t.Optional[t.Any]:
"""Return a native Python type from the list of compiled nodes. If
the result is a single node, its value is returned. Otherwise, the
nodes are concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise,
the string is returned.
:param values: Iterable of outputs to concatenate.
"""
head = list(islice(values, 2))
if not head:
return None
if len(head) == 1:
raw = head[0]
if not isinstance(raw, str):
return raw
else:
if isinstance(values, GeneratorType):
values = chain(head, values)
raw = "".join([str(v) for v in values])
try:
return literal_eval(
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
parse(raw, mode="eval")
)
except (ValueError, SyntaxError, MemoryError):
return raw
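The behavior of `native_concat` is easiest to see with concrete inputs. This is a condensed, stdlib-only mirror (hypothetical name `concat_native`; it drops the generator/`islice` optimization but keeps the same decision tree):

```python
from ast import literal_eval, parse

def concat_native(values):
    # Condensed native_concat: a single non-string value passes through
    # unchanged; otherwise join everything as text and try to parse the
    # result as a Python literal, falling back to the raw string.
    values = list(values)
    if not values:
        return None
    if len(values) == 1 and not isinstance(values[0], str):
        return values[0]
    raw = "".join(str(v) for v in values)
    try:
        return literal_eval(parse(raw, mode="eval"))
    except (ValueError, SyntaxError, MemoryError):
        return raw
```

So adjacent output nodes `"1"` and `"2"` concatenate to the integer `12`, a rendered literal like `[1, 2]` comes back as a real list, and anything unparseable stays a string.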
class NativeCodeGenerator(CodeGenerator):
"""A code generator which renders Python types by not adding
``str()`` around output nodes.
"""
@staticmethod
def _default_finalize(value: t.Any) -> t.Any:
return value
def _output_const_repr(self, group: t.Iterable[t.Any]) -> str:
return repr("".join([str(v) for v in group]))
def _output_child_to_const(
self, node: nodes.Expr, frame: Frame, finalize: CodeGenerator._FinalizeInfo
) -> t.Any:
const = node.as_const(frame.eval_ctx)
if not has_safe_repr(const):
raise nodes.Impossible()
if isinstance(node, nodes.TemplateData):
return const
return finalize.const(const) # type: ignore
def _output_child_pre(
self, node: nodes.Expr, frame: Frame, finalize: CodeGenerator._FinalizeInfo
) -> None:
if finalize.src is not None:
self.write(finalize.src)
def _output_child_post(
self, node: nodes.Expr, frame: Frame, finalize: CodeGenerator._FinalizeInfo
) -> None:
if finalize.src is not None:
self.write(")")
class NativeEnvironment(Environment):
"""An environment that renders templates to native Python types."""
code_generator_class = NativeCodeGenerator
concat = staticmethod(native_concat) # type: ignore
class NativeTemplate(Template):
environment_class = NativeEnvironment
def render(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
"""Render the template to produce a native Python type. If the
result is a single node, its value is returned. Otherwise, the
nodes are concatenated as strings. If the result can be parsed
with :func:`ast.literal_eval`, the parsed value is returned.
Otherwise, the string is returned.
"""
ctx = self.new_context(dict(*args, **kwargs))
try:
return self.environment_class.concat( # type: ignore
self.root_render_func(ctx) # type: ignore
)
except Exception:
return self.environment.handle_exception()
async def render_async(self, *args: t.Any, **kwargs: t.Any) -> t.Any:
if not self.environment.is_async:
raise RuntimeError(
"The environment was not created with async mode enabled."
)
ctx = self.new_context(dict(*args, **kwargs))
try:
return self.environment_class.concat( # type: ignore
[n async for n in self.root_render_func(ctx)] # type: ignore
)
except Exception:
return self.environment.handle_exception()
NativeEnvironment.template_class = NativeTemplate

"""The optimizer tries to constant fold expressions and modify the AST
in place so that it should be faster to evaluate.
Because the AST does not contain all the scoping information and the
compiler has to find that out, we cannot do all the optimizations we
want. For example, loop unrolling doesn't work because unrolled loops
would have a different scope. The solution would be a second syntax tree
that stored the scoping rules.
"""
import typing as t
from . import nodes
from .visitor import NodeTransformer
if t.TYPE_CHECKING:
from .environment import Environment
def optimize(node: nodes.Node, environment: "Environment") -> nodes.Node:
"""The context hint can be used to perform an static optimization
based on the context given."""
optimizer = Optimizer(environment)
return t.cast(nodes.Node, optimizer.visit(node))
class Optimizer(NodeTransformer):
def __init__(self, environment: "t.Optional[Environment]") -> None:
self.environment = environment
def generic_visit(
self, node: nodes.Node, *args: t.Any, **kwargs: t.Any
) -> nodes.Node:
node = super().generic_visit(node, *args, **kwargs)
# Do constant folding. Some other nodes besides Expr have
# as_const, but folding them causes errors later on.
if isinstance(node, nodes.Expr):
try:
return nodes.Const.from_untrusted(
node.as_const(args[0] if args else None),
lineno=node.lineno,
environment=self.environment,
)
except nodes.Impossible:
pass
return node

"""A sandbox layer that ensures unsafe operations cannot be performed.
Useful when the template itself comes from an untrusted source.
"""
import operator
import types
import typing as t
from _string import formatter_field_name_split # type: ignore
from collections import abc
from collections import deque
from string import Formatter
from markupsafe import EscapeFormatter
from markupsafe import Markup
from .environment import Environment
from .exceptions import SecurityError
from .runtime import Context
from .runtime import Undefined
F = t.TypeVar("F", bound=t.Callable[..., t.Any])
#: maximum number of items a range may produce
MAX_RANGE = 100000
#: Unsafe function attributes.
UNSAFE_FUNCTION_ATTRIBUTES: t.Set[str] = set()
#: Unsafe method attributes. Function attributes are unsafe for methods too.
UNSAFE_METHOD_ATTRIBUTES: t.Set[str] = set()
#: unsafe generator attributes.
UNSAFE_GENERATOR_ATTRIBUTES = {"gi_frame", "gi_code"}
#: unsafe attributes on coroutines
UNSAFE_COROUTINE_ATTRIBUTES = {"cr_frame", "cr_code"}
#: unsafe attributes on async generators
UNSAFE_ASYNC_GENERATOR_ATTRIBUTES = {"ag_code", "ag_frame"}
_mutable_spec: t.Tuple[t.Tuple[t.Type, t.FrozenSet[str]], ...] = (
(
abc.MutableSet,
frozenset(
[
"add",
"clear",
"difference_update",
"discard",
"pop",
"remove",
"symmetric_difference_update",
"update",
]
),
),
(
abc.MutableMapping,
frozenset(["clear", "pop", "popitem", "setdefault", "update"]),
),
(
abc.MutableSequence,
frozenset(["append", "reverse", "insert", "sort", "extend", "remove"]),
),
(
deque,
frozenset(
[
"append",
"appendleft",
"clear",
"extend",
"extendleft",
"pop",
"popleft",
"remove",
"rotate",
]
),
),
)
def inspect_format_method(callable: t.Callable) -> t.Optional[str]:
if not isinstance(
callable, (types.MethodType, types.BuiltinMethodType)
) or callable.__name__ not in ("format", "format_map"):
return None
obj = callable.__self__
if isinstance(obj, str):
return obj
return None
def safe_range(*args: int) -> range:
"""A range that can't generate ranges with a length of more than
MAX_RANGE items.
"""
rng = range(*args)
if len(rng) > MAX_RANGE:
raise OverflowError(
"Range too big. The sandbox blocks ranges larger than"
f" MAX_RANGE ({MAX_RANGE})."
)
return rng
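The guard in `safe_range` relies on `len(range(...))` being O(1), so the check costs nothing even for enormous ranges. A parameterized mirror of the same idea (hypothetical name `bounded_range`):

```python
def bounded_range(limit, *args):
    # Same guard as safe_range: build the range object lazily (cheap),
    # then check its length before handing it to template code.
    rng = range(*args)
    if len(rng) > limit:
        raise OverflowError(
            f"range of {len(rng)} items exceeds limit {limit}"
        )
    return rng
```

Because `range` itself is lazy, `bounded_range(10, 10**9)` fails fast without ever materializing a billion items.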
def unsafe(f: F) -> F:
"""Marks a function or method as unsafe.
    .. code-block:: python
@unsafe
def delete(self):
pass
"""
f.unsafe_callable = True # type: ignore
return f
def is_internal_attribute(obj: t.Any, attr: str) -> bool:
"""Test if the attribute given is an internal python attribute. For
example this function returns `True` for the `func_code` attribute of
python objects. This is useful if the environment method
:meth:`~SandboxedEnvironment.is_safe_attribute` is overridden.
>>> from jinja2.sandbox import is_internal_attribute
>>> is_internal_attribute(str, "mro")
True
>>> is_internal_attribute(str, "upper")
False
"""
if isinstance(obj, types.FunctionType):
if attr in UNSAFE_FUNCTION_ATTRIBUTES:
return True
elif isinstance(obj, types.MethodType):
if attr in UNSAFE_FUNCTION_ATTRIBUTES or attr in UNSAFE_METHOD_ATTRIBUTES:
return True
elif isinstance(obj, type):
if attr == "mro":
return True
elif isinstance(obj, (types.CodeType, types.TracebackType, types.FrameType)):
return True
elif isinstance(obj, types.GeneratorType):
if attr in UNSAFE_GENERATOR_ATTRIBUTES:
return True
elif hasattr(types, "CoroutineType") and isinstance(obj, types.CoroutineType):
if attr in UNSAFE_COROUTINE_ATTRIBUTES:
return True
elif hasattr(types, "AsyncGeneratorType") and isinstance(
obj, types.AsyncGeneratorType
):
if attr in UNSAFE_ASYNC_GENERATOR_ATTRIBUTES:
return True
return attr.startswith("__")
def modifies_known_mutable(obj: t.Any, attr: str) -> bool:
"""This function checks if an attribute on a builtin mutable object
(list, dict, set or deque) or the corresponding ABCs would modify it
if called.
>>> modifies_known_mutable({}, "clear")
True
>>> modifies_known_mutable({}, "keys")
False
>>> modifies_known_mutable([], "append")
True
>>> modifies_known_mutable([], "index")
False
If called with an unsupported object, ``False`` is returned.
>>> modifies_known_mutable("foo", "upper")
False
"""
for typespec, unsafe in _mutable_spec:
if isinstance(obj, typespec):
return attr in unsafe
return False
class SandboxedEnvironment(Environment):
"""The sandboxed environment. It works like the regular environment but
tells the compiler to generate sandboxed code. Additionally subclasses of
this environment may override the methods that tell the runtime what
attributes or functions are safe to access.
If the template tries to access insecure code a :exc:`SecurityError` is
    raised. However, other exceptions may also occur during rendering, so
    the caller has to ensure that all exceptions are caught.
"""
sandboxed = True
#: default callback table for the binary operators. A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`binop_table`
default_binop_table: t.Dict[str, t.Callable[[t.Any, t.Any], t.Any]] = {
"+": operator.add,
"-": operator.sub,
"*": operator.mul,
"/": operator.truediv,
"//": operator.floordiv,
"**": operator.pow,
"%": operator.mod,
}
#: default callback table for the unary operators. A copy of this is
#: available on each instance of a sandboxed environment as
#: :attr:`unop_table`
default_unop_table: t.Dict[str, t.Callable[[t.Any], t.Any]] = {
"+": operator.pos,
"-": operator.neg,
}
#: a set of binary operators that should be intercepted. Each operator
#: that is added to this set (empty by default) is delegated to the
#: :meth:`call_binop` method that will perform the operator. The default
#: operator callback is specified by :attr:`binop_table`.
#:
#: The following binary operators are interceptable:
#: ``//``, ``%``, ``+``, ``*``, ``-``, ``/``, and ``**``
#:
    #: The default operation from the operator table corresponds to the
#: builtin function. Intercepted calls are always slower than the native
#: operator call, so make sure only to intercept the ones you are
#: interested in.
#:
#: .. versionadded:: 2.6
intercepted_binops: t.FrozenSet[str] = frozenset()
#: a set of unary operators that should be intercepted. Each operator
#: that is added to this set (empty by default) is delegated to the
#: :meth:`call_unop` method that will perform the operator. The default
#: operator callback is specified by :attr:`unop_table`.
#:
#: The following unary operators are interceptable: ``+``, ``-``
#:
    #: The default operation from the operator table corresponds to the
#: builtin function. Intercepted calls are always slower than the native
#: operator call, so make sure only to intercept the ones you are
#: interested in.
#:
#: .. versionadded:: 2.6
intercepted_unops: t.FrozenSet[str] = frozenset()
def __init__(self, *args: t.Any, **kwargs: t.Any) -> None:
super().__init__(*args, **kwargs)
self.globals["range"] = safe_range
self.binop_table = self.default_binop_table.copy()
self.unop_table = self.default_unop_table.copy()
def is_safe_attribute(self, obj: t.Any, attr: str, value: t.Any) -> bool:
"""The sandboxed environment will call this method to check if the
attribute of an object is safe to access. Per default all attributes
starting with an underscore are considered private as well as the
special attributes of internal python objects as returned by the
:func:`is_internal_attribute` function.
"""
return not (attr.startswith("_") or is_internal_attribute(obj, attr))
def is_safe_callable(self, obj: t.Any) -> bool:
"""Check if an object is safely callable. By default callables
are considered safe unless decorated with :func:`unsafe`.
This also recognizes the Django convention of setting
``func.alters_data = True``.
"""
return not (
getattr(obj, "unsafe_callable", False) or getattr(obj, "alters_data", False)
)
def call_binop(
self, context: Context, operator: str, left: t.Any, right: t.Any
) -> t.Any:
"""For intercepted binary operator calls (:meth:`intercepted_binops`)
this function is executed instead of the builtin operator. This can
be used to fine tune the behavior of certain operators.
.. versionadded:: 2.6
"""
return self.binop_table[operator](left, right)
def call_unop(self, context: Context, operator: str, arg: t.Any) -> t.Any:
"""For intercepted unary operator calls (:meth:`intercepted_unops`)
this function is executed instead of the builtin operator. This can
be used to fine tune the behavior of certain operators.
.. versionadded:: 2.6
"""
return self.unop_table[operator](arg)
def getitem(
self, obj: t.Any, argument: t.Union[str, t.Any]
) -> t.Union[t.Any, Undefined]:
"""Subscribe an object from sandboxed code."""
try:
return obj[argument]
except (TypeError, LookupError):
if isinstance(argument, str):
try:
attr = str(argument)
except Exception:
pass
else:
try:
value = getattr(obj, attr)
except AttributeError:
pass
else:
if self.is_safe_attribute(obj, argument, value):
return value
return self.unsafe_undefined(obj, argument)
return self.undefined(obj=obj, name=argument)
def getattr(self, obj: t.Any, attribute: str) -> t.Union[t.Any, Undefined]:
"""Subscribe an object from sandboxed code and prefer the
attribute. The attribute passed *must* be a bytestring.
"""
try:
value = getattr(obj, attribute)
except AttributeError:
try:
return obj[attribute]
except (TypeError, LookupError):
pass
else:
if self.is_safe_attribute(obj, attribute, value):
return value
return self.unsafe_undefined(obj, attribute)
return self.undefined(obj=obj, name=attribute)
def unsafe_undefined(self, obj: t.Any, attribute: str) -> Undefined:
"""Return an undefined object for unsafe attributes."""
return self.undefined(
f"access to attribute {attribute!r} of"
f" {type(obj).__name__!r} object is unsafe.",
name=attribute,
obj=obj,
exc=SecurityError,
)
def format_string(
self,
s: str,
args: t.Tuple[t.Any, ...],
kwargs: t.Dict[str, t.Any],
format_func: t.Optional[t.Callable] = None,
) -> str:
"""If a format call is detected, then this is routed through this
method so that our safety sandbox can be used for it.
"""
formatter: SandboxedFormatter
if isinstance(s, Markup):
formatter = SandboxedEscapeFormatter(self, escape=s.escape)
else:
formatter = SandboxedFormatter(self)
if format_func is not None and format_func.__name__ == "format_map":
if len(args) != 1 or kwargs:
raise TypeError(
"format_map() takes exactly one argument"
f" ({len(args) + len(kwargs)} given)"
)
kwargs = args[0]
args = ()
rv = formatter.vformat(s, args, kwargs)
return type(s)(rv)
def call(
__self, # noqa: B902
__context: Context,
__obj: t.Any,
*args: t.Any,
**kwargs: t.Any,
) -> t.Any:
"""Call an object from sandboxed code."""
fmt = inspect_format_method(__obj)
if fmt is not None:
return __self.format_string(fmt, args, kwargs, __obj)
# the double prefixes are to avoid double keyword argument
# errors when proxying the call.
if not __self.is_safe_callable(__obj):
raise SecurityError(f"{__obj!r} is not safely callable")
return __context.call(__obj, *args, **kwargs)
class ImmutableSandboxedEnvironment(SandboxedEnvironment):
"""Works exactly like the regular `SandboxedEnvironment` but does not
permit modifications on the builtin mutable objects `list`, `set`, and
`dict` by using the :func:`modifies_known_mutable` function.
"""
def is_safe_attribute(self, obj: t.Any, attr: str, value: t.Any) -> bool:
if not super().is_safe_attribute(obj, attr, value):
return False
return not modifies_known_mutable(obj, attr)
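A brief sketch of what this override buys you, assuming `jinja2` is installed: in the immutable variant, looking up a known mutating method such as `list.append` yields an undefined that raises `SecurityError` when called.

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment, SecurityError

env = ImmutableSandboxedEnvironment()

try:
    # list.append mutates a known mutable type, so the sandbox blocks it
    env.from_string("{{ items.append(4) }}").render(items=[1, 2, 3])
    blocked = False
except SecurityError:
    blocked = True
```

The same template renders without error in the plain `SandboxedEnvironment`, which only filters underscore-prefixed and internal attributes.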
class SandboxedFormatter(Formatter):
def __init__(self, env: Environment, **kwargs: t.Any) -> None:
self._env = env
super().__init__(**kwargs)
def get_field(
self, field_name: str, args: t.Sequence[t.Any], kwargs: t.Mapping[str, t.Any]
) -> t.Tuple[t.Any, str]:
first, rest = formatter_field_name_split(field_name)
obj = self.get_value(first, args, kwargs)
for is_attr, i in rest:
if is_attr:
obj = self._env.getattr(obj, i)
else:
obj = self._env.getitem(obj, i)
return obj, first
class SandboxedEscapeFormatter(SandboxedFormatter, EscapeFormatter):
pass
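The interception hooks documented on `intercepted_binops` above can be exercised by subclassing; a sketch (the subclass name and the cap of 100 are hypothetical) that restricts the exponent of ``**`` in sandboxed templates:

```python
from jinja2.sandbox import SandboxedEnvironment, SecurityError


class PowCappingEnvironment(SandboxedEnvironment):
    """Hypothetical subclass that routes ** through call_binop."""

    intercepted_binops = frozenset(["**"])

    def call_binop(self, context, operator, left, right):
        # reject very large exponents before delegating to the table
        if operator == "**" and right > 100:
            raise SecurityError("exponent too large for sandboxed template")
        return super().call_binop(context, operator, left, right)


env = PowCappingEnvironment()
small = env.from_string("{{ 2 ** 8 }}").render()

try:
    env.from_string("{{ 2 ** 10000 }}").render()
    capped = False
except SecurityError:
    capped = True
```

Only operators listed in `intercepted_binops` take this slower path; everything else still uses the native operator.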


@@ -1,255 +0,0 @@
"""Built-in template tests used with the ``is`` operator."""
import operator
import typing as t
from collections import abc
from numbers import Number
from .runtime import Undefined
from .utils import pass_environment
if t.TYPE_CHECKING:
from .environment import Environment
def test_odd(value: int) -> bool:
"""Return true if the variable is odd."""
return value % 2 == 1
def test_even(value: int) -> bool:
"""Return true if the variable is even."""
return value % 2 == 0
def test_divisibleby(value: int, num: int) -> bool:
"""Check if a variable is divisible by a number."""
return value % num == 0
def test_defined(value: t.Any) -> bool:
"""Return true if the variable is defined:
.. sourcecode:: jinja
{% if variable is defined %}
value of variable: {{ variable }}
{% else %}
variable is not defined
{% endif %}
See the :func:`default` filter for a simple way to set undefined
variables.
"""
return not isinstance(value, Undefined)
def test_undefined(value: t.Any) -> bool:
"""Like :func:`defined` but the other way round."""
return isinstance(value, Undefined)
@pass_environment
def test_filter(env: "Environment", value: str) -> bool:
"""Check if a filter exists by name. Useful if a filter may be
optionally available.
.. code-block:: jinja
{% if 'markdown' is filter %}
{{ value | markdown }}
{% else %}
{{ value }}
{% endif %}
.. versionadded:: 3.0
"""
return value in env.filters
@pass_environment
def test_test(env: "Environment", value: str) -> bool:
"""Check if a test exists by name. Useful if a test may be
optionally available.
.. code-block:: jinja
{% if 'loud' is test %}
{% if value is loud %}
{{ value|upper }}
{% else %}
{{ value|lower }}
{% endif %}
{% else %}
{{ value }}
{% endif %}
.. versionadded:: 3.0
"""
return value in env.tests
def test_none(value: t.Any) -> bool:
"""Return true if the variable is none."""
return value is None
def test_boolean(value: t.Any) -> bool:
"""Return true if the object is a boolean value.
.. versionadded:: 2.11
"""
return value is True or value is False
def test_false(value: t.Any) -> bool:
"""Return true if the object is False.
.. versionadded:: 2.11
"""
return value is False
def test_true(value: t.Any) -> bool:
"""Return true if the object is True.
.. versionadded:: 2.11
"""
return value is True
# NOTE: The existing 'number' test matches booleans and floats
def test_integer(value: t.Any) -> bool:
"""Return true if the object is an integer.
.. versionadded:: 2.11
"""
return isinstance(value, int) and value is not True and value is not False
# NOTE: The existing 'number' test matches booleans and integers
def test_float(value: t.Any) -> bool:
"""Return true if the object is a float.
.. versionadded:: 2.11
"""
return isinstance(value, float)
def test_lower(value: str) -> bool:
"""Return true if the variable is lowercased."""
return str(value).islower()
def test_upper(value: str) -> bool:
"""Return true if the variable is uppercased."""
return str(value).isupper()
def test_string(value: t.Any) -> bool:
"""Return true if the object is a string."""
return isinstance(value, str)
def test_mapping(value: t.Any) -> bool:
"""Return true if the object is a mapping (dict etc.).
.. versionadded:: 2.6
"""
return isinstance(value, abc.Mapping)
def test_number(value: t.Any) -> bool:
"""Return true if the variable is a number."""
return isinstance(value, Number)
def test_sequence(value: t.Any) -> bool:
"""Return true if the variable is a sequence. Sequences are variables
that are iterable.
"""
try:
len(value)
value.__getitem__
except Exception:
return False
return True
def test_sameas(value: t.Any, other: t.Any) -> bool:
"""Check if an object points to the same memory address than another
object:
.. sourcecode:: jinja
{% if foo.attribute is sameas false %}
the foo attribute really is the `False` singleton
{% endif %}
"""
return value is other
def test_iterable(value: t.Any) -> bool:
"""Check if it's possible to iterate over an object."""
try:
iter(value)
except TypeError:
return False
return True
def test_escaped(value: t.Any) -> bool:
"""Check if the value is escaped."""
return hasattr(value, "__html__")
def test_in(value: t.Any, seq: t.Container) -> bool:
"""Check if value is in seq.
.. versionadded:: 2.10
"""
return value in seq
TESTS = {
"odd": test_odd,
"even": test_even,
"divisibleby": test_divisibleby,
"defined": test_defined,
"undefined": test_undefined,
"filter": test_filter,
"test": test_test,
"none": test_none,
"boolean": test_boolean,
"false": test_false,
"true": test_true,
"integer": test_integer,
"float": test_float,
"lower": test_lower,
"upper": test_upper,
"string": test_string,
"mapping": test_mapping,
"number": test_number,
"sequence": test_sequence,
"iterable": test_iterable,
"callable": callable,
"sameas": test_sameas,
"escaped": test_escaped,
"in": test_in,
"==": operator.eq,
"eq": operator.eq,
"equalto": operator.eq,
"!=": operator.ne,
"ne": operator.ne,
">": operator.gt,
"gt": operator.gt,
"greaterthan": operator.gt,
"ge": operator.ge,
">=": operator.ge,
"<": operator.lt,
"lt": operator.lt,
"lessthan": operator.lt,
"<=": operator.le,
"le": operator.le,
}


@@ -1,755 +0,0 @@
import enum
import json
import os
import re
import typing as t
from collections import abc
from collections import deque
from random import choice
from random import randrange
from threading import Lock
from types import CodeType
from urllib.parse import quote_from_bytes
import markupsafe
if t.TYPE_CHECKING:
import typing_extensions as te
F = t.TypeVar("F", bound=t.Callable[..., t.Any])
# special singleton representing missing values for the runtime
missing: t.Any = type("MissingType", (), {"__repr__": lambda x: "missing"})()
internal_code: t.MutableSet[CodeType] = set()
concat = "".join
def pass_context(f: F) -> F:
"""Pass the :class:`~jinja2.runtime.Context` as the first argument
to the decorated function when called while rendering a template.
Can be used on functions, filters, and tests.
If only ``Context.eval_context`` is needed, use
:func:`pass_eval_context`. If only ``Context.environment`` is
needed, use :func:`pass_environment`.
.. versionadded:: 3.0.0
Replaces ``contextfunction`` and ``contextfilter``.
"""
f.jinja_pass_arg = _PassArg.context # type: ignore
return f
def pass_eval_context(f: F) -> F:
"""Pass the :class:`~jinja2.nodes.EvalContext` as the first argument
to the decorated function when called while rendering a template.
See :ref:`eval-context`.
Can be used on functions, filters, and tests.
If only ``EvalContext.environment`` is needed, use
:func:`pass_environment`.
.. versionadded:: 3.0.0
Replaces ``evalcontextfunction`` and ``evalcontextfilter``.
"""
f.jinja_pass_arg = _PassArg.eval_context # type: ignore
return f
def pass_environment(f: F) -> F:
"""Pass the :class:`~jinja2.Environment` as the first argument to
the decorated function when called while rendering a template.
Can be used on functions, filters, and tests.
.. versionadded:: 3.0.0
Replaces ``environmentfunction`` and ``environmentfilter``.
"""
f.jinja_pass_arg = _PassArg.environment # type: ignore
return f
class _PassArg(enum.Enum):
context = enum.auto()
eval_context = enum.auto()
environment = enum.auto()
@classmethod
def from_obj(cls, obj: F) -> t.Optional["_PassArg"]:
if hasattr(obj, "jinja_pass_arg"):
return obj.jinja_pass_arg # type: ignore
return None
def internalcode(f: F) -> F:
"""Marks the function as internally used"""
internal_code.add(f.__code__)
return f
def is_undefined(obj: t.Any) -> bool:
"""Check if the object passed is undefined. This does nothing more than
performing an instance check against :class:`Undefined` but looks nicer.
This can be used for custom filters or tests that want to react to
undefined variables. For example a custom default filter can look like
this::
def default(var, default=''):
if is_undefined(var):
return default
return var
"""
from .runtime import Undefined
return isinstance(obj, Undefined)
def consume(iterable: t.Iterable[t.Any]) -> None:
"""Consumes an iterable without doing anything with it."""
for _ in iterable:
pass
def clear_caches() -> None:
"""Jinja keeps internal caches for environments and lexers. These are
used so that Jinja doesn't have to recreate environments and lexers all
the time. Normally you don't have to care about that but if you are
measuring memory consumption you may want to clean the caches.
"""
from .environment import get_spontaneous_environment
from .lexer import _lexer_cache
get_spontaneous_environment.cache_clear()
_lexer_cache.clear()
def import_string(import_name: str, silent: bool = False) -> t.Any:
"""Imports an object based on a string. This is useful if you want to
use import paths as endpoints or something similar. An import path can
be specified either in dotted notation (``xml.sax.saxutils.escape``)
or with a colon as object delimiter (``xml.sax.saxutils:escape``).
If `silent` is True, the return value will be `None` if the import
fails.
:return: imported object
"""
try:
if ":" in import_name:
module, obj = import_name.split(":", 1)
elif "." in import_name:
module, _, obj = import_name.rpartition(".")
else:
return __import__(import_name)
return getattr(__import__(module, None, None, [obj]), obj)
except (ImportError, AttributeError):
if not silent:
raise
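Both notations described in the docstring resolve to the same object; a short sketch, assuming `jinja2` is installed:

```python
from jinja2.utils import import_string

# dotted and colon notation resolve to the same object
escape = import_string("xml.sax.saxutils:escape")
assert escape is import_string("xml.sax.saxutils.escape")
assert escape("<b>") == "&lt;b&gt;"

# silent=True swallows the ImportError and returns None
assert import_string("no.such.module", silent=True) is None
```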
def open_if_exists(filename: str, mode: str = "rb") -> t.Optional[t.IO]:
"""Returns a file descriptor for the filename if that file exists,
otherwise ``None``.
"""
if not os.path.isfile(filename):
return None
return open(filename, mode)
def object_type_repr(obj: t.Any) -> str:
"""Returns the name of the object's type. For some recognized
singletons the name of the object is returned instead. (For
example for `None` and `Ellipsis`).
"""
if obj is None:
return "None"
elif obj is Ellipsis:
return "Ellipsis"
cls = type(obj)
if cls.__module__ == "builtins":
return f"{cls.__name__} object"
return f"{cls.__module__}.{cls.__name__} object"
def pformat(obj: t.Any) -> str:
"""Format an object using :func:`pprint.pformat`."""
from pprint import pformat # type: ignore
return pformat(obj)
_http_re = re.compile(
r"""
^
(
(https?://|www\.) # scheme or www
(([\w%-]+\.)+)? # subdomain
(
[a-z]{2,63} # basic tld
|
xn--[\w%]{2,59} # idna tld
)
|
([\w%-]{2,63}\.)+ # basic domain
(com|net|int|edu|gov|org|info|mil) # basic tld
|
(https?://) # scheme
(
(([\d]{1,3})(\.[\d]{1,3}){3}) # IPv4
|
(\[([\da-f]{0,4}:){2}([\da-f]{0,4}:?){1,6}]) # IPv6
)
)
(?::[\d]{1,5})? # port
(?:[/?#]\S*)? # path, query, and fragment
$
""",
re.IGNORECASE | re.VERBOSE,
)
_email_re = re.compile(r"^\S+@\w[\w.-]*\.\w+$")
def urlize(
text: str,
trim_url_limit: t.Optional[int] = None,
rel: t.Optional[str] = None,
target: t.Optional[str] = None,
extra_schemes: t.Optional[t.Iterable[str]] = None,
) -> str:
"""Convert URLs in text into clickable links.
This may not recognize links in some situations. Usually, a more
comprehensive formatter, such as a Markdown library, is a better
choice.
Works on ``http://``, ``https://``, ``www.``, ``mailto:``, and email
addresses. Links with trailing punctuation (periods, commas, closing
parentheses) and leading punctuation (opening parentheses) are
recognized excluding the punctuation. Email addresses that include
header fields are not recognized (for example,
``mailto:address@example.com?cc=copy@example.com``).
:param text: Original text containing URLs to link.
:param trim_url_limit: Shorten displayed URL values to this length.
:param target: Add the ``target`` attribute to links.
:param rel: Add the ``rel`` attribute to links.
:param extra_schemes: Recognize URLs that start with these schemes
in addition to the default behavior.
.. versionchanged:: 3.0
The ``extra_schemes`` parameter was added.
.. versionchanged:: 3.0
Generate ``https://`` links for URLs without a scheme.
.. versionchanged:: 3.0
The parsing rules were updated. Recognize email addresses with
or without the ``mailto:`` scheme. Validate IP addresses. Ignore
parentheses and brackets in more cases.
"""
if trim_url_limit is not None:
def trim_url(x: str) -> str:
if len(x) > trim_url_limit: # type: ignore
return f"{x[:trim_url_limit]}..."
return x
else:
def trim_url(x: str) -> str:
return x
words = re.split(r"(\s+)", str(markupsafe.escape(text)))
rel_attr = f' rel="{markupsafe.escape(rel)}"' if rel else ""
target_attr = f' target="{markupsafe.escape(target)}"' if target else ""
for i, word in enumerate(words):
head, middle, tail = "", word, ""
match = re.match(r"^([(<]|&lt;)+", middle)
if match:
head = match.group()
middle = middle[match.end() :]
# Unlike head, which is anchored to the start of the string, we
# need to check that the string ends with any of the characters
# before trying to match all of them, to avoid backtracking.
if middle.endswith((")", ">", ".", ",", "\n", "&gt;")):
match = re.search(r"([)>.,\n]|&gt;)+$", middle)
if match:
tail = match.group()
middle = middle[: match.start()]
# Prefer balancing parentheses in URLs instead of ignoring a
# trailing character.
for start_char, end_char in ("(", ")"), ("<", ">"), ("&lt;", "&gt;"):
start_count = middle.count(start_char)
if start_count <= middle.count(end_char):
# Balanced, or lighter on the left
continue
# Move as many as possible from the tail to balance
for _ in range(min(start_count, tail.count(end_char))):
end_index = tail.index(end_char) + len(end_char)
# Move anything in the tail before the end char too
middle += tail[:end_index]
tail = tail[end_index:]
if _http_re.match(middle):
if middle.startswith("https://") or middle.startswith("http://"):
middle = (
f'<a href="{middle}"{rel_attr}{target_attr}>{trim_url(middle)}</a>'
)
else:
middle = (
f'<a href="https://{middle}"{rel_attr}{target_attr}>'
f"{trim_url(middle)}</a>"
)
elif middle.startswith("mailto:") and _email_re.match(middle[7:]):
middle = f'<a href="{middle}">{middle[7:]}</a>'
elif (
"@" in middle
and not middle.startswith("www.")
and ":" not in middle
and _email_re.match(middle)
):
middle = f'<a href="mailto:{middle}">{middle}</a>'
elif extra_schemes is not None:
for scheme in extra_schemes:
if middle != scheme and middle.startswith(scheme):
middle = f'<a href="{middle}"{rel_attr}{target_attr}>{middle}</a>'
words[i] = f"{head}{middle}{tail}"
return "".join(words)
def generate_lorem_ipsum(
n: int = 5, html: bool = True, min: int = 20, max: int = 100
) -> str:
"""Generate some lorem ipsum for the template."""
from .constants import LOREM_IPSUM_WORDS
words = LOREM_IPSUM_WORDS.split()
result = []
for _ in range(n):
next_capitalized = True
last_comma = last_fullstop = 0
word = None
last = None
p = []
# each paragraph contains between `min` and `max` words.
for idx, _ in enumerate(range(randrange(min, max))):
while True:
word = choice(words)
if word != last:
last = word
break
if next_capitalized:
word = word.capitalize()
next_capitalized = False
# add commas
if idx - randrange(3, 8) > last_comma:
last_comma = idx
last_fullstop += 2
word += ","
# add end of sentences
if idx - randrange(10, 20) > last_fullstop:
last_comma = last_fullstop = idx
word += "."
next_capitalized = True
p.append(word)
# ensure that the paragraph ends with a dot.
p_str = " ".join(p)
if p_str.endswith(","):
p_str = p_str[:-1] + "."
elif not p_str.endswith("."):
p_str += "."
result.append(p_str)
if not html:
return "\n\n".join(result)
return markupsafe.Markup(
"\n".join(f"<p>{markupsafe.escape(x)}</p>" for x in result)
)
def url_quote(obj: t.Any, charset: str = "utf-8", for_qs: bool = False) -> str:
"""Quote a string for use in a URL using the given charset.
:param obj: String or bytes to quote. Other types are converted to
string then encoded to bytes using the given charset.
:param charset: Encode text to bytes using this charset.
:param for_qs: Quote "/" and use "+" for spaces.
"""
if not isinstance(obj, bytes):
if not isinstance(obj, str):
obj = str(obj)
obj = obj.encode(charset)
safe = b"" if for_qs else b"/"
rv = quote_from_bytes(obj, safe)
if for_qs:
rv = rv.replace("%20", "+")
return rv
@abc.MutableMapping.register
class LRUCache:
"""A simple LRU Cache implementation."""
# this is fast for small capacities (something below 1000) but doesn't
# scale. But as long as it's only used as storage for templates this
# won't do any harm.
def __init__(self, capacity: int) -> None:
self.capacity = capacity
self._mapping: t.Dict[t.Any, t.Any] = {}
self._queue: "te.Deque[t.Any]" = deque()
self._postinit()
def _postinit(self) -> None:
# alias all queue methods for faster lookup
self._popleft = self._queue.popleft
self._pop = self._queue.pop
self._remove = self._queue.remove
self._wlock = Lock()
self._append = self._queue.append
def __getstate__(self) -> t.Mapping[str, t.Any]:
return {
"capacity": self.capacity,
"_mapping": self._mapping,
"_queue": self._queue,
}
def __setstate__(self, d: t.Mapping[str, t.Any]) -> None:
self.__dict__.update(d)
self._postinit()
def __getnewargs__(self) -> t.Tuple:
return (self.capacity,)
def copy(self) -> "LRUCache":
"""Return a shallow copy of the instance."""
rv = self.__class__(self.capacity)
rv._mapping.update(self._mapping)
rv._queue.extend(self._queue)
return rv
def get(self, key: t.Any, default: t.Any = None) -> t.Any:
"""Return an item from the cache dict or `default`"""
try:
return self[key]
except KeyError:
return default
def setdefault(self, key: t.Any, default: t.Any = None) -> t.Any:
"""Set `default` if the key is not in the cache otherwise
leave unchanged. Return the value of this key.
"""
try:
return self[key]
except KeyError:
self[key] = default
return default
def clear(self) -> None:
"""Clear the cache."""
with self._wlock:
self._mapping.clear()
self._queue.clear()
def __contains__(self, key: t.Any) -> bool:
"""Check if a key exists in this cache."""
return key in self._mapping
def __len__(self) -> int:
"""Return the current size of the cache."""
return len(self._mapping)
def __repr__(self) -> str:
return f"<{type(self).__name__} {self._mapping!r}>"
def __getitem__(self, key: t.Any) -> t.Any:
"""Get an item from the cache. Moves the item up so that it has the
highest priority then.
Raise a `KeyError` if it does not exist.
"""
with self._wlock:
rv = self._mapping[key]
if self._queue[-1] != key:
try:
self._remove(key)
except ValueError:
# if something removed the key from the container
# when we read, ignore the ValueError that we would
# get otherwise.
pass
self._append(key)
return rv
def __setitem__(self, key: t.Any, value: t.Any) -> None:
"""Sets the value for an item. Moves the item up so that it
has the highest priority then.
"""
with self._wlock:
if key in self._mapping:
self._remove(key)
elif len(self._mapping) == self.capacity:
del self._mapping[self._popleft()]
self._append(key)
self._mapping[key] = value
def __delitem__(self, key: t.Any) -> None:
"""Remove an item from the cache dict.
Raise a `KeyError` if it does not exist.
"""
with self._wlock:
del self._mapping[key]
try:
self._remove(key)
except ValueError:
pass
def items(self) -> t.Iterable[t.Tuple[t.Any, t.Any]]:
"""Return a list of items."""
result = [(key, self._mapping[key]) for key in list(self._queue)]
result.reverse()
return result
def values(self) -> t.Iterable[t.Any]:
"""Return a list of all values."""
return [x[1] for x in self.items()]
def keys(self) -> t.Iterable[t.Any]:
"""Return a list of all keys ordered by most recent usage."""
return list(self)
def __iter__(self) -> t.Iterator[t.Any]:
return reversed(tuple(self._queue))
def __reversed__(self) -> t.Iterator[t.Any]:
"""Iterate over the keys in the cache dict, oldest items
coming first.
"""
return iter(tuple(self._queue))
__copy__ = copy
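The eviction order implemented above can be demonstrated with a capacity of two, assuming `jinja2` is installed:

```python
from jinja2.utils import LRUCache

cache = LRUCache(2)
cache["a"] = 1
cache["b"] = 2
cache["a"]       # touching "a" makes it the most recently used
cache["c"] = 3   # capacity reached: "b", the least recently used, is evicted

assert "b" not in cache
assert cache.keys() == ["c", "a"]  # most recently used first
```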
def select_autoescape(
enabled_extensions: t.Collection[str] = ("html", "htm", "xml"),
disabled_extensions: t.Collection[str] = (),
default_for_string: bool = True,
default: bool = False,
) -> t.Callable[[t.Optional[str]], bool]:
"""Intelligently sets the initial value of autoescaping based on the
filename of the template. This is the recommended way to configure
autoescaping if you do not want to write a custom function yourself.
If you want to enable it for all templates created from strings or
for all templates with `.html` and `.xml` extensions::
from jinja2 import Environment, select_autoescape
env = Environment(autoescape=select_autoescape(
enabled_extensions=('html', 'xml'),
default_for_string=True,
))
Example configuration to turn it on at all times except if the template
ends with `.txt`::
from jinja2 import Environment, select_autoescape
env = Environment(autoescape=select_autoescape(
disabled_extensions=('txt',),
default_for_string=True,
default=True,
))
`enabled_extensions` is an iterable of all the extensions that
autoescaping should be enabled for. Likewise, `disabled_extensions` is
an iterable of all the extensions it should be disabled for. If a
template is loaded from a string then the default from `default_for_string`
is used.
If nothing matches then the initial value of autoescaping is set to the
value of `default`.
For security reasons this function operates case insensitively.
.. versionadded:: 2.9
"""
enabled_patterns = tuple(f".{x.lstrip('.').lower()}" for x in enabled_extensions)
disabled_patterns = tuple(f".{x.lstrip('.').lower()}" for x in disabled_extensions)
def autoescape(template_name: t.Optional[str]) -> bool:
if template_name is None:
return default_for_string
template_name = template_name.lower()
if template_name.endswith(enabled_patterns):
return True
if template_name.endswith(disabled_patterns):
return False
return default
return autoescape
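The returned callable can be probed directly, which makes the matching rules above concrete (filenames here are hypothetical); this assumes `jinja2` is installed:

```python
from jinja2.utils import select_autoescape

autoescape = select_autoescape(
    enabled_extensions=("html", "xml"),
    disabled_extensions=("txt",),
    default_for_string=True,
    default=False,
)

assert autoescape("page.HTML") is True     # case-insensitive match
assert autoescape("notes.txt") is False    # explicitly disabled
assert autoescape(None) is True            # template loaded from a string
assert autoescape("archive.dat") is False  # falls back to `default`
```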
def htmlsafe_json_dumps(
obj: t.Any, dumps: t.Optional[t.Callable[..., str]] = None, **kwargs: t.Any
) -> markupsafe.Markup:
"""Serialize an object to a string of JSON with :func:`json.dumps`,
then replace HTML-unsafe characters with Unicode escapes and mark
the result safe with :class:`~markupsafe.Markup`.
This is available in templates as the ``|tojson`` filter.
The following characters are escaped: ``<``, ``>``, ``&``, ``'``.
The returned string is safe to render in HTML documents and
``<script>`` tags. The exception is in HTML attributes that are
double quoted; either use single quotes or the ``|forceescape``
filter.
:param obj: The object to serialize to JSON.
:param dumps: The ``dumps`` function to use. Defaults to
``env.policies["json.dumps_function"]``, which defaults to
:func:`json.dumps`.
:param kwargs: Extra arguments to pass to ``dumps``. Merged onto
``env.policies["json.dumps_kwargs"]``.
.. versionchanged:: 3.0
The ``dumper`` parameter is renamed to ``dumps``.
.. versionadded:: 2.9
"""
if dumps is None:
dumps = json.dumps
return markupsafe.Markup(
dumps(obj, **kwargs)
.replace("<", "\\u003c")
.replace(">", "\\u003e")
.replace("&", "\\u0026")
.replace("'", "\\u0027")
)
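A minimal sketch of the escaping described above, assuming `jinja2` is installed:

```python
from jinja2.utils import htmlsafe_json_dumps

payload = htmlsafe_json_dumps({"tag": "</script>"})

# no raw angle brackets survive, so the result is safe inside <script>
assert "<" not in payload and ">" not in payload
assert "\\u003c/script\\u003e" in payload
```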
class Cycler:
"""Cycle through values by yield them one at a time, then restarting
once the end is reached. Available as ``cycler`` in templates.
Similar to ``loop.cycle``, but can be used outside loops or across
multiple loops. For example, render a list of folders and files in a
list, alternating giving them "odd" and "even" classes.
.. code-block:: html+jinja
{% set row_class = cycler("odd", "even") %}
<ul class="browser">
{% for folder in folders %}
<li class="folder {{ row_class.next() }}">{{ folder }}
{% endfor %}
{% for file in files %}
<li class="file {{ row_class.next() }}">{{ file }}
{% endfor %}
</ul>
:param items: Each positional argument will be yielded in the order
given for each cycle.
.. versionadded:: 2.1
"""
def __init__(self, *items: t.Any) -> None:
if not items:
raise RuntimeError("at least one item has to be provided")
self.items = items
self.pos = 0
def reset(self) -> None:
"""Resets the current item to the first item."""
self.pos = 0
@property
def current(self) -> t.Any:
"""Return the current item. Equivalent to the item that will be
returned next time :meth:`next` is called.
"""
return self.items[self.pos]
def next(self) -> t.Any:
"""Return the current item, then advance :attr:`current` to the
next item.
"""
rv = self.current
self.pos = (self.pos + 1) % len(self.items)
return rv
__next__ = next
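Outside templates the class behaves the same way; a short sketch, assuming `jinja2` is installed:

```python
from jinja2.utils import Cycler

row_class = Cycler("odd", "even")
assert [row_class.next() for _ in range(3)] == ["odd", "even", "odd"]
assert row_class.current == "even"  # next() already advanced past "odd"

row_class.reset()
assert row_class.current == "odd"
```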
class Joiner:
"""A joining helper for templates."""
def __init__(self, sep: str = ", ") -> None:
self.sep = sep
self.used = False
def __call__(self) -> str:
if not self.used:
self.used = True
return ""
return self.sep
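The helper returns the empty string on its first call and the separator on every later call, which joins items without a leading separator; a sketch, assuming `jinja2` is installed:

```python
from jinja2.utils import Joiner

sep = Joiner(", ")
parts = [f"{sep()}{name}" for name in ["a", "b", "c"]]
assert "".join(parts) == "a, b, c"
```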
class Namespace:
"""A namespace object that can hold arbitrary attributes. It may be
initialized from a dictionary or with keyword arguments."""
def __init__(*args: t.Any, **kwargs: t.Any) -> None: # noqa: B902
self, args = args[0], args[1:]
self.__attrs = dict(*args, **kwargs)
def __getattribute__(self, name: str) -> t.Any:
# __class__ is needed for the awaitable check in async mode
if name in {"_Namespace__attrs", "__class__"}:
return object.__getattribute__(self, name)
try:
return self.__attrs[name]
except KeyError:
raise AttributeError(name) from None
def __setitem__(self, name: str, value: t.Any) -> None:
self.__attrs[name] = value
def __repr__(self) -> str:
return f"<Namespace {self.__attrs!r}>"


@@ -1,92 +0,0 @@
"""API for traversing the AST nodes. Implemented by the compiler and
meta introspection.
"""
import typing as t
from .nodes import Node
if t.TYPE_CHECKING:
import typing_extensions as te
class VisitCallable(te.Protocol):
def __call__(self, node: Node, *args: t.Any, **kwargs: t.Any) -> t.Any:
...
class NodeVisitor:
"""Walks the abstract syntax tree and call visitor functions for every
node found. The visitor functions may return values which will be
forwarded by the `visit` method.
By default the visitor function for a node is ``'visit_'`` + the
class name of the node. So a `TryFinally` node visit function would
be `visit_TryFinally`. This behavior can be changed by overriding
the `get_visitor` function. If no visitor function exists for a node
(return value `None`) the `generic_visit` visitor is used instead.
"""
def get_visitor(self, node: Node) -> "t.Optional[VisitCallable]":
"""Return the visitor function for this node or `None` if no visitor
exists for this node. In that case the generic visit function is
used instead.
"""
return getattr(self, f"visit_{type(node).__name__}", None)
def visit(self, node: Node, *args: t.Any, **kwargs: t.Any) -> t.Any:
"""Visit a node."""
f = self.get_visitor(node)
if f is not None:
return f(node, *args, **kwargs)
return self.generic_visit(node, *args, **kwargs)
def generic_visit(self, node: Node, *args: t.Any, **kwargs: t.Any) -> t.Any:
"""Called if no explicit visitor function exists for a node."""
for child_node in node.iter_child_nodes():
self.visit(child_node, *args, **kwargs)
class NodeTransformer(NodeVisitor):
"""Walks the abstract syntax tree and allows modifications of nodes.
The `NodeTransformer` will walk the AST and use the return value of the
visitor functions to replace or remove the old node. If the return
value of the visitor function is `None` the node will be removed
from the previous location otherwise it's replaced with the return
value. The return value may be the original node in which case no
replacement takes place.
"""
def generic_visit(self, node: Node, *args: t.Any, **kwargs: t.Any) -> Node:
for field, old_value in node.iter_fields():
if isinstance(old_value, list):
new_values = []
for value in old_value:
if isinstance(value, Node):
value = self.visit(value, *args, **kwargs)
if value is None:
continue
elif not isinstance(value, Node):
new_values.extend(value)
continue
new_values.append(value)
old_value[:] = new_values
elif isinstance(old_value, Node):
new_node = self.visit(old_value, *args, **kwargs)
if new_node is None:
delattr(node, field)
else:
setattr(node, field, new_node)
return node
def visit_list(self, node: Node, *args: t.Any, **kwargs: t.Any) -> t.List[Node]:
"""As transformers may return lists in some places, this method
can be used to enforce a list as the return value.
"""
rv = self.visit(node, *args, **kwargs)
if not isinstance(rv, list):
return [rv]
return rv
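The `visit_<ClassName>` dispatch above can be exercised with a minimal self-contained sketch. The `Name` node and `NameVisitor` below are hypothetical stand-ins for illustration, not part of the real Jinja2 AST:

```python
# Minimal sketch of the getattr-based visitor dispatch used by NodeVisitor.
# Name and NameVisitor are hypothetical examples, not real jinja2 nodes.

class Node:
    pass

class Name(Node):
    def __init__(self, value: str) -> None:
        self.value = value

class TinyVisitor:
    def visit(self, node: Node):
        # Look up visit_<ClassName>; fall back to generic_visit.
        f = getattr(self, f"visit_{type(node).__name__}", None)
        if f is not None:
            return f(node)
        return self.generic_visit(node)

    def generic_visit(self, node: Node) -> str:
        return "<unhandled>"

class NameVisitor(TinyVisitor):
    def visit_Name(self, node: Name) -> str:
        return node.value.upper()

assert NameVisitor().visit(Name("foo")) == "FOO"     # dispatched to visit_Name
assert NameVisitor().visit(Node()) == "<unhandled>"  # fell back to generic_visit
```

Overriding `get_visitor` in the real class changes only the lookup step; the dispatch shape stays the same.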


@@ -1,28 +0,0 @@
Copyright 2010 Pallets
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -1,295 +0,0 @@
import functools
import re
import string
import typing as t
if t.TYPE_CHECKING:
import typing_extensions as te
class HasHTML(te.Protocol):
def __html__(self) -> str:
pass
__version__ = "2.1.1"
_strip_comments_re = re.compile(r"<!--.*?-->")
_strip_tags_re = re.compile(r"<.*?>")
def _simple_escaping_wrapper(name: str) -> t.Callable[..., "Markup"]:
orig = getattr(str, name)
@functools.wraps(orig)
def wrapped(self: "Markup", *args: t.Any, **kwargs: t.Any) -> "Markup":
args = _escape_argspec(list(args), enumerate(args), self.escape) # type: ignore
_escape_argspec(kwargs, kwargs.items(), self.escape)
return self.__class__(orig(self, *args, **kwargs))
return wrapped
class Markup(str):
"""A string that is ready to be safely inserted into an HTML or XML
document, either because it was escaped or because it was marked
safe.
Passing an object to the constructor converts it to text and wraps
it to mark it safe without escaping. To escape the text, use the
:meth:`escape` class method instead.
>>> Markup("Hello, <em>World</em>!")
Markup('Hello, <em>World</em>!')
>>> Markup(42)
Markup('42')
>>> Markup.escape("Hello, <em>World</em>!")
Markup('Hello, &lt;em&gt;World&lt;/em&gt;!')
This implements the ``__html__()`` interface that some frameworks
use. Passing an object that implements ``__html__()`` will wrap the
output of that method, marking it safe.
>>> class Foo:
... def __html__(self):
... return '<a href="/foo">foo</a>'
...
>>> Markup(Foo())
Markup('<a href="/foo">foo</a>')
This is a subclass of :class:`str`. It has the same methods, but
escapes their arguments and returns a ``Markup`` instance.
>>> Markup("<em>%s</em>") % ("foo & bar",)
Markup('<em>foo &amp; bar</em>')
>>> Markup("<em>Hello</em> ") + "<foo>"
Markup('<em>Hello</em> &lt;foo&gt;')
"""
__slots__ = ()
def __new__(
cls, base: t.Any = "", encoding: t.Optional[str] = None, errors: str = "strict"
) -> "Markup":
if hasattr(base, "__html__"):
base = base.__html__()
if encoding is None:
return super().__new__(cls, base)
return super().__new__(cls, base, encoding, errors)
def __html__(self) -> "Markup":
return self
def __add__(self, other: t.Union[str, "HasHTML"]) -> "Markup":
if isinstance(other, str) or hasattr(other, "__html__"):
return self.__class__(super().__add__(self.escape(other)))
return NotImplemented
def __radd__(self, other: t.Union[str, "HasHTML"]) -> "Markup":
if isinstance(other, str) or hasattr(other, "__html__"):
return self.escape(other).__add__(self)
return NotImplemented
def __mul__(self, num: "te.SupportsIndex") -> "Markup":
if isinstance(num, int):
return self.__class__(super().__mul__(num))
return NotImplemented
__rmul__ = __mul__
def __mod__(self, arg: t.Any) -> "Markup":
if isinstance(arg, tuple):
# a tuple of arguments, each wrapped
arg = tuple(_MarkupEscapeHelper(x, self.escape) for x in arg)
elif hasattr(type(arg), "__getitem__") and not isinstance(arg, str):
# a mapping of arguments, wrapped
arg = _MarkupEscapeHelper(arg, self.escape)
else:
# a single argument, wrapped with the helper and a tuple
arg = (_MarkupEscapeHelper(arg, self.escape),)
return self.__class__(super().__mod__(arg))
def __repr__(self) -> str:
return f"{self.__class__.__name__}({super().__repr__()})"
def join(self, seq: t.Iterable[t.Union[str, "HasHTML"]]) -> "Markup":
return self.__class__(super().join(map(self.escape, seq)))
join.__doc__ = str.join.__doc__
def split( # type: ignore
self, sep: t.Optional[str] = None, maxsplit: int = -1
) -> t.List["Markup"]:
return [self.__class__(v) for v in super().split(sep, maxsplit)]
split.__doc__ = str.split.__doc__
def rsplit( # type: ignore
self, sep: t.Optional[str] = None, maxsplit: int = -1
) -> t.List["Markup"]:
return [self.__class__(v) for v in super().rsplit(sep, maxsplit)]
rsplit.__doc__ = str.rsplit.__doc__
def splitlines(self, keepends: bool = False) -> t.List["Markup"]: # type: ignore
return [self.__class__(v) for v in super().splitlines(keepends)]
splitlines.__doc__ = str.splitlines.__doc__
def unescape(self) -> str:
"""Convert escaped markup back into a text string. This replaces
HTML entities with the characters they represent.
>>> Markup("Main &raquo; <em>About</em>").unescape()
'Main » <em>About</em>'
"""
from html import unescape
return unescape(str(self))
def striptags(self) -> str:
""":meth:`unescape` the markup, remove tags, and normalize
whitespace to single spaces.
>>> Markup("Main &raquo;\t<em>About</em>").striptags()
'Main » About'
"""
# Use two regexes to avoid ambiguous matches.
value = _strip_comments_re.sub("", self)
value = _strip_tags_re.sub("", value)
value = " ".join(value.split())
return Markup(value).unescape()
@classmethod
def escape(cls, s: t.Any) -> "Markup":
"""Escape a string. Calls :func:`escape` and ensures that for
subclasses the correct type is returned.
"""
rv = escape(s)
if rv.__class__ is not cls:
return cls(rv)
return rv
for method in (
"__getitem__",
"capitalize",
"title",
"lower",
"upper",
"replace",
"ljust",
"rjust",
"lstrip",
"rstrip",
"center",
"strip",
"translate",
"expandtabs",
"swapcase",
"zfill",
):
locals()[method] = _simple_escaping_wrapper(method)
del method
def partition(self, sep: str) -> t.Tuple["Markup", "Markup", "Markup"]:
l, s, r = super().partition(self.escape(sep))
cls = self.__class__
return cls(l), cls(s), cls(r)
def rpartition(self, sep: str) -> t.Tuple["Markup", "Markup", "Markup"]:
l, s, r = super().rpartition(self.escape(sep))
cls = self.__class__
return cls(l), cls(s), cls(r)
def format(self, *args: t.Any, **kwargs: t.Any) -> "Markup":
formatter = EscapeFormatter(self.escape)
return self.__class__(formatter.vformat(self, args, kwargs))
def __html_format__(self, format_spec: str) -> "Markup":
if format_spec:
raise ValueError("Unsupported format specification for Markup.")
return self
class EscapeFormatter(string.Formatter):
__slots__ = ("escape",)
def __init__(self, escape: t.Callable[[t.Any], Markup]) -> None:
self.escape = escape
super().__init__()
def format_field(self, value: t.Any, format_spec: str) -> str:
if hasattr(value, "__html_format__"):
rv = value.__html_format__(format_spec)
elif hasattr(value, "__html__"):
if format_spec:
raise ValueError(
f"Format specifier {format_spec} given, but {type(value)} does not"
" define __html_format__. A class that defines __html__ must define"
" __html_format__ to work with format specifiers."
)
rv = value.__html__()
else:
# We need to make sure the format spec is str here as
# otherwise the wrong callback methods are invoked.
rv = string.Formatter.format_field(self, value, str(format_spec))
return str(self.escape(rv))
_ListOrDict = t.TypeVar("_ListOrDict", list, dict)
def _escape_argspec(
obj: _ListOrDict, iterable: t.Iterable[t.Any], escape: t.Callable[[t.Any], Markup]
) -> _ListOrDict:
"""Helper for various string-wrapped functions."""
for key, value in iterable:
if isinstance(value, str) or hasattr(value, "__html__"):
obj[key] = escape(value)
return obj
class _MarkupEscapeHelper:
"""Helper for :meth:`Markup.__mod__`."""
__slots__ = ("obj", "escape")
def __init__(self, obj: t.Any, escape: t.Callable[[t.Any], Markup]) -> None:
self.obj = obj
self.escape = escape
def __getitem__(self, item: t.Any) -> "_MarkupEscapeHelper":
return _MarkupEscapeHelper(self.obj[item], self.escape)
def __str__(self) -> str:
return str(self.escape(self.obj))
def __repr__(self) -> str:
return str(self.escape(repr(self.obj)))
def __int__(self) -> int:
return int(self.obj)
def __float__(self) -> float:
return float(self.obj)
# circular import
try:
from ._speedups import escape as escape
from ._speedups import escape_silent as escape_silent
from ._speedups import soft_str as soft_str
except ImportError:
from ._native import escape as escape
from ._native import escape_silent as escape_silent # noqa: F401
from ._native import soft_str as soft_str # noqa: F401


@@ -1,63 +0,0 @@
import typing as t
from . import Markup
def escape(s: t.Any) -> Markup:
"""Replace the characters ``&``, ``<``, ``>``, ``'``, and ``"`` in
the string with HTML-safe sequences. Use this if you need to display
text that might contain such characters in HTML.
If the object has an ``__html__`` method, it is called and the
return value is assumed to already be safe for HTML.
:param s: An object to be converted to a string and escaped.
:return: A :class:`Markup` string with the escaped text.
"""
if hasattr(s, "__html__"):
return Markup(s.__html__())
return Markup(
str(s)
.replace("&", "&amp;")
.replace(">", "&gt;")
.replace("<", "&lt;")
.replace("'", "&#39;")
.replace('"', "&#34;")
)
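The replacement order in `escape` matters: `&` must be handled first, otherwise the ampersands introduced by the later substitutions would themselves be escaped again. A standalone sketch of the same replacement chain, without the `Markup` wrapping:

```python
def tiny_escape(s: str) -> str:
    # "&" must be replaced first; the later substitutions insert "&"
    # characters that must not be escaped a second time.
    return (
        s.replace("&", "&amp;")
        .replace(">", "&gt;")
        .replace("<", "&lt;")
        .replace("'", "&#39;")
        .replace('"', "&#34;")
    )

assert tiny_escape('<a href="x">&</a>') == "&lt;a href=&#34;x&#34;&gt;&amp;&lt;/a&gt;"
```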
def escape_silent(s: t.Optional[t.Any]) -> Markup:
"""Like :func:`escape` but treats ``None`` as the empty string.
Useful with optional values, as otherwise you get the string
``'None'`` when the value is ``None``.
>>> escape(None)
Markup('None')
>>> escape_silent(None)
Markup('')
"""
if s is None:
return Markup()
return escape(s)
def soft_str(s: t.Any) -> str:
"""Convert an object to a string if it isn't already. This preserves
a :class:`Markup` string rather than converting it back to a basic
string, so it will still be marked as safe and won't be escaped
again.
>>> value = escape("<User 1>")
>>> value
Markup('&lt;User 1&gt;')
>>> escape(str(value))
Markup('&amp;lt;User 1&amp;gt;')
>>> escape(soft_str(value))
Markup('&lt;User 1&gt;')
"""
if not isinstance(s, str):
return str(s)
return s


@@ -1,320 +0,0 @@
#include <Python.h>
static PyObject* markup;
static int
init_constants(void)
{
PyObject *module;
/* import markup type so that we can mark the return value */
module = PyImport_ImportModule("markupsafe");
if (!module)
return 0;
markup = PyObject_GetAttrString(module, "Markup");
Py_DECREF(module);
return 1;
}
#define GET_DELTA(inp, inp_end, delta) \
while (inp < inp_end) { \
switch (*inp++) { \
case '"': \
case '\'': \
case '&': \
delta += 4; \
break; \
case '<': \
case '>': \
delta += 3; \
break; \
} \
}
#define DO_ESCAPE(inp, inp_end, outp) \
{ \
Py_ssize_t ncopy = 0; \
while (inp < inp_end) { \
switch (*inp) { \
case '"': \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
outp += ncopy; ncopy = 0; \
*outp++ = '&'; \
*outp++ = '#'; \
*outp++ = '3'; \
*outp++ = '4'; \
*outp++ = ';'; \
break; \
case '\'': \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
outp += ncopy; ncopy = 0; \
*outp++ = '&'; \
*outp++ = '#'; \
*outp++ = '3'; \
*outp++ = '9'; \
*outp++ = ';'; \
break; \
case '&': \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
outp += ncopy; ncopy = 0; \
*outp++ = '&'; \
*outp++ = 'a'; \
*outp++ = 'm'; \
*outp++ = 'p'; \
*outp++ = ';'; \
break; \
case '<': \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
outp += ncopy; ncopy = 0; \
*outp++ = '&'; \
*outp++ = 'l'; \
*outp++ = 't'; \
*outp++ = ';'; \
break; \
case '>': \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
outp += ncopy; ncopy = 0; \
*outp++ = '&'; \
*outp++ = 'g'; \
*outp++ = 't'; \
*outp++ = ';'; \
break; \
default: \
ncopy++; \
} \
inp++; \
} \
memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \
}
static PyObject*
escape_unicode_kind1(PyUnicodeObject *in)
{
Py_UCS1 *inp = PyUnicode_1BYTE_DATA(in);
Py_UCS1 *inp_end = inp + PyUnicode_GET_LENGTH(in);
Py_UCS1 *outp;
PyObject *out;
Py_ssize_t delta = 0;
GET_DELTA(inp, inp_end, delta);
if (!delta) {
Py_INCREF(in);
return (PyObject*)in;
}
out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta,
PyUnicode_IS_ASCII(in) ? 127 : 255);
if (!out)
return NULL;
inp = PyUnicode_1BYTE_DATA(in);
outp = PyUnicode_1BYTE_DATA(out);
DO_ESCAPE(inp, inp_end, outp);
return out;
}
static PyObject*
escape_unicode_kind2(PyUnicodeObject *in)
{
Py_UCS2 *inp = PyUnicode_2BYTE_DATA(in);
Py_UCS2 *inp_end = inp + PyUnicode_GET_LENGTH(in);
Py_UCS2 *outp;
PyObject *out;
Py_ssize_t delta = 0;
GET_DELTA(inp, inp_end, delta);
if (!delta) {
Py_INCREF(in);
return (PyObject*)in;
}
out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta, 65535);
if (!out)
return NULL;
inp = PyUnicode_2BYTE_DATA(in);
outp = PyUnicode_2BYTE_DATA(out);
DO_ESCAPE(inp, inp_end, outp);
return out;
}
static PyObject*
escape_unicode_kind4(PyUnicodeObject *in)
{
Py_UCS4 *inp = PyUnicode_4BYTE_DATA(in);
Py_UCS4 *inp_end = inp + PyUnicode_GET_LENGTH(in);
Py_UCS4 *outp;
PyObject *out;
Py_ssize_t delta = 0;
GET_DELTA(inp, inp_end, delta);
if (!delta) {
Py_INCREF(in);
return (PyObject*)in;
}
out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta, 1114111);
if (!out)
return NULL;
inp = PyUnicode_4BYTE_DATA(in);
outp = PyUnicode_4BYTE_DATA(out);
DO_ESCAPE(inp, inp_end, outp);
return out;
}
static PyObject*
escape_unicode(PyUnicodeObject *in)
{
if (PyUnicode_READY(in))
return NULL;
switch (PyUnicode_KIND(in)) {
case PyUnicode_1BYTE_KIND:
return escape_unicode_kind1(in);
case PyUnicode_2BYTE_KIND:
return escape_unicode_kind2(in);
case PyUnicode_4BYTE_KIND:
return escape_unicode_kind4(in);
}
assert(0); /* shouldn't happen */
return NULL;
}
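The C implementation above escapes in two passes: `GET_DELTA` first counts how many extra characters the output needs, so the result can be allocated at its exact final size before `DO_ESCAPE` fills it. A rough Python sketch of that two-pass shape (illustrative only, not the real API):

```python
# Sketch of the two-pass strategy: measure first, then build.
_GROWTH = {'"': 4, "'": 4, "&": 4, "<": 3, ">": 3}  # extra chars per escape
_REPL = {'"': "&#34;", "'": "&#39;", "&": "&amp;", "<": "&lt;", ">": "&gt;"}

def escape_two_pass(s: str) -> str:
    # Pass 1 (GET_DELTA): how much longer will the output be?
    delta = sum(_GROWTH.get(ch, 0) for ch in s)
    if delta == 0:
        return s  # nothing to escape; the C code returns the input unchanged
    # Pass 2 (DO_ESCAPE): emit the escaped output.
    result = "".join(_REPL.get(ch, ch) for ch in s)
    assert len(result) == len(s) + delta  # the counted size is exact
    return result

assert escape_two_pass("a<b") == "a&lt;b"
assert escape_two_pass("plain") == "plain"
```

The zero-delta early return mirrors the `if (!delta)` branch, which hands back the input string unmodified (with an extra reference) instead of copying it.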
static PyObject*
escape(PyObject *self, PyObject *text)
{
static PyObject *id_html;
PyObject *s = NULL, *rv = NULL, *html;
if (id_html == NULL) {
id_html = PyUnicode_InternFromString("__html__");
if (id_html == NULL) {
return NULL;
}
}
/* we don't have to escape integers, bools or floats */
if (PyLong_CheckExact(text) ||
PyFloat_CheckExact(text) || PyBool_Check(text) ||
text == Py_None)
return PyObject_CallFunctionObjArgs(markup, text, NULL);
/* if the object has an __html__ method that performs the escaping */
html = PyObject_GetAttr(text, id_html);
if (html) {
s = PyObject_CallObject(html, NULL);
Py_DECREF(html);
if (s == NULL) {
return NULL;
}
/* Convert to Markup object */
rv = PyObject_CallFunctionObjArgs(markup, (PyObject*)s, NULL);
Py_DECREF(s);
return rv;
}
/* otherwise make the object unicode if it isn't, then escape */
PyErr_Clear();
if (!PyUnicode_Check(text)) {
PyObject *unicode = PyObject_Str(text);
if (!unicode)
return NULL;
s = escape_unicode((PyUnicodeObject*)unicode);
Py_DECREF(unicode);
}
else
s = escape_unicode((PyUnicodeObject*)text);
/* convert the unicode string into a markup object. */
rv = PyObject_CallFunctionObjArgs(markup, (PyObject*)s, NULL);
Py_DECREF(s);
return rv;
}
static PyObject*
escape_silent(PyObject *self, PyObject *text)
{
if (text != Py_None)
return escape(self, text);
return PyObject_CallFunctionObjArgs(markup, NULL);
}
static PyObject*
soft_str(PyObject *self, PyObject *s)
{
if (!PyUnicode_Check(s))
return PyObject_Str(s);
Py_INCREF(s);
return s;
}
static PyMethodDef module_methods[] = {
{
"escape",
(PyCFunction)escape,
METH_O,
"Replace the characters ``&``, ``<``, ``>``, ``'``, and ``\"`` in"
" the string with HTML-safe sequences. Use this if you need to display"
" text that might contain such characters in HTML.\n\n"
"If the object has an ``__html__`` method, it is called and the"
" return value is assumed to already be safe for HTML.\n\n"
":param s: An object to be converted to a string and escaped.\n"
":return: A :class:`Markup` string with the escaped text.\n"
},
{
"escape_silent",
(PyCFunction)escape_silent,
METH_O,
"Like :func:`escape` but treats ``None`` as the empty string."
" Useful with optional values, as otherwise you get the string"
" ``'None'`` when the value is ``None``.\n\n"
">>> escape(None)\n"
"Markup('None')\n"
">>> escape_silent(None)\n"
"Markup('')\n"
},
{
"soft_str",
(PyCFunction)soft_str,
METH_O,
"Convert an object to a string if it isn't already. This preserves"
" a :class:`Markup` string rather than converting it back to a basic"
" string, so it will still be marked as safe and won't be escaped"
" again.\n\n"
">>> value = escape(\"<User 1>\")\n"
">>> value\n"
"Markup('&lt;User 1&gt;')\n"
">>> escape(str(value))\n"
"Markup('&amp;lt;User 1&amp;gt;')\n"
">>> escape(soft_str(value))\n"
"Markup('&lt;User 1&gt;')\n"
},
{NULL, NULL, 0, NULL} /* Sentinel */
};
static struct PyModuleDef module_definition = {
PyModuleDef_HEAD_INIT,
"markupsafe._speedups",
NULL,
-1,
module_methods,
NULL,
NULL,
NULL,
NULL
};
PyMODINIT_FUNC
PyInit__speedups(void)
{
if (!init_constants())
return NULL;
return PyModule_Create(&module_definition);
}


@@ -1,9 +0,0 @@
from typing import Any
from typing import Optional
from . import Markup
def escape(s: Any) -> Markup: ...
def escape_silent(s: Optional[Any]) -> Markup: ...
def soft_str(s: Any) -> str: ...
def soft_unicode(s: Any) -> str: ...


@@ -1,20 +0,0 @@
Copyright (c) 2017-2021 Ingy döt Net
Copyright (c) 2006-2016 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,390 +0,0 @@
from .error import *
from .tokens import *
from .events import *
from .nodes import *
from .loader import *
from .dumper import *
__version__ = '6.0'
try:
from .cyaml import *
__with_libyaml__ = True
except ImportError:
__with_libyaml__ = False
import io
#------------------------------------------------------------------------------
# XXX "Warnings control" is now deprecated. Leaving in the API function to not
# break code that uses it.
#------------------------------------------------------------------------------
def warnings(settings=None):
if settings is None:
return {}
#------------------------------------------------------------------------------
def scan(stream, Loader=Loader):
"""
Scan a YAML stream and produce scanning tokens.
"""
loader = Loader(stream)
try:
while loader.check_token():
yield loader.get_token()
finally:
loader.dispose()
def parse(stream, Loader=Loader):
"""
Parse a YAML stream and produce parsing events.
"""
loader = Loader(stream)
try:
while loader.check_event():
yield loader.get_event()
finally:
loader.dispose()
def compose(stream, Loader=Loader):
"""
Parse the first YAML document in a stream
and produce the corresponding representation tree.
"""
loader = Loader(stream)
try:
return loader.get_single_node()
finally:
loader.dispose()
def compose_all(stream, Loader=Loader):
"""
Parse all YAML documents in a stream
and produce corresponding representation trees.
"""
loader = Loader(stream)
try:
while loader.check_node():
yield loader.get_node()
finally:
loader.dispose()
def load(stream, Loader):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
"""
loader = Loader(stream)
try:
return loader.get_single_data()
finally:
loader.dispose()
def load_all(stream, Loader):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
"""
loader = Loader(stream)
try:
while loader.check_data():
yield loader.get_data()
finally:
loader.dispose()
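`scan`, `parse`, `compose_all`, and `load_all` all share one idiom: the loader is wrapped in `try`/`finally` so that `dispose()` runs whether the generator is exhausted or abandoned early. A self-contained sketch of the idiom, using a hypothetical stand-in loader:

```python
class MiniLoader:
    """Hypothetical stand-in for a YAML Loader."""
    def __init__(self, items):
        self.items = list(items)
        self.disposed = False
    def check_data(self):
        return bool(self.items)
    def get_data(self):
        return self.items.pop(0)
    def dispose(self):
        self.disposed = True

def load_all_like(items, Loader=MiniLoader):
    loader = Loader(items)
    try:
        while loader.check_data():
            yield loader.get_data()
    finally:
        loader.dispose()  # runs on exhaustion *and* on early close

assert list(load_all_like([1, 2, 3])) == [1, 2, 3]

# dispose() also fires when the consumer stops early:
created = []
def tracking_loader(items):
    loader = MiniLoader(items)
    created.append(loader)
    return loader

gen = load_all_like([1, 2, 3], Loader=tracking_loader)
next(gen)
gen.close()  # raises GeneratorExit inside the generator; finally still runs
assert created[0].disposed
```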
def full_load(stream):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve all tags except those known to be
unsafe on untrusted input.
"""
return load(stream, FullLoader)
def full_load_all(stream):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve all tags except those known to be
unsafe on untrusted input.
"""
return load_all(stream, FullLoader)
def safe_load(stream):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve only basic YAML tags. This is known
to be safe for untrusted input.
"""
return load(stream, SafeLoader)
def safe_load_all(stream):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve only basic YAML tags. This is known
to be safe for untrusted input.
"""
return load_all(stream, SafeLoader)
def unsafe_load(stream):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve all tags, even those known to be
unsafe on untrusted input.
"""
return load(stream, UnsafeLoader)
def unsafe_load_all(stream):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve all tags, even those known to be
unsafe on untrusted input.
"""
return load_all(stream, UnsafeLoader)
def emit(events, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None):
"""
Emit YAML parsing events into a stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
stream = io.StringIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
try:
for event in events:
dumper.emit(event)
finally:
dumper.dispose()
if getvalue:
return getvalue()
def serialize_all(nodes, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
"""
Serialize a sequence of representation trees into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
if encoding is None:
stream = io.StringIO()
else:
stream = io.BytesIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, version=version, tags=tags,
explicit_start=explicit_start, explicit_end=explicit_end)
try:
dumper.open()
for node in nodes:
dumper.serialize(node)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def serialize(node, stream=None, Dumper=Dumper, **kwds):
"""
Serialize a representation tree into a YAML stream.
If stream is None, return the produced string instead.
"""
return serialize_all([node], stream, Dumper=Dumper, **kwds)
def dump_all(documents, stream=None, Dumper=Dumper,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
"""
Serialize a sequence of Python objects into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
if encoding is None:
stream = io.StringIO()
else:
stream = io.BytesIO()
getvalue = stream.getvalue
dumper = Dumper(stream, default_style=default_style,
default_flow_style=default_flow_style,
canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, version=version, tags=tags,
explicit_start=explicit_start, explicit_end=explicit_end, sort_keys=sort_keys)
try:
dumper.open()
for data in documents:
dumper.represent(data)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def dump(data, stream=None, Dumper=Dumper, **kwds):
"""
Serialize a Python object into a YAML stream.
If stream is None, return the produced string instead.
"""
return dump_all([data], stream, Dumper=Dumper, **kwds)
def safe_dump_all(documents, stream=None, **kwds):
"""
Serialize a sequence of Python objects into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
def safe_dump(data, stream=None, **kwds):
"""
Serialize a Python object into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
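`emit`, `serialize_all`, and `dump_all` share the optional-stream idiom seen above: when no stream is passed, an in-memory buffer is created and its contents are returned; when the caller supplies a stream, the function writes to it and returns `None`. A minimal sketch of that idiom (the `dump_like` helper is illustrative only):

```python
import io

def dump_like(lines, stream=None):
    getvalue = None
    if stream is None:
        # No stream given: write to a buffer and return its contents,
        # mirroring `getvalue = stream.getvalue` in dump_all above.
        stream = io.StringIO()
        getvalue = stream.getvalue
    for line in lines:
        stream.write(line + "\n")
    if getvalue:
        return getvalue()

assert dump_like(["a", "b"]) == "a\nb\n"      # no stream: string returned
buf = io.StringIO()
assert dump_like(["a"], stream=buf) is None   # caller's stream: returns None
assert buf.getvalue() == "a\n"
```

In the real functions the buffer is `BytesIO` when an `encoding` is given, so the return value is `bytes` in that case.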
def add_implicit_resolver(tag, regexp, first=None,
Loader=None, Dumper=Dumper):
"""
Add an implicit scalar detector.
If an implicit scalar value matches the given regexp,
the corresponding tag is assigned to the scalar.
first is a sequence of possible initial characters or None.
"""
if Loader is None:
loader.Loader.add_implicit_resolver(tag, regexp, first)
loader.FullLoader.add_implicit_resolver(tag, regexp, first)
loader.UnsafeLoader.add_implicit_resolver(tag, regexp, first)
else:
Loader.add_implicit_resolver(tag, regexp, first)
Dumper.add_implicit_resolver(tag, regexp, first)
def add_path_resolver(tag, path, kind=None, Loader=None, Dumper=Dumper):
"""
Add a path based resolver for the given tag.
A path is a list of keys that forms a path
to a node in the representation tree.
Keys can be string values, integers, or None.
"""
if Loader is None:
loader.Loader.add_path_resolver(tag, path, kind)
loader.FullLoader.add_path_resolver(tag, path, kind)
loader.UnsafeLoader.add_path_resolver(tag, path, kind)
else:
Loader.add_path_resolver(tag, path, kind)
Dumper.add_path_resolver(tag, path, kind)
def add_constructor(tag, constructor, Loader=None):
"""
Add a constructor for the given tag.
Constructor is a function that accepts a Loader instance
and a node object and produces the corresponding Python object.
"""
if Loader is None:
loader.Loader.add_constructor(tag, constructor)
loader.FullLoader.add_constructor(tag, constructor)
loader.UnsafeLoader.add_constructor(tag, constructor)
else:
Loader.add_constructor(tag, constructor)
def add_multi_constructor(tag_prefix, multi_constructor, Loader=None):
"""
Add a multi-constructor for the given tag prefix.
Multi-constructor is called for a node if its tag starts with tag_prefix.
Multi-constructor accepts a Loader instance, a tag suffix,
and a node object and produces the corresponding Python object.
"""
if Loader is None:
loader.Loader.add_multi_constructor(tag_prefix, multi_constructor)
loader.FullLoader.add_multi_constructor(tag_prefix, multi_constructor)
loader.UnsafeLoader.add_multi_constructor(tag_prefix, multi_constructor)
else:
Loader.add_multi_constructor(tag_prefix, multi_constructor)
def add_representer(data_type, representer, Dumper=Dumper):
"""
Add a representer for the given type.
Representer is a function accepting a Dumper instance
and an instance of the given data type
and producing the corresponding representation node.
"""
Dumper.add_representer(data_type, representer)
def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
"""
Add a representer for the given type.
Multi-representer is a function accepting a Dumper instance
and an instance of the given data type or subtype
and producing the corresponding representation node.
"""
Dumper.add_multi_representer(data_type, multi_representer)
class YAMLObjectMetaclass(type):
"""
The metaclass for YAMLObject.
"""
def __init__(cls, name, bases, kwds):
super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
if isinstance(cls.yaml_loader, list):
for loader in cls.yaml_loader:
loader.add_constructor(cls.yaml_tag, cls.from_yaml)
else:
cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
cls.yaml_dumper.add_representer(cls, cls.to_yaml)
class YAMLObject(metaclass=YAMLObjectMetaclass):
"""
An object that can dump itself to a YAML stream
and load itself from a YAML stream.
"""
__slots__ = () # no direct instantiation, so allow immutable subclasses
yaml_loader = [Loader, FullLoader, UnsafeLoader]
yaml_dumper = Dumper
yaml_tag = None
yaml_flow_style = None
@classmethod
def from_yaml(cls, loader, node):
"""
Convert a representation node to a Python object.
"""
return loader.construct_yaml_object(node, cls)
@classmethod
def to_yaml(cls, dumper, data):
"""
Convert a Python object to a representation node.
"""
return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
flow_style=cls.yaml_flow_style)
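`YAMLObjectMetaclass` implements a register-on-subclass pattern: merely defining a subclass with a non-`None` `yaml_tag` wires its constructor and representer into the configured loaders and dumper. The same pattern in a self-contained sketch, with a plain dict registry (all names here are illustrative):

```python
REGISTRY = {}

class AutoRegisterMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        # Register only classes that declare their own non-None tag,
        # mirroring the "'yaml_tag' in kwds and ... is not None" check above.
        if namespace.get("tag") is not None:
            REGISTRY[namespace["tag"]] = cls

class Base(metaclass=AutoRegisterMeta):
    tag = None  # the base class itself is never registered

class Point(Base):
    tag = "!point"

assert REGISTRY == {"!point": Point}  # Point registered itself on definition
```

Checking the class's own namespace (rather than the inherited attribute) is what keeps intermediate subclasses without a tag of their own out of the registry.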


@@ -1,139 +0,0 @@
__all__ = ['Composer', 'ComposerError']
from .error import MarkedYAMLError
from .events import *
from .nodes import *
class ComposerError(MarkedYAMLError):
pass
class Composer:
def __init__(self):
self.anchors = {}
def check_node(self):
# Drop the STREAM-START event.
if self.check_event(StreamStartEvent):
self.get_event()
# Are there more documents available?
return not self.check_event(StreamEndEvent)
def get_node(self):
# Get the root node of the next document.
if not self.check_event(StreamEndEvent):
return self.compose_document()
def get_single_node(self):
# Drop the STREAM-START event.
self.get_event()
# Compose a document if the stream is not empty.
document = None
if not self.check_event(StreamEndEvent):
document = self.compose_document()
# Ensure that the stream contains no more documents.
if not self.check_event(StreamEndEvent):
event = self.get_event()
raise ComposerError("expected a single document in the stream",
document.start_mark, "but found another document",
event.start_mark)
# Drop the STREAM-END event.
self.get_event()
return document
def compose_document(self):
# Drop the DOCUMENT-START event.
self.get_event()
# Compose the root node.
node = self.compose_node(None, None)
# Drop the DOCUMENT-END event.
self.get_event()
self.anchors = {}
return node
def compose_node(self, parent, index):
if self.check_event(AliasEvent):
event = self.get_event()
anchor = event.anchor
if anchor not in self.anchors:
raise ComposerError(None, None, "found undefined alias %r"
% anchor, event.start_mark)
return self.anchors[anchor]
event = self.peek_event()
anchor = event.anchor
if anchor is not None:
if anchor in self.anchors:
raise ComposerError("found duplicate anchor %r; first occurrence"
% anchor, self.anchors[anchor].start_mark,
"second occurrence", event.start_mark)
self.descend_resolver(parent, index)
if self.check_event(ScalarEvent):
node = self.compose_scalar_node(anchor)
elif self.check_event(SequenceStartEvent):
node = self.compose_sequence_node(anchor)
elif self.check_event(MappingStartEvent):
node = self.compose_mapping_node(anchor)
self.ascend_resolver()
return node
def compose_scalar_node(self, anchor):
event = self.get_event()
tag = event.tag
if tag is None or tag == '!':
tag = self.resolve(ScalarNode, event.value, event.implicit)
node = ScalarNode(tag, event.value,
event.start_mark, event.end_mark, style=event.style)
if anchor is not None:
self.anchors[anchor] = node
return node
def compose_sequence_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == '!':
tag = self.resolve(SequenceNode, None, start_event.implicit)
node = SequenceNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style)
if anchor is not None:
self.anchors[anchor] = node
index = 0
while not self.check_event(SequenceEndEvent):
node.value.append(self.compose_node(node, index))
index += 1
end_event = self.get_event()
node.end_mark = end_event.end_mark
return node
def compose_mapping_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == '!':
tag = self.resolve(MappingNode, None, start_event.implicit)
node = MappingNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style)
if anchor is not None:
self.anchors[anchor] = node
while not self.check_event(MappingEndEvent):
#key_event = self.peek_event()
item_key = self.compose_node(node, None)
#if item_key in node.value:
# raise ComposerError("while composing a mapping", start_event.start_mark,
# "found duplicate key", key_event.start_mark)
item_value = self.compose_node(node, item_key)
#node.value[item_key] = item_value
node.value.append((item_key, item_value))
end_event = self.get_event()
node.end_mark = end_event.end_mark
return node
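The anchor bookkeeping in `compose_node` above is what makes aliases work: the first `&anchor` occurrence stores the composed node in `self.anchors`, a later `*anchor` alias returns that same node object, and a duplicate anchor or an undefined alias raises `ComposerError`. A small document exercising the happy path (illustrative only):

```yaml
defaults: &defaults   # first occurrence: compose_node records the node under "defaults"
  retries: 3
  timeout: 30
copy: *defaults       # AliasEvent: the composer returns the already-composed node
```

Because the alias branch returns the stored node itself, aliased subtrees are shared, not copied, in the composed node graph.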


@@ -1,748 +0,0 @@
__all__ = [
'BaseConstructor',
'SafeConstructor',
'FullConstructor',
'UnsafeConstructor',
'Constructor',
'ConstructorError'
]
from .error import *
from .nodes import *
import collections.abc, datetime, base64, binascii, re, sys, types
class ConstructorError(MarkedYAMLError):
pass
class BaseConstructor:
yaml_constructors = {}
yaml_multi_constructors = {}
def __init__(self):
self.constructed_objects = {}
self.recursive_objects = {}
self.state_generators = []
self.deep_construct = False
def check_data(self):
        # Are there any more documents available?
return self.check_node()
def check_state_key(self, key):
"""Block special attributes/methods from being set in a newly created
object, to prevent user-controlled methods from being called during
deserialization"""
if self.get_state_keys_blacklist_regexp().match(key):
raise ConstructorError(None, None,
"blacklisted key '%s' in instance state found" % (key,), None)
def get_data(self):
# Construct and return the next document.
if self.check_node():
return self.construct_document(self.get_node())
def get_single_data(self):
# Ensure that the stream contains a single document and construct it.
node = self.get_single_node()
if node is not None:
return self.construct_document(node)
return None
def construct_document(self, node):
data = self.construct_object(node)
while self.state_generators:
state_generators = self.state_generators
self.state_generators = []
for generator in state_generators:
for dummy in generator:
pass
self.constructed_objects = {}
self.recursive_objects = {}
self.deep_construct = False
return data
def construct_object(self, node, deep=False):
if node in self.constructed_objects:
return self.constructed_objects[node]
if deep:
old_deep = self.deep_construct
self.deep_construct = True
if node in self.recursive_objects:
raise ConstructorError(None, None,
"found unconstructable recursive node", node.start_mark)
self.recursive_objects[node] = None
constructor = None
tag_suffix = None
if node.tag in self.yaml_constructors:
constructor = self.yaml_constructors[node.tag]
else:
for tag_prefix in self.yaml_multi_constructors:
if tag_prefix is not None and node.tag.startswith(tag_prefix):
tag_suffix = node.tag[len(tag_prefix):]
constructor = self.yaml_multi_constructors[tag_prefix]
break
else:
if None in self.yaml_multi_constructors:
tag_suffix = node.tag
constructor = self.yaml_multi_constructors[None]
elif None in self.yaml_constructors:
constructor = self.yaml_constructors[None]
elif isinstance(node, ScalarNode):
constructor = self.__class__.construct_scalar
elif isinstance(node, SequenceNode):
constructor = self.__class__.construct_sequence
elif isinstance(node, MappingNode):
constructor = self.__class__.construct_mapping
if tag_suffix is None:
data = constructor(self, node)
else:
data = constructor(self, tag_suffix, node)
if isinstance(data, types.GeneratorType):
generator = data
data = next(generator)
if self.deep_construct:
for dummy in generator:
pass
else:
self.state_generators.append(generator)
self.constructed_objects[node] = data
del self.recursive_objects[node]
if deep:
self.deep_construct = old_deep
return data
def construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(None, None,
"expected a scalar node, but found %s" % node.id,
node.start_mark)
return node.value
def construct_sequence(self, node, deep=False):
if not isinstance(node, SequenceNode):
raise ConstructorError(None, None,
"expected a sequence node, but found %s" % node.id,
node.start_mark)
return [self.construct_object(child, deep=deep)
for child in node.value]
def construct_mapping(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
mapping = {}
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
if not isinstance(key, collections.abc.Hashable):
raise ConstructorError("while constructing a mapping", node.start_mark,
"found unhashable key", key_node.start_mark)
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
def construct_pairs(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
pairs = []
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
value = self.construct_object(value_node, deep=deep)
pairs.append((key, value))
return pairs
@classmethod
def add_constructor(cls, tag, constructor):
        if 'yaml_constructors' not in cls.__dict__:
cls.yaml_constructors = cls.yaml_constructors.copy()
cls.yaml_constructors[tag] = constructor
@classmethod
def add_multi_constructor(cls, tag_prefix, multi_constructor):
        if 'yaml_multi_constructors' not in cls.__dict__:
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
class SafeConstructor(BaseConstructor):
def construct_scalar(self, node):
if isinstance(node, MappingNode):
for key_node, value_node in node.value:
if key_node.tag == 'tag:yaml.org,2002:value':
return self.construct_scalar(value_node)
return super().construct_scalar(node)
def flatten_mapping(self, node):
merge = []
index = 0
while index < len(node.value):
key_node, value_node = node.value[index]
if key_node.tag == 'tag:yaml.org,2002:merge':
del node.value[index]
if isinstance(value_node, MappingNode):
self.flatten_mapping(value_node)
merge.extend(value_node.value)
elif isinstance(value_node, SequenceNode):
submerge = []
for subnode in value_node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing a mapping",
node.start_mark,
"expected a mapping for merging, but found %s"
% subnode.id, subnode.start_mark)
self.flatten_mapping(subnode)
submerge.append(subnode.value)
submerge.reverse()
for value in submerge:
merge.extend(value)
else:
raise ConstructorError("while constructing a mapping", node.start_mark,
"expected a mapping or list of mappings for merging, but found %s"
% value_node.id, value_node.start_mark)
elif key_node.tag == 'tag:yaml.org,2002:value':
key_node.tag = 'tag:yaml.org,2002:str'
index += 1
else:
index += 1
if merge:
node.value = merge + node.value
def construct_mapping(self, node, deep=False):
if isinstance(node, MappingNode):
self.flatten_mapping(node)
return super().construct_mapping(node, deep=deep)
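`flatten_mapping` above is what implements the YAML merge key (`<<`): merged entries are prepended to `node.value`, so keys written directly on the mapping shadow merged ones. A hypothetical config using it:

```yaml
defaults: &d
  retries: 3
  timeout: 30
service:
  <<: *d           # merge in retries and timeout ...
  timeout: 10      # ... but the local key wins over the merged one
```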
def construct_yaml_null(self, node):
self.construct_scalar(node)
return None
bool_values = {
'yes': True,
'no': False,
'true': True,
'false': False,
'on': True,
'off': False,
}
def construct_yaml_bool(self, node):
value = self.construct_scalar(node)
return self.bool_values[value.lower()]
def construct_yaml_int(self, node):
value = self.construct_scalar(node)
value = value.replace('_', '')
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '0':
return 0
elif value.startswith('0b'):
return sign*int(value[2:], 2)
elif value.startswith('0x'):
return sign*int(value[2:], 16)
elif value[0] == '0':
return sign*int(value, 8)
elif ':' in value:
digits = [int(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*int(value)
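The `':' in value` branch above handles YAML 1.1 sexagesimal (base-60) integers such as `1:30:00`. A standalone sketch of that loop (the function name is mine, not part of the module):

```python
def parse_sexagesimal(value: str) -> int:
    # Mirrors the ':' branch of construct_yaml_int:
    # "1:30:00" -> 0*1 + 30*60 + 1*3600 = 5400
    digits = [int(part) for part in value.split(':')]
    digits.reverse()
    base, total = 1, 0
    for digit in digits:
        total += digit * base
        base *= 60
    return total
```

For example, `parse_sexagesimal("1:30:00")` yields 5400, which is why `1:30:00` safe-loads as an integer under YAML 1.1 resolution.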
inf_value = 1e300
while inf_value != inf_value*inf_value:
inf_value *= inf_value
nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99).
def construct_yaml_float(self, node):
value = self.construct_scalar(node)
value = value.replace('_', '').lower()
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '.inf':
return sign*self.inf_value
elif value == '.nan':
return self.nan_value
elif ':' in value:
digits = [float(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0.0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*float(value)
def construct_yaml_binary(self, node):
try:
value = self.construct_scalar(node).encode('ascii')
except UnicodeEncodeError as exc:
raise ConstructorError(None, None,
"failed to convert base64 data into ascii: %s" % exc,
node.start_mark)
try:
if hasattr(base64, 'decodebytes'):
return base64.decodebytes(value)
else:
return base64.decodestring(value)
except binascii.Error as exc:
raise ConstructorError(None, None,
"failed to decode base64 data: %s" % exc, node.start_mark)
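`construct_yaml_binary` above is essentially `base64.decodebytes` over the ASCII-encoded scalar, with the two failure modes translated into `ConstructorError`s (the `decodestring` fallback only matters on long-obsolete Python versions). A minimal sketch of the same round trip, with plain `ValueError`s standing in for the YAML errors:

```python
import base64
import binascii

def decode_yaml_binary(scalar: str) -> bytes:
    # Mirrors construct_yaml_binary without the YAML error wrapping.
    try:
        raw = scalar.encode('ascii')      # base64 text must be ASCII
    except UnicodeEncodeError as exc:
        raise ValueError("not ascii: %s" % exc)
    try:
        return base64.decodebytes(raw)    # tolerant of embedded newlines
    except binascii.Error as exc:
        raise ValueError("bad base64: %s" % exc)
```

Note that `decodebytes` ignores newlines, which is what lets multi-line `!!binary` block scalars decode cleanly.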
timestamp_regexp = re.compile(
r'''^(?P<year>[0-9][0-9][0-9][0-9])
-(?P<month>[0-9][0-9]?)
-(?P<day>[0-9][0-9]?)
(?:(?:[Tt]|[ \t]+)
(?P<hour>[0-9][0-9]?)
:(?P<minute>[0-9][0-9])
:(?P<second>[0-9][0-9])
(?:\.(?P<fraction>[0-9]*))?
(?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
def construct_yaml_timestamp(self, node):
value = self.construct_scalar(node)
match = self.timestamp_regexp.match(node.value)
values = match.groupdict()
year = int(values['year'])
month = int(values['month'])
day = int(values['day'])
if not values['hour']:
return datetime.date(year, month, day)
hour = int(values['hour'])
minute = int(values['minute'])
second = int(values['second'])
fraction = 0
tzinfo = None
if values['fraction']:
fraction = values['fraction'][:6]
while len(fraction) < 6:
fraction += '0'
fraction = int(fraction)
if values['tz_sign']:
tz_hour = int(values['tz_hour'])
tz_minute = int(values['tz_minute'] or 0)
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
if values['tz_sign'] == '-':
delta = -delta
tzinfo = datetime.timezone(delta)
elif values['tz']:
tzinfo = datetime.timezone.utc
return datetime.datetime(year, month, day, hour, minute, second, fraction,
tzinfo=tzinfo)
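The fraction handling inside `construct_yaml_timestamp` pads or truncates the captured digits to exactly six, i.e. to microseconds for `datetime.datetime`. That step in isolation (the function name is mine):

```python
def fraction_to_microseconds(fraction_digits: str) -> int:
    # Mirrors the fraction branch of construct_yaml_timestamp:
    # ".25" captures "25" -> 250000 microseconds; extra digits are dropped.
    fraction = fraction_digits[:6]
    while len(fraction) < 6:
        fraction += '0'
    return int(fraction)
```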
def construct_yaml_omap(self, node):
# Note: we do not check for duplicate keys, because it's too
# CPU-expensive.
omap = []
yield omap
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
omap.append((key, value))
def construct_yaml_pairs(self, node):
# Note: the same code as `construct_yaml_omap`.
pairs = []
yield pairs
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
pairs.append((key, value))
def construct_yaml_set(self, node):
data = set()
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_str(self, node):
return self.construct_scalar(node)
def construct_yaml_seq(self, node):
data = []
yield data
data.extend(self.construct_sequence(node))
def construct_yaml_map(self, node):
data = {}
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_object(self, node, cls):
data = cls.__new__(cls)
yield data
if hasattr(data, '__setstate__'):
state = self.construct_mapping(node, deep=True)
data.__setstate__(state)
else:
state = self.construct_mapping(node)
data.__dict__.update(state)
def construct_undefined(self, node):
raise ConstructorError(None, None,
"could not determine a constructor for the tag %r" % node.tag,
node.start_mark)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:null',
SafeConstructor.construct_yaml_null)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:bool',
SafeConstructor.construct_yaml_bool)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:int',
SafeConstructor.construct_yaml_int)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:float',
SafeConstructor.construct_yaml_float)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:binary',
SafeConstructor.construct_yaml_binary)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:timestamp',
SafeConstructor.construct_yaml_timestamp)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:omap',
SafeConstructor.construct_yaml_omap)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:pairs',
SafeConstructor.construct_yaml_pairs)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:set',
SafeConstructor.construct_yaml_set)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:str',
SafeConstructor.construct_yaml_str)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:seq',
SafeConstructor.construct_yaml_seq)
SafeConstructor.add_constructor(
'tag:yaml.org,2002:map',
SafeConstructor.construct_yaml_map)
SafeConstructor.add_constructor(None,
SafeConstructor.construct_undefined)
class FullConstructor(SafeConstructor):
    # 'extend' is blacklisted because it is used by
    # construct_python_object_apply to add `listitems` to a newly
    # generated Python instance.
def get_state_keys_blacklist(self):
return ['^extend$', '^__.*__$']
def get_state_keys_blacklist_regexp(self):
if not hasattr(self, 'state_keys_blacklist_regexp'):
self.state_keys_blacklist_regexp = re.compile('(' + '|'.join(self.get_state_keys_blacklist()) + ')')
return self.state_keys_blacklist_regexp
def construct_python_str(self, node):
return self.construct_scalar(node)
def construct_python_unicode(self, node):
return self.construct_scalar(node)
def construct_python_bytes(self, node):
try:
value = self.construct_scalar(node).encode('ascii')
except UnicodeEncodeError as exc:
raise ConstructorError(None, None,
"failed to convert base64 data into ascii: %s" % exc,
node.start_mark)
try:
if hasattr(base64, 'decodebytes'):
return base64.decodebytes(value)
else:
return base64.decodestring(value)
except binascii.Error as exc:
raise ConstructorError(None, None,
"failed to decode base64 data: %s" % exc, node.start_mark)
def construct_python_long(self, node):
return self.construct_yaml_int(node)
def construct_python_complex(self, node):
return complex(self.construct_scalar(node))
def construct_python_tuple(self, node):
return tuple(self.construct_sequence(node))
def find_python_module(self, name, mark, unsafe=False):
if not name:
raise ConstructorError("while constructing a Python module", mark,
"expected non-empty name appended to the tag", mark)
if unsafe:
try:
__import__(name)
except ImportError as exc:
raise ConstructorError("while constructing a Python module", mark,
"cannot find module %r (%s)" % (name, exc), mark)
if name not in sys.modules:
raise ConstructorError("while constructing a Python module", mark,
"module %r is not imported" % name, mark)
return sys.modules[name]
def find_python_name(self, name, mark, unsafe=False):
if not name:
raise ConstructorError("while constructing a Python object", mark,
"expected non-empty name appended to the tag", mark)
if '.' in name:
module_name, object_name = name.rsplit('.', 1)
else:
module_name = 'builtins'
object_name = name
if unsafe:
try:
__import__(module_name)
except ImportError as exc:
raise ConstructorError("while constructing a Python object", mark,
"cannot find module %r (%s)" % (module_name, exc), mark)
if module_name not in sys.modules:
raise ConstructorError("while constructing a Python object", mark,
"module %r is not imported" % module_name, mark)
module = sys.modules[module_name]
if not hasattr(module, object_name):
raise ConstructorError("while constructing a Python object", mark,
"cannot find %r in the module %r"
% (object_name, module.__name__), mark)
return getattr(module, object_name)
def construct_python_name(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python name", node.start_mark,
"expected the empty value, but found %r" % value, node.start_mark)
return self.find_python_name(suffix, node.start_mark)
def construct_python_module(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python module", node.start_mark,
"expected the empty value, but found %r" % value, node.start_mark)
return self.find_python_module(suffix, node.start_mark)
def make_python_instance(self, suffix, node,
args=None, kwds=None, newobj=False, unsafe=False):
if not args:
args = []
if not kwds:
kwds = {}
cls = self.find_python_name(suffix, node.start_mark)
if not (unsafe or isinstance(cls, type)):
raise ConstructorError("while constructing a Python instance", node.start_mark,
"expected a class, but found %r" % type(cls),
node.start_mark)
if newobj and isinstance(cls, type):
return cls.__new__(cls, *args, **kwds)
else:
return cls(*args, **kwds)
def set_python_instance_state(self, instance, state, unsafe=False):
if hasattr(instance, '__setstate__'):
instance.__setstate__(state)
else:
slotstate = {}
if isinstance(state, tuple) and len(state) == 2:
state, slotstate = state
if hasattr(instance, '__dict__'):
if not unsafe and state:
for key in state.keys():
self.check_state_key(key)
instance.__dict__.update(state)
elif state:
slotstate.update(state)
for key, value in slotstate.items():
if not unsafe:
self.check_state_key(key)
setattr(instance, key, value)
def construct_python_object(self, suffix, node):
# Format:
# !!python/object:module.name { ... state ... }
instance = self.make_python_instance(suffix, node, newobj=True)
yield instance
deep = hasattr(instance, '__setstate__')
state = self.construct_mapping(node, deep=deep)
self.set_python_instance_state(instance, state)
def construct_python_object_apply(self, suffix, node, newobj=False):
# Format:
# !!python/object/apply # (or !!python/object/new)
# args: [ ... arguments ... ]
# kwds: { ... keywords ... }
# state: ... state ...
# listitems: [ ... listitems ... ]
# dictitems: { ... dictitems ... }
# or short format:
# !!python/object/apply [ ... arguments ... ]
# The difference between !!python/object/apply and !!python/object/new
# is how an object is created, check make_python_instance for details.
if isinstance(node, SequenceNode):
args = self.construct_sequence(node, deep=True)
kwds = {}
state = {}
listitems = []
dictitems = {}
else:
value = self.construct_mapping(node, deep=True)
args = value.get('args', [])
kwds = value.get('kwds', {})
state = value.get('state', {})
listitems = value.get('listitems', [])
dictitems = value.get('dictitems', {})
instance = self.make_python_instance(suffix, node, args, kwds, newobj)
if state:
self.set_python_instance_state(instance, state)
if listitems:
instance.extend(listitems)
if dictitems:
for key in dictitems:
instance[key] = dictitems[key]
return instance
def construct_python_object_new(self, suffix, node):
return self.construct_python_object_apply(suffix, node, newobj=True)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/none',
FullConstructor.construct_yaml_null)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/bool',
FullConstructor.construct_yaml_bool)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/str',
FullConstructor.construct_python_str)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/unicode',
FullConstructor.construct_python_unicode)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/bytes',
FullConstructor.construct_python_bytes)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/int',
FullConstructor.construct_yaml_int)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/long',
FullConstructor.construct_python_long)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/float',
FullConstructor.construct_yaml_float)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/complex',
FullConstructor.construct_python_complex)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/list',
FullConstructor.construct_yaml_seq)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/tuple',
FullConstructor.construct_python_tuple)
FullConstructor.add_constructor(
'tag:yaml.org,2002:python/dict',
FullConstructor.construct_yaml_map)
FullConstructor.add_multi_constructor(
'tag:yaml.org,2002:python/name:',
FullConstructor.construct_python_name)
class UnsafeConstructor(FullConstructor):
def find_python_module(self, name, mark):
return super(UnsafeConstructor, self).find_python_module(name, mark, unsafe=True)
def find_python_name(self, name, mark):
return super(UnsafeConstructor, self).find_python_name(name, mark, unsafe=True)
def make_python_instance(self, suffix, node, args=None, kwds=None, newobj=False):
return super(UnsafeConstructor, self).make_python_instance(
suffix, node, args, kwds, newobj, unsafe=True)
def set_python_instance_state(self, instance, state):
return super(UnsafeConstructor, self).set_python_instance_state(
instance, state, unsafe=True)
UnsafeConstructor.add_multi_constructor(
'tag:yaml.org,2002:python/module:',
UnsafeConstructor.construct_python_module)
UnsafeConstructor.add_multi_constructor(
'tag:yaml.org,2002:python/object:',
UnsafeConstructor.construct_python_object)
UnsafeConstructor.add_multi_constructor(
'tag:yaml.org,2002:python/object/new:',
UnsafeConstructor.construct_python_object_new)
UnsafeConstructor.add_multi_constructor(
'tag:yaml.org,2002:python/object/apply:',
UnsafeConstructor.construct_python_object_apply)
# Constructor is the same as UnsafeConstructor. It needs to stay in place in
# case people have extended it directly.
class Constructor(UnsafeConstructor):
pass
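The `check_state_key` / `get_state_keys_blacklist` pair defined earlier is `FullConstructor`'s guard against `!!python/object` payloads smuggling dunder attributes (or `extend`) into instance state during deserialization. Its matching logic, reduced to a standalone sketch (the module-level names here are mine):

```python
import re

# Same patterns as FullConstructor.get_state_keys_blacklist
BLACKLIST = ['^extend$', '^__.*__$']
BLACKLIST_RE = re.compile('(' + '|'.join(BLACKLIST) + ')')

def is_blocked_state_key(key: str) -> bool:
    # True for keys FullConstructor refuses to set on a deserialized object
    return BLACKLIST_RE.match(key) is not None
```

Ordinary attribute names pass; anything that could hijack method dispatch (`__class__`, `__reduce__`, ...) is rejected before it reaches `instance.__dict__`.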


@@ -1,101 +0,0 @@
__all__ = [
'CBaseLoader', 'CSafeLoader', 'CFullLoader', 'CUnsafeLoader', 'CLoader',
'CBaseDumper', 'CSafeDumper', 'CDumper'
]
from yaml._yaml import CParser, CEmitter
from .constructor import *
from .serializer import *
from .representer import *
from .resolver import *
class CBaseLoader(CParser, BaseConstructor, BaseResolver):
def __init__(self, stream):
CParser.__init__(self, stream)
BaseConstructor.__init__(self)
BaseResolver.__init__(self)
class CSafeLoader(CParser, SafeConstructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
SafeConstructor.__init__(self)
Resolver.__init__(self)
class CFullLoader(CParser, FullConstructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
FullConstructor.__init__(self)
Resolver.__init__(self)
class CUnsafeLoader(CParser, UnsafeConstructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
UnsafeConstructor.__init__(self)
Resolver.__init__(self)
class CLoader(CParser, Constructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
Constructor.__init__(self)
Resolver.__init__(self)
class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)
class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)
class CDumper(CEmitter, Serializer, Representer, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)
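The `C*` classes above only exist when PyYAML is built against libyaml, so callers conventionally guard the import and fall back to the pure-Python classes (`from yaml import CSafeLoader`, falling back to `SafeLoader`). The same idiom, sketched with a deliberately missing module so the fallback branch runs here (`_nonexistent_accel` is made up for the sketch):

```python
try:
    # Prefer the accelerated implementation when the extension is available;
    # this module intentionally does not exist, so the fallback is taken.
    from _nonexistent_accel import loads
except ImportError:
    from json import loads  # guaranteed pure-Python fallback
```

With PyYAML the payoff is the same API at libyaml speed when the extension was compiled, and a graceful degradation when it was not.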


@@ -1,62 +0,0 @@
__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']
from .emitter import *
from .serializer import *
from .representer import *
from .resolver import *
class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)
class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)
class Dumper(Emitter, Serializer, Representer, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style, sort_keys=sort_keys)
Resolver.__init__(self)

File diff suppressed because it is too large.


@@ -1,75 +0,0 @@
__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
class Mark:
def __init__(self, name, index, line, column, buffer, pointer):
self.name = name
self.index = index
self.line = line
self.column = column
self.buffer = buffer
self.pointer = pointer
def get_snippet(self, indent=4, max_length=75):
if self.buffer is None:
return None
head = ''
start = self.pointer
while start > 0 and self.buffer[start-1] not in '\0\r\n\x85\u2028\u2029':
start -= 1
if self.pointer-start > max_length/2-1:
head = ' ... '
start += 5
break
tail = ''
end = self.pointer
while end < len(self.buffer) and self.buffer[end] not in '\0\r\n\x85\u2028\u2029':
end += 1
if end-self.pointer > max_length/2-1:
tail = ' ... '
end -= 5
break
snippet = self.buffer[start:end]
return ' '*indent + head + snippet + tail + '\n' \
+ ' '*(indent+self.pointer-start+len(head)) + '^'
def __str__(self):
snippet = self.get_snippet()
where = " in \"%s\", line %d, column %d" \
% (self.name, self.line+1, self.column+1)
if snippet is not None:
where += ":\n"+snippet
return where
class YAMLError(Exception):
pass
class MarkedYAMLError(YAMLError):
def __init__(self, context=None, context_mark=None,
problem=None, problem_mark=None, note=None):
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
def __str__(self):
lines = []
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None \
and (self.problem is None or self.problem_mark is None
or self.context_mark.name != self.problem_mark.name
or self.context_mark.line != self.problem_mark.line
or self.context_mark.column != self.problem_mark.column):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None:
lines.append(self.note)
return '\n'.join(lines)
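`Mark.get_snippet` above is what produces the familiar caret-under-the-error excerpt in PyYAML error messages. A simplified standalone version without the `" ... "` truncation handling (the function name is mine):

```python
def caret_snippet(buffer: str, pointer: int, indent: int = 4) -> str:
    # Isolate the line containing `pointer`, then draw a caret beneath the
    # offending column, mirroring Mark.get_snippet minus truncation.
    start = buffer.rfind('\n', 0, pointer) + 1
    end = buffer.find('\n', pointer)
    if end == -1:
        end = len(buffer)
    line = buffer[start:end]
    return ' ' * indent + line + '\n' + ' ' * (indent + pointer - start) + '^'
```

For the buffer `"a: 1\nb: [2\n"` with the pointer on the `[`, this renders the second line with a caret under column 3, which is exactly the shape `MarkedYAMLError.__str__` appends to its message.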


@@ -1,86 +0,0 @@
# Abstract classes.
class Event:
def __init__(self, start_mark=None, end_mark=None):
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
if hasattr(self, key)]
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
return '%s(%s)' % (self.__class__.__name__, arguments)
class NodeEvent(Event):
def __init__(self, anchor, start_mark=None, end_mark=None):
self.anchor = anchor
self.start_mark = start_mark
self.end_mark = end_mark
class CollectionStartEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
flow_style=None):
self.anchor = anchor
self.tag = tag
self.implicit = implicit
self.start_mark = start_mark
self.end_mark = end_mark
self.flow_style = flow_style
class CollectionEndEvent(Event):
pass
# Implementations.
class StreamStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None, encoding=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.encoding = encoding
class StreamEndEvent(Event):
pass
class DocumentStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None, version=None, tags=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.explicit = explicit
self.version = version
self.tags = tags
class DocumentEndEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.explicit = explicit
class AliasEvent(NodeEvent):
pass
class ScalarEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, value,
start_mark=None, end_mark=None, style=None):
self.anchor = anchor
self.tag = tag
self.implicit = implicit
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style
class SequenceStartEvent(CollectionStartEvent):
pass
class SequenceEndEvent(CollectionEndEvent):
pass
class MappingStartEvent(CollectionStartEvent):
pass
class MappingEndEvent(CollectionEndEvent):
pass
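The shared `Event.__repr__` above only prints the attributes a given event type actually defines, so scalar events show their value while end events show nothing. A minimal standalone sketch (trimmed copies of `Event`/`ScalarEvent`, marks omitted) of what that looks like:

```python
class Event:
    def __repr__(self):
        # Only show the attributes this event type actually carries
        attributes = [key for key in ('anchor', 'tag', 'implicit', 'value')
                      if hasattr(self, key)]
        arguments = ', '.join('%s=%r' % (key, getattr(self, key)) for key in attributes)
        return '%s(%s)' % (self.__class__.__name__, arguments)


class ScalarEvent(Event):
    def __init__(self, anchor, tag, implicit, value):
        self.anchor, self.tag, self.implicit, self.value = anchor, tag, implicit, value


print(repr(ScalarEvent(None, None, (True, False), 'hello')))
# ScalarEvent(anchor=None, tag=None, implicit=(True, False), value='hello')
```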


@ -1,63 +0,0 @@
__all__ = ['BaseLoader', 'FullLoader', 'SafeLoader', 'Loader', 'UnsafeLoader']
from .reader import *
from .scanner import *
from .parser import *
from .composer import *
from .constructor import *
from .resolver import *
class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
BaseConstructor.__init__(self)
BaseResolver.__init__(self)
class FullLoader(Reader, Scanner, Parser, Composer, FullConstructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
FullConstructor.__init__(self)
Resolver.__init__(self)
class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
SafeConstructor.__init__(self)
Resolver.__init__(self)
class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
Constructor.__init__(self)
Resolver.__init__(self)
# UnsafeLoader is the same as Loader (which is and was always unsafe on
# untrusted input). Use of either Loader or UnsafeLoader should be rare, since
# FullLoader should be able to load almost all YAML safely. Loader is left intact
# to ensure backwards compatibility.
class UnsafeLoader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
Constructor.__init__(self)
Resolver.__init__(self)
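Each loader above wires the same pipeline stages (reader, scanner, parser, composer, constructor, resolver) together via multiple inheritance, calling every stage's `__init__` explicitly rather than relying on cooperative `super()`. A toy sketch of that composition pattern, with hypothetical stand-in stages:

```python
class StageA:
    def __init__(self):
        self.a_ready = True


class StageB:
    def __init__(self):
        self.b_ready = True


class ToyLoader(StageA, StageB):
    def __init__(self, stream):
        # Mirrors the loaders above: each base is initialized explicitly,
        # since the stages are independent mixins without super() chains.
        self.stream = stream
        StageA.__init__(self)
        StageB.__init__(self)


ldr = ToyLoader("a: 1")
print(ldr.a_ready and ldr.b_ready)  # True
```

The upside of this style is that each stage's constructor takes exactly the arguments it needs; the downside is that adding a stage means editing every loader class, which is why the four loaders above repeat the same six-line body.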


@ -1,49 +0,0 @@
class Node(object):
def __init__(self, tag, value, start_mark, end_mark):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
value = self.value
#if isinstance(value, list):
# if len(value) == 0:
# value = '<empty>'
# elif len(value) == 1:
# value = '<1 item>'
# else:
# value = '<%d items>' % len(value)
#else:
# if len(value) > 75:
# value = repr(value[:70]+u' ... ')
# else:
# value = repr(value)
value = repr(value)
return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)
class ScalarNode(Node):
id = 'scalar'
def __init__(self, tag, value,
start_mark=None, end_mark=None, style=None):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style
class CollectionNode(Node):
def __init__(self, tag, value,
start_mark=None, end_mark=None, flow_style=None):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.flow_style = flow_style
class SequenceNode(CollectionNode):
id = 'sequence'
class MappingNode(CollectionNode):
id = 'mapping'
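The node classes above form the composed representation tree: a `MappingNode`'s value is a list of `(key_node, value_node)` pairs, and a `SequenceNode`'s value is a list of child nodes. A standalone sketch building the tiny tree for `a: 1` with trimmed stand-ins (marks omitted):

```python
class ScalarNode:
    id = 'scalar'
    def __init__(self, tag, value):
        self.tag, self.value = tag, value


class MappingNode:
    id = 'mapping'
    def __init__(self, tag, value):
        # value is a list of (key_node, value_node) pairs
        self.tag, self.value = tag, value


# The composed tree for the document "a: 1"
key = ScalarNode('tag:yaml.org,2002:str', 'a')
val = ScalarNode('tag:yaml.org,2002:int', '1')
root = MappingNode('tag:yaml.org,2002:map', [(key, val)])
print(root.value[0][0].value, root.value[0][1].value)  # a 1
```

Note that scalar values are kept as strings at this stage; turning `'1'` into the integer `1` is the constructor's job, driven by the `tag`.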


@ -1,589 +0,0 @@
# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
#
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
# ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# FIRST sets:
#
# stream: { STREAM-START }
# explicit_document: { DIRECTIVE DOCUMENT-START }
# implicit_document: FIRST(block_node)
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_sequence: { BLOCK-SEQUENCE-START }
# block_mapping: { BLOCK-MAPPING-START }
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
# indentless_sequence: { ENTRY }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_sequence: { FLOW-SEQUENCE-START }
# flow_mapping: { FLOW-MAPPING-START }
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
__all__ = ['Parser', 'ParserError']
from .error import MarkedYAMLError
from .tokens import *
from .events import *
from .scanner import *
class ParserError(MarkedYAMLError):
pass
class Parser:
# Since writing a recursive descent parser is a straightforward task, we
# do not give many comments here.
DEFAULT_TAGS = {
'!': '!',
'!!': 'tag:yaml.org,2002:',
}
def __init__(self):
self.current_event = None
self.yaml_version = None
self.tag_handles = {}
self.states = []
self.marks = []
self.state = self.parse_stream_start
def dispose(self):
# Reset the state attributes (to clear self-references)
self.states = []
self.state = None
def check_event(self, *choices):
# Check the type of the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
if self.current_event is not None:
if not choices:
return True
for choice in choices:
if isinstance(self.current_event, choice):
return True
return False
def peek_event(self):
# Get the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
return self.current_event
def get_event(self):
# Get the next event and proceed further.
if self.current_event is None:
if self.state:
self.current_event = self.state()
value = self.current_event
self.current_event = None
return value
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
def parse_stream_start(self):
# Parse the stream start.
token = self.get_token()
event = StreamStartEvent(token.start_mark, token.end_mark,
encoding=token.encoding)
# Prepare the next state.
self.state = self.parse_implicit_document_start
return event
def parse_implicit_document_start(self):
# Parse an implicit document.
if not self.check_token(DirectiveToken, DocumentStartToken,
StreamEndToken):
self.tag_handles = self.DEFAULT_TAGS
token = self.peek_token()
start_mark = end_mark = token.start_mark
event = DocumentStartEvent(start_mark, end_mark,
explicit=False)
# Prepare the next state.
self.states.append(self.parse_document_end)
self.state = self.parse_block_node
return event
else:
return self.parse_document_start()
def parse_document_start(self):
# Parse any extra document end indicators.
while self.check_token(DocumentEndToken):
self.get_token()
# Parse an explicit document.
if not self.check_token(StreamEndToken):
token = self.peek_token()
start_mark = token.start_mark
version, tags = self.process_directives()
if not self.check_token(DocumentStartToken):
raise ParserError(None, None,
"expected '<document start>', but found %r"
% self.peek_token().id,
self.peek_token().start_mark)
token = self.get_token()
end_mark = token.end_mark
event = DocumentStartEvent(start_mark, end_mark,
explicit=True, version=version, tags=tags)
self.states.append(self.parse_document_end)
self.state = self.parse_document_content
else:
# Parse the end of the stream.
token = self.get_token()
event = StreamEndEvent(token.start_mark, token.end_mark)
assert not self.states
assert not self.marks
self.state = None
return event
def parse_document_end(self):
# Parse the document end.
token = self.peek_token()
start_mark = end_mark = token.start_mark
explicit = False
if self.check_token(DocumentEndToken):
token = self.get_token()
end_mark = token.end_mark
explicit = True
event = DocumentEndEvent(start_mark, end_mark,
explicit=explicit)
# Prepare the next state.
self.state = self.parse_document_start
return event
def parse_document_content(self):
if self.check_token(DirectiveToken,
DocumentStartToken, DocumentEndToken, StreamEndToken):
event = self.process_empty_scalar(self.peek_token().start_mark)
self.state = self.states.pop()
return event
else:
return self.parse_block_node()
def process_directives(self):
self.yaml_version = None
self.tag_handles = {}
while self.check_token(DirectiveToken):
token = self.get_token()
if token.name == 'YAML':
if self.yaml_version is not None:
raise ParserError(None, None,
"found duplicate YAML directive", token.start_mark)
major, minor = token.value
if major != 1:
raise ParserError(None, None,
"found incompatible YAML document (version 1.* is required)",
token.start_mark)
self.yaml_version = token.value
elif token.name == 'TAG':
handle, prefix = token.value
if handle in self.tag_handles:
raise ParserError(None, None,
"duplicate tag handle %r" % handle,
token.start_mark)
self.tag_handles[handle] = prefix
if self.tag_handles:
value = self.yaml_version, self.tag_handles.copy()
else:
value = self.yaml_version, None
for key in self.DEFAULT_TAGS:
if key not in self.tag_handles:
self.tag_handles[key] = self.DEFAULT_TAGS[key]
return value
# block_node_or_indentless_sequence ::= ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
def parse_block_node(self):
return self.parse_node(block=True)
def parse_flow_node(self):
return self.parse_node()
def parse_block_node_or_indentless_sequence(self):
return self.parse_node(block=True, indentless_sequence=True)
def parse_node(self, block=False, indentless_sequence=False):
if self.check_token(AliasToken):
token = self.get_token()
event = AliasEvent(token.value, token.start_mark, token.end_mark)
self.state = self.states.pop()
else:
anchor = None
tag = None
start_mark = end_mark = tag_mark = None
if self.check_token(AnchorToken):
token = self.get_token()
start_mark = token.start_mark
end_mark = token.end_mark
anchor = token.value
if self.check_token(TagToken):
token = self.get_token()
tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
elif self.check_token(TagToken):
token = self.get_token()
start_mark = tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
if self.check_token(AnchorToken):
token = self.get_token()
end_mark = token.end_mark
anchor = token.value
if tag is not None:
handle, suffix = tag
if handle is not None:
if handle not in self.tag_handles:
raise ParserError("while parsing a node", start_mark,
"found undefined tag handle %r" % handle,
tag_mark)
tag = self.tag_handles[handle]+suffix
else:
tag = suffix
#if tag == '!':
# raise ParserError("while parsing a node", start_mark,
# "found non-specific tag '!'", tag_mark,
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
if start_mark is None:
start_mark = end_mark = self.peek_token().start_mark
event = None
implicit = (tag is None or tag == '!')
if indentless_sequence and self.check_token(BlockEntryToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark)
self.state = self.parse_indentless_sequence_entry
else:
if self.check_token(ScalarToken):
token = self.get_token()
end_mark = token.end_mark
if (token.plain and tag is None) or tag == '!':
implicit = (True, False)
elif tag is None:
implicit = (False, True)
else:
implicit = (False, False)
event = ScalarEvent(anchor, tag, implicit, token.value,
start_mark, end_mark, style=token.style)
self.state = self.states.pop()
elif self.check_token(FlowSequenceStartToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_sequence_first_entry
elif self.check_token(FlowMappingStartToken):
end_mark = self.peek_token().end_mark
event = MappingStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_mapping_first_key
elif block and self.check_token(BlockSequenceStartToken):
end_mark = self.peek_token().start_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=False)
self.state = self.parse_block_sequence_first_entry
elif block and self.check_token(BlockMappingStartToken):
end_mark = self.peek_token().start_mark
event = MappingStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=False)
self.state = self.parse_block_mapping_first_key
elif anchor is not None or tag is not None:
# Empty scalars are allowed even if a tag or an anchor is
# specified.
event = ScalarEvent(anchor, tag, (implicit, False), '',
start_mark, end_mark)
self.state = self.states.pop()
else:
if block:
node = 'block'
else:
node = 'flow'
token = self.peek_token()
raise ParserError("while parsing a %s node" % node, start_mark,
"expected the node content, but found %r" % token.id,
token.start_mark)
return event
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
def parse_block_sequence_first_entry(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_block_sequence_entry()
def parse_block_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
if not self.check_token(BlockEntryToken, BlockEndToken):
self.states.append(self.parse_block_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_block_sequence_entry
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError("while parsing a block collection", self.marks[-1],
"expected <block end>, but found %r" % token.id, token.start_mark)
token = self.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
def parse_indentless_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
if not self.check_token(BlockEntryToken,
KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_indentless_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_indentless_sequence_entry
return self.process_empty_scalar(token.end_mark)
token = self.peek_token()
event = SequenceEndEvent(token.start_mark, token.start_mark)
self.state = self.states.pop()
return event
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
def parse_block_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_block_mapping_key()
def parse_block_mapping_key(self):
if self.check_token(KeyToken):
token = self.get_token()
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_value)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_value
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError("while parsing a block mapping", self.marks[-1],
"expected <block end>, but found %r" % token.id, token.start_mark)
token = self.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_block_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_key)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_block_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# Note that while production rules for both flow_sequence_entry and
# flow_mapping_entry are equal, their interpretations are different.
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
def parse_flow_sequence_first_entry(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_sequence_entry(first=True)
def parse_flow_sequence_entry(self, first=False):
if not self.check_token(FlowSequenceEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError("while parsing a flow sequence", self.marks[-1],
"expected ',' or ']', but got %r" % token.id, token.start_mark)
if self.check_token(KeyToken):
token = self.peek_token()
event = MappingStartEvent(None, None, True,
token.start_mark, token.end_mark,
flow_style=True)
self.state = self.parse_flow_sequence_entry_mapping_key
return event
elif not self.check_token(FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry)
return self.parse_flow_node()
token = self.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_sequence_entry_mapping_key(self):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_value
return self.process_empty_scalar(token.end_mark)
def parse_flow_sequence_entry_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_end)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_end
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_sequence_entry_mapping_end
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_sequence_entry_mapping_end(self):
self.state = self.parse_flow_sequence_entry
token = self.peek_token()
return MappingEndEvent(token.start_mark, token.start_mark)
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
def parse_flow_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_mapping_key(first=True)
def parse_flow_mapping_key(self, first=False):
if not self.check_token(FlowMappingEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError("while parsing a flow mapping", self.marks[-1],
"expected ',' or '}', but got %r" % token.id, token.start_mark)
if self.check_token(KeyToken):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_value
return self.process_empty_scalar(token.end_mark)
elif not self.check_token(FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_empty_value)
return self.parse_flow_node()
token = self.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_key)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_mapping_empty_value(self):
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(self.peek_token().start_mark)
def process_empty_scalar(self, mark):
return ScalarEvent(None, None, (True, False), '', mark, mark)
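The parser above is a push-down state machine: `self.state` holds the next production to run, `self.states` stacks continuations to return to when a nested construct finishes, and each call to the current state emits exactly one event. A toy sketch of the same control pattern (hypothetical event names, no tokens):

```python
class ToyStateMachine:
    """Mimics the Parser's state/states mechanics in miniature."""
    def __init__(self):
        self.states = []               # stack of continuations
        self.state = self.start        # next production to run

    def start(self):
        self.states.append(self.end)   # come back to end() after the middle
        self.state = self.middle
        return 'START'

    def middle(self):
        self.state = self.states.pop() # pop the continuation, like states.pop()
        return 'MIDDLE'

    def end(self):
        self.state = None              # stream exhausted
        return 'END'

    def events(self):
        out = []
        while self.state:
            out.append(self.state())
        return out


print(ToyStateMachine().events())  # ['START', 'MIDDLE', 'END']
```

This is why `dispose()` above resets `states` and `state`: the stored bound methods reference `self`, and clearing them breaks the reference cycle.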


@ -1,185 +0,0 @@
# This module contains abstractions for the input stream. You don't have to
# look further; there is no pretty code.
#
# We define two classes here.
#
# Mark(source, line, column)
# It's just a record and its only use is producing nice error messages.
# Parser does not use it for any other purposes.
#
# Reader(source, data)
# Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes:
# reader.peek(length=1) - return the next `length` characters
# reader.forward(length=1) - move the current position `length` characters forward.
# reader.index - the number of the current character.
# reader.line, reader.column - the line and the column of the current character.
__all__ = ['Reader', 'ReaderError']
from .error import YAMLError, Mark
import codecs, re
class ReaderError(YAMLError):
def __init__(self, name, position, character, encoding, reason):
self.name = name
self.character = character
self.position = position
self.encoding = encoding
self.reason = reason
def __str__(self):
if isinstance(self.character, bytes):
return "'%s' codec can't decode byte #x%02x: %s\n" \
" in \"%s\", position %d" \
% (self.encoding, ord(self.character), self.reason,
self.name, self.position)
else:
return "unacceptable character #x%04x: %s\n" \
" in \"%s\", position %d" \
% (self.character, self.reason,
self.name, self.position)
class Reader(object):
# Reader:
# - determines the data encoding and converts it to a unicode string,
# - checks if characters are in allowed range,
# - adds '\0' to the end.
# Reader accepts
# - a `bytes` object,
# - a `str` object,
# - a file-like object with its `read` method returning `str`,
# - a file-like object with its `read` method returning `bytes`.
# Yeah, it's ugly and slow.
def __init__(self, stream):
self.name = None
self.stream = None
self.stream_pointer = 0
self.eof = True
self.buffer = ''
self.pointer = 0
self.raw_buffer = None
self.raw_decode = None
self.encoding = None
self.index = 0
self.line = 0
self.column = 0
if isinstance(stream, str):
self.name = "<unicode string>"
self.check_printable(stream)
self.buffer = stream+'\0'
elif isinstance(stream, bytes):
self.name = "<byte string>"
self.raw_buffer = stream
self.determine_encoding()
else:
self.stream = stream
self.name = getattr(stream, 'name', "<file>")
self.eof = False
self.raw_buffer = None
self.determine_encoding()
def peek(self, index=0):
try:
return self.buffer[self.pointer+index]
except IndexError:
self.update(index+1)
return self.buffer[self.pointer+index]
def prefix(self, length=1):
if self.pointer+length >= len(self.buffer):
self.update(length)
return self.buffer[self.pointer:self.pointer+length]
def forward(self, length=1):
if self.pointer+length+1 >= len(self.buffer):
self.update(length+1)
while length:
ch = self.buffer[self.pointer]
self.pointer += 1
self.index += 1
if ch in '\n\x85\u2028\u2029' \
or (ch == '\r' and self.buffer[self.pointer] != '\n'):
self.line += 1
self.column = 0
elif ch != '\uFEFF':
self.column += 1
length -= 1
def get_mark(self):
if self.stream is None:
return Mark(self.name, self.index, self.line, self.column,
self.buffer, self.pointer)
else:
return Mark(self.name, self.index, self.line, self.column,
None, None)
def determine_encoding(self):
while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2):
self.update_raw()
if isinstance(self.raw_buffer, bytes):
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
self.raw_decode = codecs.utf_16_le_decode
self.encoding = 'utf-16-le'
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
self.raw_decode = codecs.utf_16_be_decode
self.encoding = 'utf-16-be'
else:
self.raw_decode = codecs.utf_8_decode
self.encoding = 'utf-8'
self.update(1)
NON_PRINTABLE = re.compile('[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD\U00010000-\U0010ffff]')
def check_printable(self, data):
match = self.NON_PRINTABLE.search(data)
if match:
character = match.group()
position = self.index+(len(self.buffer)-self.pointer)+match.start()
raise ReaderError(self.name, position, ord(character),
'unicode', "special characters are not allowed")
def update(self, length):
if self.raw_buffer is None:
return
self.buffer = self.buffer[self.pointer:]
self.pointer = 0
while len(self.buffer) < length:
if not self.eof:
self.update_raw()
if self.raw_decode is not None:
try:
data, converted = self.raw_decode(self.raw_buffer,
'strict', self.eof)
except UnicodeDecodeError as exc:
character = self.raw_buffer[exc.start]
if self.stream is not None:
position = self.stream_pointer-len(self.raw_buffer)+exc.start
else:
position = exc.start
raise ReaderError(self.name, position, character,
exc.encoding, exc.reason)
else:
data = self.raw_buffer
converted = len(data)
self.check_printable(data)
self.buffer += data
self.raw_buffer = self.raw_buffer[converted:]
if self.eof:
self.buffer += '\0'
self.raw_buffer = None
break
def update_raw(self, size=4096):
data = self.stream.read(size)
if self.raw_buffer is None:
self.raw_buffer = data
else:
self.raw_buffer += data
self.stream_pointer += len(data)
if not data:
self.eof = True


@ -1,389 +0,0 @@
__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
'RepresenterError']
from .error import *
from .nodes import *
import datetime, copyreg, types, base64, collections
class RepresenterError(YAMLError):
pass
class BaseRepresenter:
yaml_representers = {}
yaml_multi_representers = {}
def __init__(self, default_style=None, default_flow_style=False, sort_keys=True):
self.default_style = default_style
self.sort_keys = sort_keys
self.default_flow_style = default_flow_style
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
def represent(self, data):
node = self.represent_data(data)
self.serialize(node)
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
def represent_data(self, data):
if self.ignore_aliases(data):
self.alias_key = None
else:
self.alias_key = id(data)
if self.alias_key is not None:
if self.alias_key in self.represented_objects:
node = self.represented_objects[self.alias_key]
#if node is None:
# raise RepresenterError("recursive objects are not allowed: %r" % data)
return node
#self.represented_objects[alias_key] = None
self.object_keeper.append(data)
data_types = type(data).__mro__
if data_types[0] in self.yaml_representers:
node = self.yaml_representers[data_types[0]](self, data)
else:
for data_type in data_types:
if data_type in self.yaml_multi_representers:
node = self.yaml_multi_representers[data_type](self, data)
break
else:
if None in self.yaml_multi_representers:
node = self.yaml_multi_representers[None](self, data)
elif None in self.yaml_representers:
node = self.yaml_representers[None](self, data)
else:
node = ScalarNode(None, str(data))
#if alias_key is not None:
# self.represented_objects[alias_key] = node
return node
@classmethod
def add_representer(cls, data_type, representer):
if not 'yaml_representers' in cls.__dict__:
cls.yaml_representers = cls.yaml_representers.copy()
cls.yaml_representers[data_type] = representer
@classmethod
def add_multi_representer(cls, data_type, representer):
if not 'yaml_multi_representers' in cls.__dict__:
cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
cls.yaml_multi_representers[data_type] = representer
def represent_scalar(self, tag, value, style=None):
if style is None:
style = self.default_style
node = ScalarNode(tag, value, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
def represent_sequence(self, tag, sequence, flow_style=None):
value = []
node = SequenceNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
for item in sequence:
node_item = self.represent_data(item)
if not (isinstance(node_item, ScalarNode) and not node_item.style):
best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_mapping(self, tag, mapping, flow_style=None):
value = []
node = MappingNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
if hasattr(mapping, 'items'):
mapping = list(mapping.items())
if self.sort_keys:
try:
mapping = sorted(mapping)
except TypeError:
pass
for item_key, item_value in mapping:
node_key = self.represent_data(item_key)
node_value = self.represent_data(item_value)
if not (isinstance(node_key, ScalarNode) and not node_key.style):
best_style = False
if not (isinstance(node_value, ScalarNode) and not node_value.style):
best_style = False
value.append((node_key, node_value))
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def ignore_aliases(self, data):
return False
class SafeRepresenter(BaseRepresenter):
def ignore_aliases(self, data):
if data is None:
return True
if isinstance(data, tuple) and data == ():
return True
if isinstance(data, (str, bytes, bool, int, float)):
return True
def represent_none(self, data):
return self.represent_scalar('tag:yaml.org,2002:null', 'null')
def represent_str(self, data):
return self.represent_scalar('tag:yaml.org,2002:str', data)
def represent_binary(self, data):
if hasattr(base64, 'encodebytes'):
data = base64.encodebytes(data).decode('ascii')
else:
data = base64.encodestring(data).decode('ascii')
return self.represent_scalar('tag:yaml.org,2002:binary', data, style='|')
def represent_bool(self, data):
if data:
value = 'true'
else:
value = 'false'
return self.represent_scalar('tag:yaml.org,2002:bool', value)
def represent_int(self, data):
return self.represent_scalar('tag:yaml.org,2002:int', str(data))
inf_value = 1e300
while repr(inf_value) != repr(inf_value*inf_value):
inf_value *= inf_value
def represent_float(self, data):
if data != data or (data == 0.0 and data == 1.0):  # NaN check: NaN != NaN; the second test guards platforms where NaN compares equal
value = '.nan'
elif data == self.inf_value:
value = '.inf'
elif data == -self.inf_value:
value = '-.inf'
else:
value = repr(data).lower()
# Note that in some cases `repr(data)` represents a float number
# without the decimal parts. For instance:
# >>> repr(1e17)
# '1e17'
# Unfortunately, this is not a valid float representation according
# to the definition of the `!!float` tag. We fix this by adding
# '.0' before the 'e' symbol.
if '.' not in value and 'e' in value:
value = value.replace('e', '.0e', 1)
return self.represent_scalar('tag:yaml.org,2002:float', value)
def represent_list(self, data):
#pairs = (len(data) > 0 and isinstance(data, list))
#if pairs:
# for item in data:
# if not isinstance(item, tuple) or len(item) != 2:
# pairs = False
# break
#if not pairs:
return self.represent_sequence('tag:yaml.org,2002:seq', data)
#value = []
#for item_key, item_value in data:
# value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
# [(item_key, item_value)]))
#return SequenceNode(u'tag:yaml.org,2002:pairs', value)
def represent_dict(self, data):
return self.represent_mapping('tag:yaml.org,2002:map', data)
def represent_set(self, data):
value = {}
for key in data:
value[key] = None
return self.represent_mapping('tag:yaml.org,2002:set', value)
def represent_date(self, data):
value = data.isoformat()
return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
def represent_datetime(self, data):
value = data.isoformat(' ')
return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
def represent_yaml_object(self, tag, data, cls, flow_style=None):
if hasattr(data, '__getstate__'):
state = data.__getstate__()
else:
state = data.__dict__.copy()
return self.represent_mapping(tag, state, flow_style=flow_style)
def represent_undefined(self, data):
raise RepresenterError("cannot represent an object", data)
SafeRepresenter.add_representer(type(None),
SafeRepresenter.represent_none)
SafeRepresenter.add_representer(str,
SafeRepresenter.represent_str)
SafeRepresenter.add_representer(bytes,
SafeRepresenter.represent_binary)
SafeRepresenter.add_representer(bool,
SafeRepresenter.represent_bool)
SafeRepresenter.add_representer(int,
SafeRepresenter.represent_int)
SafeRepresenter.add_representer(float,
SafeRepresenter.represent_float)
SafeRepresenter.add_representer(list,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(tuple,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(dict,
SafeRepresenter.represent_dict)
SafeRepresenter.add_representer(set,
SafeRepresenter.represent_set)
SafeRepresenter.add_representer(datetime.date,
SafeRepresenter.represent_date)
SafeRepresenter.add_representer(datetime.datetime,
SafeRepresenter.represent_datetime)
SafeRepresenter.add_representer(None,
SafeRepresenter.represent_undefined)
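The registry calls above rely on the copy-on-write behavior in `add_representer`: the first time a subclass registers a representer, the inherited table is copied into the subclass's own `__dict__`, so the parent class's registry is never mutated. A minimal sketch of that pattern (illustrative names, not the PyYAML classes):

```python
# Copy-on-write class-level registry, as used by add_representer above:
# a subclass copies the inherited table on first registration, leaving
# the parent class's table untouched.
class Base:
    representers = {}

    @classmethod
    def add_representer(cls, data_type, representer):
        if 'representers' not in cls.__dict__:
            cls.representers = cls.representers.copy()
        cls.representers[data_type] = representer

class Child(Base):
    pass

Child.add_representer(int, repr)
print(int in Child.representers)  # True
print(int in Base.representers)   # False: parent registry untouched
```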
class Representer(SafeRepresenter):
def represent_complex(self, data):
if data.imag == 0.0:
data = '%r' % data.real
elif data.real == 0.0:
data = '%rj' % data.imag
elif data.imag > 0:
data = '%r+%rj' % (data.real, data.imag)
else:
data = '%r%rj' % (data.real, data.imag)
return self.represent_scalar('tag:yaml.org,2002:python/complex', data)
def represent_tuple(self, data):
return self.represent_sequence('tag:yaml.org,2002:python/tuple', data)
def represent_name(self, data):
name = '%s.%s' % (data.__module__, data.__name__)
return self.represent_scalar('tag:yaml.org,2002:python/name:'+name, '')
def represent_module(self, data):
return self.represent_scalar(
'tag:yaml.org,2002:python/module:'+data.__name__, '')
def represent_object(self, data):
# We use __reduce__ API to save the data. data.__reduce__ returns
# a tuple of length 2-5:
# (function, args, state, listitems, dictitems)
# For reconstructing, we call function(*args), then set its state,
# listitems, and dictitems if they are not None.
# A special case is when function.__name__ == '__newobj__'. In this
# case we create the object with args[0].__new__(*args).
# Another special case is when __reduce__ returns a string - we don't
# support it.
# We produce a !!python/object, !!python/object/new or
# !!python/object/apply node.
cls = type(data)
if cls in copyreg.dispatch_table:
reduce = copyreg.dispatch_table[cls](data)
elif hasattr(data, '__reduce_ex__'):
reduce = data.__reduce_ex__(2)
elif hasattr(data, '__reduce__'):
reduce = data.__reduce__()
else:
raise RepresenterError("cannot represent an object", data)
reduce = (list(reduce)+[None]*5)[:5]
function, args, state, listitems, dictitems = reduce
args = list(args)
if state is None:
state = {}
if listitems is not None:
listitems = list(listitems)
if dictitems is not None:
dictitems = dict(dictitems)
if function.__name__ == '__newobj__':
function = args[0]
args = args[1:]
tag = 'tag:yaml.org,2002:python/object/new:'
newobj = True
else:
tag = 'tag:yaml.org,2002:python/object/apply:'
newobj = False
function_name = '%s.%s' % (function.__module__, function.__name__)
if not args and not listitems and not dictitems \
and isinstance(state, dict) and newobj:
return self.represent_mapping(
'tag:yaml.org,2002:python/object:'+function_name, state)
if not listitems and not dictitems \
and isinstance(state, dict) and not state:
return self.represent_sequence(tag+function_name, args)
value = {}
if args:
value['args'] = args
if state or not isinstance(state, dict):
value['state'] = state
if listitems:
value['listitems'] = listitems
if dictitems:
value['dictitems'] = dictitems
return self.represent_mapping(tag+function_name, value)
def represent_ordered_dict(self, data):
# Provide uniform representation across different Python versions.
data_type = type(data)
tag = 'tag:yaml.org,2002:python/object/apply:%s.%s' \
% (data_type.__module__, data_type.__name__)
items = [[key, value] for key, value in data.items()]
return self.represent_sequence(tag, [items])
Representer.add_representer(complex,
Representer.represent_complex)
Representer.add_representer(tuple,
Representer.represent_tuple)
Representer.add_multi_representer(type,
Representer.represent_name)
Representer.add_representer(collections.OrderedDict,
Representer.represent_ordered_dict)
Representer.add_representer(types.FunctionType,
Representer.represent_name)
Representer.add_representer(types.BuiltinFunctionType,
Representer.represent_name)
Representer.add_representer(types.ModuleType,
Representer.represent_module)
Representer.add_multi_representer(object,
Representer.represent_object)
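The comment inside `represent_float` above notes that `repr()` may omit the decimal part of a float, producing output that is not a valid YAML `!!float`. A small sketch of that normalization in isolation (`yaml_float_repr` is a hypothetical helper name, not a PyYAML function):

```python
# repr() can yield an exponent form with no decimal point (e.g. '1e+17'),
# which the !!float tag does not accept; insert '.0' before the exponent.
def yaml_float_repr(data: float) -> str:
    value = repr(data).lower()
    if '.' not in value and 'e' in value:
        value = value.replace('e', '.0e', 1)
    return value

print(yaml_float_repr(1e17))  # exponent form gains a '.0'
print(yaml_float_repr(0.25))  # already has a decimal point; unchanged
```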


@ -1,227 +0,0 @@
__all__ = ['BaseResolver', 'Resolver']
from .error import *
from .nodes import *
import re
class ResolverError(YAMLError):
pass
class BaseResolver:
DEFAULT_SCALAR_TAG = 'tag:yaml.org,2002:str'
DEFAULT_SEQUENCE_TAG = 'tag:yaml.org,2002:seq'
DEFAULT_MAPPING_TAG = 'tag:yaml.org,2002:map'
yaml_implicit_resolvers = {}
yaml_path_resolvers = {}
def __init__(self):
self.resolver_exact_paths = []
self.resolver_prefix_paths = []
@classmethod
def add_implicit_resolver(cls, tag, regexp, first):
if 'yaml_implicit_resolvers' not in cls.__dict__:
implicit_resolvers = {}
for key in cls.yaml_implicit_resolvers:
implicit_resolvers[key] = cls.yaml_implicit_resolvers[key][:]
cls.yaml_implicit_resolvers = implicit_resolvers
if first is None:
first = [None]
for ch in first:
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
@classmethod
def add_path_resolver(cls, tag, path, kind=None):
# Note: `add_path_resolver` is experimental. The API could be changed.
# `path` is a pattern that is matched against the path from the
# root to the node that is being considered. Path elements are
# tuples `(node_check, index_check)`. `node_check` is a node class:
# `ScalarNode`, `SequenceNode`, `MappingNode` or `None`. `None`
# matches any kind of a node. `index_check` could be `None`, a boolean
# value, a string value, or a number. `None` and `False` match against
# any _value_ of sequence and mapping nodes. `True` matches against
# any _key_ of a mapping node. A string `index_check` matches against
# a mapping value that corresponds to a scalar key whose content is
# equal to the `index_check` value. An integer `index_check` matches
# against a sequence value with the index equal to `index_check`.
if 'yaml_path_resolvers' not in cls.__dict__:
cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
new_path = []
for element in path:
if isinstance(element, (list, tuple)):
if len(element) == 2:
node_check, index_check = element
elif len(element) == 1:
node_check = element[0]
index_check = True
else:
raise ResolverError("Invalid path element: %s" % element)
else:
node_check = None
index_check = element
if node_check is str:
node_check = ScalarNode
elif node_check is list:
node_check = SequenceNode
elif node_check is dict:
node_check = MappingNode
elif node_check not in [ScalarNode, SequenceNode, MappingNode] \
and not isinstance(node_check, str) \
and node_check is not None:
raise ResolverError("Invalid node checker: %s" % node_check)
if not isinstance(index_check, (str, int)) \
and index_check is not None:
raise ResolverError("Invalid index checker: %s" % index_check)
new_path.append((node_check, index_check))
if kind is str:
kind = ScalarNode
elif kind is list:
kind = SequenceNode
elif kind is dict:
kind = MappingNode
elif kind not in [ScalarNode, SequenceNode, MappingNode] \
and kind is not None:
raise ResolverError("Invalid node kind: %s" % kind)
cls.yaml_path_resolvers[tuple(new_path), kind] = tag
def descend_resolver(self, current_node, current_index):
if not self.yaml_path_resolvers:
return
exact_paths = {}
prefix_paths = []
if current_node:
depth = len(self.resolver_prefix_paths)
for path, kind in self.resolver_prefix_paths[-1]:
if self.check_resolver_prefix(depth, path, kind,
current_node, current_index):
if len(path) > depth:
prefix_paths.append((path, kind))
else:
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
else:
for path, kind in self.yaml_path_resolvers:
if not path:
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
else:
prefix_paths.append((path, kind))
self.resolver_exact_paths.append(exact_paths)
self.resolver_prefix_paths.append(prefix_paths)
def ascend_resolver(self):
if not self.yaml_path_resolvers:
return
self.resolver_exact_paths.pop()
self.resolver_prefix_paths.pop()
def check_resolver_prefix(self, depth, path, kind,
current_node, current_index):
node_check, index_check = path[depth-1]
if isinstance(node_check, str):
if current_node.tag != node_check:
return
elif node_check is not None:
if not isinstance(current_node, node_check):
return
if index_check is True and current_index is not None:
return
if (index_check is False or index_check is None) \
and current_index is None:
return
if isinstance(index_check, str):
if not (isinstance(current_index, ScalarNode)
and index_check == current_index.value):
return
elif isinstance(index_check, int) and not isinstance(index_check, bool):
if index_check != current_index:
return
return True
def resolve(self, kind, value, implicit):
if kind is ScalarNode and implicit[0]:
if value == '':
resolvers = self.yaml_implicit_resolvers.get('', [])
else:
resolvers = self.yaml_implicit_resolvers.get(value[0], [])
wildcard_resolvers = self.yaml_implicit_resolvers.get(None, [])
for tag, regexp in resolvers + wildcard_resolvers:
if regexp.match(value):
return tag
implicit = implicit[1]
if self.yaml_path_resolvers:
exact_paths = self.resolver_exact_paths[-1]
if kind in exact_paths:
return exact_paths[kind]
if None in exact_paths:
return exact_paths[None]
if kind is ScalarNode:
return self.DEFAULT_SCALAR_TAG
elif kind is SequenceNode:
return self.DEFAULT_SEQUENCE_TAG
elif kind is MappingNode:
return self.DEFAULT_MAPPING_TAG
class Resolver(BaseResolver):
pass
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:bool',
re.compile(r'''^(?:yes|Yes|YES|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF)$''', re.X),
list('yYnNtTfFoO'))
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:float',
re.compile(r'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
|\.[0-9][0-9_]*(?:[eE][-+][0-9]+)?
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
|[-+]?\.(?:inf|Inf|INF)
|\.(?:nan|NaN|NAN))$''', re.X),
list('-+0123456789.'))
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:int',
re.compile(r'''^(?:[-+]?0b[0-1_]+
|[-+]?0[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
list('-+0123456789'))
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:merge',
re.compile(r'^(?:<<)$'),
['<'])
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:null',
re.compile(r'''^(?: ~
|null|Null|NULL
| )$''', re.X),
['~', 'n', 'N', ''])
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:timestamp',
re.compile(r'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
(?:[Tt]|[ \t]+)[0-9][0-9]?
:[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
(?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
list('0123456789'))
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:value',
re.compile(r'^(?:=)$'),
['='])
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
Resolver.add_implicit_resolver(
'tag:yaml.org,2002:yaml',
re.compile(r'^(?:!|&|\*)$'),
list('!&*'))
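The registrations above show the dispatch strategy `resolve` uses for plain scalars: candidate resolvers are bucketed by the scalar's first character, then tried in order until a regexp matches, falling back to the default `str` tag. A minimal sketch of that lookup, with a toy two-entry table rather than the real registry:

```python
import re

# Implicit resolvers are grouped by the first character of the scalar
# value; resolution tries each (tag, regexp) pair in registration order.
resolvers = {}

def add_implicit_resolver(tag, regexp, first):
    for ch in first:
        resolvers.setdefault(ch, []).append((tag, regexp))

add_implicit_resolver('tag:yaml.org,2002:bool',
                      re.compile(r'^(?:true|false)$'), list('tf'))
add_implicit_resolver('tag:yaml.org,2002:int',
                      re.compile(r'^-?[0-9]+$'), list('-0123456789'))

def resolve_scalar(value, default='tag:yaml.org,2002:str'):
    for tag, regexp in resolvers.get(value[:1], []):
        if regexp.match(value):
            return tag
    return default

print(resolve_scalar('42'))     # int tag
print(resolve_scalar('true'))   # bool tag
print(resolve_scalar('hello'))  # falls back to the str tag
```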

File diff suppressed because it is too large.


@ -1,111 +0,0 @@
__all__ = ['Serializer', 'SerializerError']
from .error import YAMLError
from .events import *
from .nodes import *
class SerializerError(YAMLError):
pass
class Serializer:
ANCHOR_TEMPLATE = 'id%03d'
def __init__(self, encoding=None,
explicit_start=None, explicit_end=None, version=None, tags=None):
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
self.use_version = version
self.use_tags = tags
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
self.closed = None
def open(self):
if self.closed is None:
self.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError("serializer is closed")
else:
raise SerializerError("serializer is already opened")
def close(self):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif not self.closed:
self.emit(StreamEndEvent())
self.closed = True
#def __del__(self):
# self.close()
def serialize(self, node):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif self.closed:
raise SerializerError("serializer is closed")
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
version=self.use_version, tags=self.use_tags))
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
self.anchors[node] = None
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
def serialize_node(self, node, parent, index):
alias = self.anchors[node]
if node in self.serialized_nodes:
self.emit(AliasEvent(alias))
else:
self.serialized_nodes[node] = True
self.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolve(ScalarNode, node.value, (False, True))
implicit = (node.tag == detected_tag), (node.tag == default_tag)
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
style=node.style))
elif isinstance(node, SequenceNode):
implicit = (node.tag
== self.resolve(SequenceNode, node.value, True))
self.emit(SequenceStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emit(SequenceEndEvent())
elif isinstance(node, MappingNode):
implicit = (node.tag
== self.resolve(MappingNode, node.value, True))
self.emit(MappingStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emit(MappingEndEvent())
self.ascend_resolver()
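`anchor_node` above performs a two-pass trick: the first visit records a node with anchor `None`, and a second visit (i.e. the node is shared) upgrades it to a generated name from `ANCHOR_TEMPLATE`. A condensed sketch of that idea over a flat node list (a simplification of the recursive walk):

```python
# First sighting of a node marks it None; a repeat sighting means the
# node is shared and gets a generated anchor name.
ANCHOR_TEMPLATE = 'id%03d'

def assign_anchors(nodes):
    anchors = {}
    last_id = 0
    for node in nodes:
        if node in anchors:
            if anchors[node] is None:
                last_id += 1
                anchors[node] = ANCHOR_TEMPLATE % last_id
        else:
            anchors[node] = None
    return anchors

shared = ('scalar', 'common')
anchors = assign_anchors([shared, ('scalar', 'unique'), shared])
print(anchors[shared])  # seen twice, so it gets 'id001'
```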


@ -1,104 +0,0 @@
class Token(object):
def __init__(self, start_mark, end_mark):
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
attributes = [key for key in self.__dict__
if not key.endswith('_mark')]
attributes.sort()
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
return '%s(%s)' % (self.__class__.__name__, arguments)
#class BOMToken(Token):
# id = '<byte order mark>'
class DirectiveToken(Token):
id = '<directive>'
def __init__(self, name, value, start_mark, end_mark):
self.name = name
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class DocumentStartToken(Token):
id = '<document start>'
class DocumentEndToken(Token):
id = '<document end>'
class StreamStartToken(Token):
id = '<stream start>'
def __init__(self, start_mark=None, end_mark=None,
encoding=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.encoding = encoding
class StreamEndToken(Token):
id = '<stream end>'
class BlockSequenceStartToken(Token):
id = '<block sequence start>'
class BlockMappingStartToken(Token):
id = '<block mapping start>'
class BlockEndToken(Token):
id = '<block end>'
class FlowSequenceStartToken(Token):
id = '['
class FlowMappingStartToken(Token):
id = '{'
class FlowSequenceEndToken(Token):
id = ']'
class FlowMappingEndToken(Token):
id = '}'
class KeyToken(Token):
id = '?'
class ValueToken(Token):
id = ':'
class BlockEntryToken(Token):
id = '-'
class FlowEntryToken(Token):
id = ','
class AliasToken(Token):
id = '<alias>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class AnchorToken(Token):
id = '<anchor>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class TagToken(Token):
id = '<tag>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class ScalarToken(Token):
id = '<scalar>'
def __init__(self, value, plain, start_mark, end_mark, style=None):
self.value = value
self.plain = plain
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style
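`Token.__repr__` above deliberately skips any attribute ending in `_mark`, so position bookkeeping does not clutter debug output. A self-contained sketch reproducing that behavior (copied shape, minimal subclass):

```python
# __repr__ lists attributes sorted by name, excluding *_mark position
# attributes, yielding compact debug output like ScalarToken(value=...).
class Token:
    def __init__(self, start_mark, end_mark):
        self.start_mark = start_mark
        self.end_mark = end_mark

    def __repr__(self):
        attributes = sorted(k for k in self.__dict__
                            if not k.endswith('_mark'))
        arguments = ', '.join('%s=%r' % (k, getattr(self, k))
                              for k in attributes)
        return '%s(%s)' % (self.__class__.__name__, arguments)

class ScalarToken(Token):
    id = '<scalar>'
    def __init__(self, value, plain, start_mark, end_mark, style=None):
        self.value = value
        self.plain = plain
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.style = style

tok = ScalarToken('hello', True, None, None)
print(repr(tok))  # the two *_mark attributes are hidden
```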


@ -15,5 +15,5 @@ class Base(Controller):
(['-v', '--version'], {'action': 'version', 'version': BANNER}),
]
def _default(self):
def _default(self) -> None:
self.app.args.print_help()


@ -1,10 +1,9 @@
import os
import sys
sys.path.append(os.path.join(os.path.dirname(__file__), 'contrib'))
from __future__ import annotations
from typing import Optional, List
from cement import App, CaughtSignal # noqa: E402
from .controllers.base import Base # noqa: E402
from cement.core.exc import FrameworkError
class CementApp(App):
@ -29,20 +28,27 @@ class CementApp(App):
class CementTestApp(CementApp):
class Meta:
argv = []
config_files = []
argv: List[str] = []
config_files: List[str] = []
exit_on_close = False
def main(argv=None):
def main(argv: Optional[List[str]] = None) -> None:
# Issue #679: https://github.com/datafolklabs/cement/issues/679
try:
import yaml, jinja2 # type: ignore # noqa: F401 E401
except ModuleNotFoundError: # pragma: nocover
raise FrameworkError('Cement CLI Dependencies are missing! Install cement[cli] extras ' +
'package to resolve -> pip install cement[cli]')
with CementApp() as app:
try:
app.run()
except AssertionError as e: # pragma: nocover
print('AssertionError > %s' % e.args[0]) # pragma: nocover
print(f'AssertionError > {e.args[0]}') # pragma: nocover
app.exit_code = 1 # pragma: nocover
except CaughtSignal as e: # pragma: nocover
print('\n%s' % e) # pragma: nocover
print(f'\n{e}') # pragma: nocover
app.exit_code = 0 # pragma: nocover


@ -43,7 +43,7 @@ class Base(Controller):
### do something with arguments
if self.app.pargs.foo is not None:
print('Foo Argument > %s' % self.app.pargs.foo)
print(f'Foo Argument > {self.app.pargs.foo}')
class MyApp(App):
@ -67,7 +67,7 @@ def main():
app.run()
except CaughtSignal as e:
# Default Cement signals are SIGINT and SIGTERM, exit 0 (non-error)
print('\n%s' % e)
print(f'\n{e}')
app.exit_code = 0


@ -3,10 +3,10 @@ from cement import Controller, ex
from cement.utils.version import get_version_banner
from ..core.version import get_version
VERSION_BANNER = """
A Simple TODO Application %s
%s
""" % (get_version(), get_version_banner())
VERSION_BANNER = f"""
A Simple TODO Application {get_version()}
{get_version_banner()}
"""
class Base(Controller):


@ -26,7 +26,7 @@ class Items(Controller):
def create(self):
text = self.app.pargs.item_text
now = strftime("%Y-%m-%d %H:%M:%S")
self.app.log.info('creating todo item: %s' % text)
self.app.log.info(f'creating todo item: {text}')
item = {
'timestamp': now,
@ -52,7 +52,7 @@ class Items(Controller):
id = int(self.app.pargs.item_id)
text = self.app.pargs.item_text
now = strftime("%Y-%m-%d %H:%M:%S")
self.app.log.info('updating todo item: %s - %s' % (id, text))
self.app.log.info(f'updating todo item: {id} - {text}')
item = {
'timestamp': now,
@ -76,14 +76,14 @@ class Items(Controller):
item['timestamp'] = now
item['state'] = 'complete'
self.app.log.info('completing item id: %s - %s' % (id, item['text']))
self.app.log.info(f"completing item id: {id} - {item['text']}")
self.app.db.update(item, doc_ids=[id])
msg = """
msg = f"""
Congratulations! The following item has been completed:
%s - %s
""" % (id, item['text'])
{id} - {item['text']}
"""
self.app.mail.send(msg,
subject='TODO Item Complete',
@ -101,5 +101,5 @@ class Items(Controller):
)
def delete(self):
id = int(self.app.pargs.item_id)
self.app.log.info('deleting todo item id: %s' % id)
self.app.log.info(f'deleting todo item id: {id}')
self.app.db.remove(doc_ids=[id])


@ -10,4 +10,4 @@ class TodoError(Exception):
return self.msg
def __repr__(self):
return "<TodoError - %s>" % self.msg
return f"<TodoError - {self.msg}>"


@ -22,7 +22,7 @@ def extend_tinydb(app):
# ensure that we expand the full path
db_file = fs.abspath(db_file)
app.log.info('tinydb database file is: %s' % db_file)
app.log.info(f'tinydb database file is: {db_file}')
# ensure our parent directory exists
db_dir = os.path.dirname(db_file)
@ -88,7 +88,7 @@ def main():
app.run()
except AssertionError as e:
print('AssertionError > %s' % e.args[0])
print(f'AssertionError > {e.args[0]}')
app.exit_code = 1
if app.debug is True:
@ -96,7 +96,7 @@ def main():
traceback.print_exc()
except TodoError:
print('TodoError > %s' % e.args[0])
print(f'TodoError > {e.args[0]}')
app.exit_code = 1
if app.debug is True:
@ -105,7 +105,7 @@ def main():
except CaughtSignal as e:
# Default Cement signals are SIGINT and SIGTERM, exit 0 (non-error)
print('\n%s' % e)
print(f'\n{e}')
app.exit_code = 0


@ -4,6 +4,7 @@ Cement core argument module.
"""
from abc import abstractmethod
from typing import Any, List
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
@ -20,7 +21,7 @@ class ArgumentInterface(Interface):
:class:`ArgumentHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Interface meta-data options."""
@ -28,7 +29,7 @@ class ArgumentInterface(Interface):
interface = 'argument'
@abstractmethod
def add_argument(self, *args, **kw):
def add_argument(self, *args: str, **kw: Any) -> None:
"""Add arguments to the parser.
This should be ``-o/--option`` or positional. Note that the interface
@ -60,7 +61,7 @@ class ArgumentInterface(Interface):
pass # pragma: nocover
@abstractmethod
def parse(self, *args):
def parse(self, *args: List[str]) -> object:
"""
Parse the argument list (i.e. ``sys.argv``). Can return any object as
long as its members contain those of the added arguments. For
@ -71,7 +72,7 @@ class ArgumentInterface(Interface):
args (list): A list of command line arguments
Returns:
object: A callable object whose member reprisent the available
object: A callable object whose members represent the available
arguments
"""
@ -82,4 +83,5 @@ class ArgumentHandler(ArgumentInterface, Handler):
"""Argument handler implementation"""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover


@ -1,3 +1,3 @@
"""Cement core backend module."""
VERSION = (3, 0, 10, 'final', 0) # pragma: nocover
VERSION = (3, 0, 15, 'final', 0) # pragma: nocover
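The `VERSION` tuple above follows the `(major, minor, patch, release, build)` layout. A hypothetical sketch of rendering it as a version string (`get_version` here is illustrative, not Cement's actual implementation, whose format may differ):

```python
# Render a (major, minor, patch, release, build) tuple as a string;
# non-'final' releases append a pre-release suffix.
VERSION = (3, 0, 15, 'final', 0)

def get_version(version=VERSION):
    base = '.'.join(str(part) for part in version[:3])
    if version[3] != 'final':
        base += f'-{version[3]}.{version[4]}'
    return base

print(get_version())                      # 3.0.15
print(get_version((3, 1, 0, 'alpha', 1))) # 3.1.0-alpha.1
```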


@ -1,6 +1,7 @@
"""Cement core cache module."""
from abc import abstractmethod
from typing import Any, Optional
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
@ -17,7 +18,7 @@ class CacheInterface(Interface):
:class:`CacheHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -25,7 +26,7 @@ class CacheInterface(Interface):
interface = 'cache'
@abstractmethod
def get(self, key, fallback=None):
def get(self, key: str, fallback: Any = None) -> Any:
"""
Get the value for a key in the cache.
@ -47,7 +48,7 @@ class CacheInterface(Interface):
pass # pragma: nocover
@abstractmethod
def set(self, key, value, time=None):
def set(self, key: str, value: Any, time: Optional[int] = None) -> None:
"""
Set the key/value in the cache for a set amount of ``time``.
@ -66,7 +67,7 @@ class CacheInterface(Interface):
pass # pragma: nocover
@abstractmethod
def delete(self, key):
def delete(self, key: str) -> bool:
"""
Deletes a key/value from the cache.
@ -81,7 +82,7 @@ class CacheInterface(Interface):
pass # pragma: nocover
@abstractmethod
def purge(self):
def purge(self) -> None:
"""
Clears all data from the cache.
@ -95,4 +96,5 @@ class CacheHandler(CacheInterface, Handler):
Cache handler implementation.
"""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover
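The interface above defines the cache contract (`get`/`set`/`delete`/`purge`, with an optional expiry `time` on `set`). A hypothetical in-memory handler sketching one way to satisfy it as a plain class (this is not Cement's actual memory cache extension):

```python
import time as _time

# Dict-backed cache matching the CacheInterface method signatures:
# values are stored with an optional monotonic-clock expiry.
class DictCache:
    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def get(self, key, fallback=None):
        item = self._data.get(key)
        if item is None:
            return fallback
        value, expires = item
        if expires is not None and _time.monotonic() >= expires:
            del self._data[key]  # lazily evict expired entries
            return fallback
        return value

    def set(self, key, value, time=None):
        expires = _time.monotonic() + time if time else None
        self._data[key] = (value, expires)

    def delete(self, key):
        return self._data.pop(key, None) is not None

    def purge(self):
        self._data.clear()

cache = DictCache()
cache.set('foo', 'bar')
print(cache.get('foo'))         # bar
print(cache.get('missing', 0))  # fallback value: 0
```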


@ -2,6 +2,7 @@
import os
from abc import abstractmethod
from typing import Any, Dict, List
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.fs import abspath
@ -19,7 +20,7 @@ class ConfigInterface(Interface):
:class:`ConfigHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -27,7 +28,7 @@ class ConfigInterface(Interface):
interface = 'config'
@abstractmethod
def parse_file(self, file_path):
def parse_file(self, file_path: str) -> bool:
"""
Parse config file settings from ``file_path``. Returns True if the
file existed, and was parsed successfully. Returns False otherwise.
@ -42,7 +43,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def keys(self, section):
def keys(self, section: str) -> List[str]:
"""
Return a list of configuration keys from ``section``.
@ -56,7 +57,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def get_sections(self):
def get_sections(self) -> List[str]:
"""
Return a list of configuration sections.
@ -67,7 +68,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def get_dict(self):
def get_dict(self) -> Dict[str, Any]:
"""
Return a dict of the entire configuration.
@ -77,7 +78,7 @@ class ConfigInterface(Interface):
"""
@abstractmethod
def get_section_dict(self, section):
def get_section_dict(self, section: str) -> Dict[str, Any]:
"""
Return a dict of configuration parameters for ``section``.
@ -92,7 +93,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def add_section(self, section):
def add_section(self, section: str) -> None:
"""
Add a new section if it doesn't already exist.
@ -106,7 +107,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def get(self, section, key):
def get(self, section: str, key: str) -> Any:
"""
Return a configuration value based on ``section.key``. Must honor
environment variables if they exist to override the config... for
@ -129,7 +130,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def set(self, section, key, value):
def set(self, section: str, key: str, value: Any) -> None:
"""
Set a configuration value based at ``section.key``.
@ -146,7 +147,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def merge(self, dict_obj, override=True):
def merge(self, dict_obj: dict, override: bool = True) -> None:
"""
Merges a dict object into the configuration.
@ -161,7 +162,7 @@ class ConfigInterface(Interface):
pass # pragma: nocover
@abstractmethod
def has_section(self, section):
def has_section(self, section: str) -> bool:
"""
Returns whether or not the section exists.
@ -183,8 +184,11 @@ class ConfigHandler(ConfigInterface, Handler):
"""
class Meta(Handler.Meta):
pass # pragma: nocover
@abstractmethod
def _parse_file(self, file_path):
def _parse_file(self, file_path: str) -> bool:
"""
Parse a configuration file at ``file_path`` and store it. This
function must be provided by the handler implementation (that is
@ -199,7 +203,7 @@ class ConfigHandler(ConfigInterface, Handler):
"""
pass # pragma: nocover
def parse_file(self, file_path):
def parse_file(self, file_path: str) -> bool:
"""
Ensure we are using the absolute/expanded path to ``file_path``, and
then call ``self._parse_file`` to parse config file settings from it,
@ -219,10 +223,8 @@ class ConfigHandler(ConfigInterface, Handler):
"""
file_path = abspath(file_path)
if os.path.exists(file_path):
LOG.debug("config file '%s' exists, loading settings..." %
file_path)
LOG.debug(f"config file '{file_path}' exists, loading settings...")
return self._parse_file(file_path)
else:
LOG.debug("config file '%s' does not exist, skipping..." %
file_path)
LOG.debug(f"config file '{file_path}' does not exist, skipping...")
return False
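The `parse_file` wrapper above normalizes the path, parses only if the file exists, and reports success as a boolean. A standalone sketch of that pattern (the `parser` callback stands in for `self._parse_file`):

```python
import os
import tempfile

# Expand the path first; parse only when the file exists and signal the
# outcome with a boolean, as ConfigHandler.parse_file does above.
def parse_file(file_path, parser):
    file_path = os.path.abspath(os.path.expanduser(file_path))
    if os.path.exists(file_path):
        parser(file_path)
        return True
    return False

seen = []
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('[app]\n')
    path = f.name
print(parse_file(path, seen.append))              # True
print(parse_file(path + '.missing', seen.append)) # False
os.unlink(path)
```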


@ -1,6 +1,8 @@
"""Cement core controller module."""
from __future__ import annotations
from abc import abstractmethod
from typing import Any, Union
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
@ -17,7 +19,7 @@ class ControllerInterface(Interface):
:class:`ControllerHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Interface meta-data."""
@ -25,7 +27,7 @@ class ControllerInterface(Interface):
interface = 'controller'
@abstractmethod
def _dispatch(self):
def _dispatch(self) -> Union[Any, None]:
"""
Reads the application object's data to dispatch a command from this
controller. For example, reading ``self.app.pargs`` to determine what
@ -45,4 +47,5 @@ class ControllerInterface(Interface):
class ControllerHandler(ControllerInterface, Handler):
"""Controller handler implementation."""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover
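``_dispatch()`` is the single abstract hook a controller must provide; a table-driven stand-in (names hypothetical, not Cement's API) shows the contract of returning a result or ``None``:

```python
from typing import Any, Callable, Dict, Optional


class TinyController:
    """Illustrative dispatcher: maps a command name to a callable."""

    def __init__(self) -> None:
        self._commands: Dict[str, Callable[[], Any]] = {}

    def command(self, name: str) -> Callable[[Callable[[], Any]], Callable[[], Any]]:
        # decorator to register a sub-command handler
        def wrap(func: Callable[[], Any]) -> Callable[[], Any]:
            self._commands[name] = func
            return func
        return wrap

    def _dispatch(self, name: str) -> Optional[Any]:
        # return the handler's result, or None when nothing matches
        func = self._commands.get(name)
        return func() if func is not None else None
```

The real ``_dispatch()`` reads ``self.app.pargs`` instead of taking a name, but the return contract is the same.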

View File

@ -8,7 +8,7 @@ DEPRECATIONS = {
}
def deprecate(deprecation_id: str):
def deprecate(deprecation_id: str) -> None:
deprecation_id = str(deprecation_id)
msg = DEPRECATIONS[deprecation_id]
total_msg = f"{msg}. See: https://docs.builtoncement.com/release-information/deprecations#{deprecation_id}" # noqa: E501
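The ``deprecate()`` helper looks up a registered message and appends a docs anchor. A self-contained sketch of that flow follows; the sample id/message and the emission via ``warnings.warn`` are assumptions, since the hunk does not show how ``total_msg`` is emitted:

```python
import warnings

# hypothetical registry entry; the real ids live in cement.core.deprecations
DEPRECATIONS = {
    '3.0.8-1': 'SomeFeature is deprecated',
}


def deprecate(deprecation_id: str) -> None:
    deprecation_id = str(deprecation_id)
    msg = DEPRECATIONS[deprecation_id]
    total_msg = (
        f'{msg}. See: https://docs.builtoncement.com/'
        f'release-information/deprecations#{deprecation_id}'
    )
    # assumed emission mechanism for this sketch
    warnings.warn(total_msg, DeprecationWarning, stacklevel=2)
```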

View File

@ -1,5 +1,7 @@
"""Cement core exceptions module."""
from typing import Any
class FrameworkError(Exception):
@ -11,11 +13,11 @@ class FrameworkError(Exception):
"""
def __init__(self, msg):
def __init__(self, msg: str) -> None:
Exception.__init__(self)
self.msg = msg
def __str__(self):
def __str__(self) -> str:
return self.msg
@ -38,8 +40,8 @@ class CaughtSignal(FrameworkError):
"""
def __init__(self, signum, frame):
msg = 'Caught signal %s' % signum
def __init__(self, signum: int, frame: Any) -> None:
msg = f'Caught signal {signum}'
super(CaughtSignal, self).__init__(msg)
self.signum = signum
self.frame = frame
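The typed signatures make the exception contract explicit: ``CaughtSignal`` carries the signal number and frame, and stringifies to the formatted message. A self-contained copy of the two classes as annotated above:

```python
from typing import Any


class FrameworkError(Exception):
    """General framework error; stringifies to its message."""

    def __init__(self, msg: str) -> None:
        Exception.__init__(self)
        self.msg = msg

    def __str__(self) -> str:
        return self.msg


class CaughtSignal(FrameworkError):
    """Raised by the signal handler, carrying signum and the interrupted frame."""

    def __init__(self, signum: int, frame: Any) -> None:
        super().__init__(f'Caught signal {signum}')
        self.signum = signum
        self.frame = frame
```

Callers typically wrap ``app.run()`` and branch on ``e.signum`` to decide how to shut down.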

View File

@ -1,7 +1,9 @@
"""Cement core extensions module."""
from __future__ import annotations
import sys
from abc import abstractmethod
from typing import Any, List, TYPE_CHECKING
from ..core import exc
from ..core.interface import Interface
from ..core.handler import Handler
@ -10,6 +12,10 @@ from ..utils.misc import minimal_logger
LOG = minimal_logger(__name__)
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
class ExtensionInterface(Interface):
"""
@ -19,15 +25,15 @@ class ExtensionInterface(Interface):
:class:`ExtensionHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
#: The string identifier of the interface.
interface = 'extension'
interface: str = 'extension'
@abstractmethod
def load_extension(self, ext_module):
def load_extension(self, ext_module: str) -> None:
"""
Load an extension whose module is ``ext_module``. For example,
``cement.ext.ext_json``.
@ -39,7 +45,7 @@ class ExtensionInterface(Interface):
pass # pragma: no cover
@abstractmethod
def load_extensions(self, ext_list):
def load_extensions(self, ext_list: List[str]) -> None:
"""
Load all extensions from ``ext_list``.
@ -61,7 +67,7 @@ class ExtensionHandler(ExtensionInterface, Handler):
"""
class Meta:
class Meta(Handler.Meta):
"""
Handler meta-data (can be passed as keyword arguments to the parent
@ -69,14 +75,14 @@ class ExtensionHandler(ExtensionInterface, Handler):
"""
#: The string identifier of the handler.
label = 'cement'
label: str = 'cement'
def __init__(self, **kw):
def __init__(self, **kw: Any) -> None:
super().__init__(**kw)
self.app = None
self._loaded_extensions = []
self.app: App = None # type: ignore
self._loaded_extensions: List[str] = []
def get_loaded_extensions(self):
def get_loaded_extensions(self) -> List[str]:
"""
Get all loaded extensions.
@ -86,7 +92,7 @@ class ExtensionHandler(ExtensionInterface, Handler):
"""
return self._loaded_extensions
def list(self):
def list(self) -> List[str]:
"""
Synonymous with ``get_loaded_extensions()``.
@ -96,7 +102,7 @@ class ExtensionHandler(ExtensionInterface, Handler):
"""
return self._loaded_extensions
def load_extension(self, ext_module):
def load_extension(self, ext_module: str) -> None:
"""
Given an extension module name, load or in other-words ``import`` the
extension.
@ -110,15 +116,15 @@ class ExtensionHandler(ExtensionInterface, Handler):
loaded.
"""
# If its not a full module path then prepend our default path
# If it's not a full module path then prepend our default path
if ext_module.find('.') == -1:
ext_module = 'cement.ext.ext_%s' % ext_module
ext_module = f'cement.ext.ext_{ext_module}'
if ext_module in self._loaded_extensions:
LOG.debug("framework extension '%s' already loaded" % ext_module)
LOG.debug(f"framework extension '{ext_module}' already loaded")
return
LOG.debug("loading the '%s' framework extension" % ext_module)
LOG.debug(f"loading the '{ext_module}' framework extension")
try:
if ext_module not in sys.modules:
__import__(ext_module, globals(), locals(), [], 0)
@ -132,7 +138,7 @@ class ExtensionHandler(ExtensionInterface, Handler):
except ImportError as e:
raise exc.FrameworkError(e.args[0])
def load_extensions(self, ext_list):
def load_extensions(self, ext_list: List[str]) -> None:
"""
Given a list of extension modules, iterate over the list and pass
individually to ``self.load_extension()``.
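The prefixing rule in ``load_extension()`` (shorthand names map into the ``cement.ext`` namespace, dotted paths pass through) is small enough to isolate:

```python
def normalize_ext_module(ext_module: str) -> str:
    # If it's not a dotted module path, prepend Cement's default namespace,
    # mirroring the first step of ExtensionHandler.load_extension()
    if ext_module.find('.') == -1:
        ext_module = f'cement.ext.ext_{ext_module}'
    return ext_module
```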

View File

@ -1,22 +1,30 @@
"""Cement core foundation module."""
from __future__ import annotations
import os
import platform
import signal
import sys
from importlib import reload as reload_module
from time import sleep
from typing import (IO, Any, Callable, Dict, List, Optional, TextIO, Tuple,
Type, Union, TYPE_CHECKING)
from ..core import (arg, cache, config, controller, exc, extension, log, mail,
meta, output, plugin, template)
from ..core.deprecations import deprecate
from ..core.handler import HandlerManager
from ..core.handler import Handler, HandlerManager
from ..core.hook import HookManager
from ..core.interface import InterfaceManager
from ..core.interface import Interface, InterfaceManager
from ..ext.ext_argparse import ArgparseController as Controller
from ..utils import fs, misc
from ..utils.misc import is_true, minimal_logger
if TYPE_CHECKING:
from types import FrameType, ModuleType, TracebackType # pragma: nocover
ArgparseArgumentType = Tuple[List[str], Dict[str, Any]]
join = os.path.join
@ -27,7 +35,7 @@ else:
SIGNALS = [signal.SIGTERM, signal.SIGINT, signal.SIGHUP]
def add_handler_override_options(app):
def add_handler_override_options(app: App) -> None:
"""
This is a ``post_setup`` hook that adds the handler override options to
the argument parser
@ -41,7 +49,7 @@ def add_handler_override_options(app):
for i in app._meta.handler_override_options:
if i not in app.interface.list():
LOG.debug("interface '%s'" % i +
LOG.debug(f"interface '{i}'" +
" is not defined, can not override handlers")
continue
@ -57,12 +65,12 @@ def add_handler_override_options(app):
# don't display the option if no handlers are overridable
if not len(choices) > 0:
LOG.debug("no handlers are overridable within the " +
"%s interface" % i)
f"{i} interface")
continue
# override things that we need to control
argument_kw = app._meta.handler_override_options[i][1]
argument_kw['dest'] = '%s_handler_override' % i
argument_kw['dest'] = f'{i}_handler_override'
argument_kw['action'] = 'store'
argument_kw['choices'] = choices
@ -72,7 +80,7 @@ def add_handler_override_options(app):
)
def handler_override(app):
def handler_override(app: App) -> None:
"""
This is a ``post_argument_parsing`` hook that overrides a configured
handler if defined in ``App.Meta.handler_override_options`` and
@ -86,20 +94,20 @@ def handler_override(app):
return
for i in app._meta.handler_override_options.keys():
if not hasattr(app.pargs, '%s_handler_override' % i):
if not hasattr(app.pargs, f'{i}_handler_override'):
continue # pragma: nocover
elif getattr(app.pargs, '%s_handler_override' % i) is None:
elif getattr(app.pargs, f'{i}_handler_override') is None:
continue # pragma: nocover
else:
# get the argument value from command line
argument = getattr(app.pargs, '%s_handler_override' % i)
setattr(app._meta, '%s_handler' % i, argument)
argument = getattr(app.pargs, f'{i}_handler_override')
setattr(app._meta, f'{i}_handler', argument)
# and then re-setup the handler
getattr(app, '_setup_%s_handler' % i)()
getattr(app, f'_setup_{i}_handler')()
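The naming convention tying the two hooks together — ``<interface>_handler_override`` as the parser ``dest``, ``<interface>_handler`` as the ``App.Meta`` attribute — can be checked in isolation; the helper names below are illustrative:

```python
from typing import Any, Dict, List, Tuple

# shape of App.Meta.handler_override_options entries
ArgparseArgumentType = Tuple[List[str], Dict[str, Any]]

handler_override_options: Dict[str, ArgparseArgumentType] = {
    'output': (['-o'], {'help': 'output handler'}),
}


def override_dest(interface: str) -> str:
    # add_handler_override_options() stores the parsed value under this dest
    return f'{interface}_handler_override'


def meta_attr(interface: str) -> str:
    # handler_override() then writes the chosen label back to this Meta attribute
    return f'{interface}_handler'
```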
def cement_signal_handler(signum, frame):
def cement_signal_handler(signum: int, frame: Optional[FrameType]) -> Any:
"""
Catch a signal, run the ``signal`` hook, and then raise an exception
allowing the app to handle logic elsewhere.
@ -112,16 +120,17 @@ def cement_signal_handler(signum, frame):
cement.core.exc.CaughtSignal: Raised, passing ``signum``, and ``frame``
"""
LOG.debug('Caught signal %s' % signum)
LOG.debug(f'Caught signal {signum}')
# FIXME: Maybe this isn't ideal... perhaps make
# App.Meta.signal_handler a decorator that takes the app object
# and wraps/returns the actual signal handler?
for f_global in frame.f_globals.values():
if isinstance(f_global, App):
app = f_global
for res in app.hook.run('signal', app, signum, frame):
pass # pragma: nocover
if frame:
for f_global in frame.f_globals.values():
if isinstance(f_global, App):
app = f_global
for res in app.hook.run('signal', app, signum, frame):
pass # pragma: nocover
raise exc.CaughtSignal(signum, frame)
@ -136,7 +145,7 @@ class App(meta.MetaMixin):
parent class).
"""
label = None
label: str = None # type: ignore
"""
The name of the application. This should be the common name as you
would see and use at the command line. For example ``helloworld``, or
@ -188,7 +197,7 @@ class App(meta.MetaMixin):
Extension used to identify application and plugin configuration files.
"""
config_files = None
config_files: list[str] = None # type: ignore
"""
List of config files to parse (appended to the builtin list of config
files defined by Cement).
@ -214,7 +223,7 @@ class App(meta.MetaMixin):
``App.Meta.config_file_suffix``.
"""
config_dirs = None
config_dirs: list[str] = None # type: ignore
"""
List of config directories to search config files (appended to the
builtin list of directories defined by Cement). For each directory
@ -242,14 +251,14 @@ class App(meta.MetaMixin):
``CementApp.Meta.config_file_suffix``.
"""
plugins = []
plugins: list[str] = []
"""
A list of plugins to load. This is generally considered bad practice
since plugins should be dynamically enabled/disabled via a plugin
config file.
"""
plugin_module = None
plugin_module: str = None # type: ignore
"""
A python package (dotted import path) where plugin code can be
loaded from. This is generally something like ``myapp.plugins``
@ -262,7 +271,7 @@ class App(meta.MetaMixin):
``<app_label>.plugins`` if not set.
"""
plugin_dirs = None
plugin_dirs: list[str] = None # type: ignore
"""
A list of directory paths where plugin code (modules) can be loaded
from (appended to the builtin list of directories defined by Cement).
@ -286,7 +295,7 @@ class App(meta.MetaMixin):
first has precedence.
"""
plugin_dir = None
plugin_dir: Optional[str] = None
"""
A directory path where plugin code (modules) can be loaded from.
By default, this setting is also overridden by the
@ -302,7 +311,7 @@ class App(meta.MetaMixin):
of default ``plugin_dirs`` defined by the app/developer.
"""
argv = None
argv: list[str] = None # type: ignore
"""
A list of arguments to use for parsing command line arguments
and options.
@ -312,7 +321,8 @@ class App(meta.MetaMixin):
``setup()``.
"""
core_handler_override_options = dict(
_choo_type = Dict[str, ArgparseArgumentType]
core_handler_override_options: _choo_type = dict(
output=(['-o'], dict(help='output handler')),
)
"""
@ -322,7 +332,7 @@ class App(meta.MetaMixin):
are merged together).
"""
handler_override_options = {}
handler_override_options: Dict[str, ArgparseArgumentType] = {}
"""
Dictionary of handler override options that will be added to the
argument parser, and allow the end-user to override handlers. Useful
@ -349,7 +359,7 @@ class App(meta.MetaMixin):
recommended as some extensions rely on this feature).
"""
config_section = None
config_section: str = None # type: ignore
"""
The base configuration section for the application.
@ -358,10 +368,10 @@ class App(meta.MetaMixin):
the name of the application).
"""
config_defaults = None
config_defaults: Dict[str, Any] = None # type: ignore
"""Default configuration dictionary. Must be of type ``dict``."""
meta_defaults = {}
meta_defaults: Dict[str, Any] = {}
"""
Default meta-data dictionary used to pass high level options from the
application down to handlers at the point they are registered by the
@ -395,7 +405,7 @@ class App(meta.MetaMixin):
for. Can be set to ``None`` to disable signal handling.
"""
signal_handler = cement_signal_handler
signal_handler: Callable = cement_signal_handler
"""A function that is called to handle any caught signals."""
config_handler = 'configparser'
@ -438,15 +448,15 @@ class App(meta.MetaMixin):
A handler class that implements the Template interface.
"""
cache_handler = None
cache_handler: Optional[str] = None
"""
A handler class that implements the Cache interface.
"""
extensions = []
extensions: List[str] = []
"""List of additional framework extensions to load."""
bootstrap = None
bootstrap: Optional[str] = None
"""
A bootstrapping module to load after app creation, and before
``app.setup()`` is called. This is useful for larger applications
@ -504,7 +514,7 @@ class App(meta.MetaMixin):
``App.Meta.meta_override``.
"""
meta_override = []
meta_override: List[str] = []
"""
List of meta options that can/will be overridden by config options
of the ``base`` config section (where ``base`` is the
@ -515,7 +525,7 @@ class App(meta.MetaMixin):
ignore_deprecation_warnings = False
"""Disable deprecation warnings from being logged by Cement."""
template_module = None
template_module: Optional[str] = None
"""
A python package (dotted import path) where template files can be
loaded from. This is generally something like ``myapp.templates``
@ -525,7 +535,7 @@ class App(meta.MetaMixin):
``template_dirs`` setting has precedence.
"""
template_dirs = None
template_dirs: List[str] = None # type: ignore
"""
A list of directory paths where template files can be loaded
from (appended to the builtin list of directories defined by Cement).
@ -547,7 +557,7 @@ class App(meta.MetaMixin):
once a template is successfully loaded from a directory.
"""
template_dir = None
template_dir: Optional[str] = None
"""
A directory path where template files can be loaded from. By default,
this setting is also overridden by the
@ -578,7 +588,7 @@ class App(meta.MetaMixin):
``True``.
"""
define_hooks = []
define_hooks: List[str] = []
"""
List of hook definitions (labels). Will be passed to
``self.hook.define(<hook_label>)``. Must be a list of strings.
@ -586,7 +596,7 @@ class App(meta.MetaMixin):
I.e. ``['my_custom_hook', 'some_other_hook']``
"""
hooks = []
hooks: List[Tuple[str, Callable]] = []
"""
List of hooks to register when the app is created. Will be passed to
``self.hook.register(<hook_label>, <hook_func>)``. Must be a list of
@ -595,7 +605,7 @@ class App(meta.MetaMixin):
I.e. ``[('post_argument_parsing', my_hook_func)]``.
"""
core_interfaces = [
core_interfaces: List[Type[Interface]] = [
extension.ExtensionInterface,
log.LogInterface,
config.ConfigInterface,
@ -614,7 +624,7 @@ class App(meta.MetaMixin):
``App.Meta.interfaces``.
"""
interfaces = []
interfaces: List[Type[Interface]] = []
"""
List of interfaces to be defined. Must be a list of
uninstantiated interface base classes.
@ -622,7 +632,7 @@ class App(meta.MetaMixin):
I.e. ``[MyCustomInterface, SomeOtherInterface]``
"""
handlers = []
handlers: List[Type[Handler]] = []
"""
List of handler classes to register. Will be passed to
``handler.register(<handler_class>)``. Must be a list of
@ -631,7 +641,7 @@ class App(meta.MetaMixin):
I.e. ``[MyCustomHandler, SomeOtherHandler]``
"""
alternative_module_mapping = {}
alternative_module_mapping: Dict[str, str] = {}
"""
EXPERIMENTAL FEATURE: This is an experimental feature added in Cement
2.9.x and may or may not be removed in future versions of Cement.
@ -641,10 +651,10 @@ class App(meta.MetaMixin):
extensions. Developers can optionally use the
``App.__import__()`` method to import simple modules, and if
that module exists in this mapping it will import the alternative
library in it's place.
library in its place.
This is a low-level feature, and may not produce the results you are
expecting. It's purpose is to allow the developer to replace specific
This is a low-level feature and may not produce the results you are
expecting. Its purpose is to allow the developer to replace specific
modules at a high level. Example: For an application wanting to use
``ujson`` in place of ``json``, the developer could set the following:
@ -732,7 +742,9 @@ class App(meta.MetaMixin):
List of builtin user level directories to scan for plugins.
"""
def __init__(self, label=None, **kw):
_meta: Meta # type: ignore
def __init__(self, label: Optional[str] = None, **kw: Any) -> None:
super(App, self).__init__(**kw)
# enable framework logging from environment?
@ -768,25 +780,25 @@ class App(meta.MetaMixin):
self._validate_label()
self._loaded_bootstrap = None
self._parsed_args = None
self._last_rendered = None
self._extended_members = []
self.__saved_stdout__ = None
self.__saved_stderr__ = None
self.__retry_hooks__ = []
self.handler = None
self.interface = None
self.hook = None
self._parsed_args: Any = None
self._last_rendered: Optional[Tuple[Any, Optional[str]]] = None
self._extended_members: List[str] = []
self.__saved_stdout__: TextIO = None # type: ignore
self.__saved_stderr__: TextIO = None # type: ignore
self.__retry_hooks__: List[Tuple[str, Callable]] = []
self.handler: HandlerManager = None # type: ignore
self.interface: InterfaceManager = None # type: ignore
self.hook: HookManager = None # type: ignore
self.exit_code = 0
self.ext = None
self.config = None
self.log = None
self.plugin = None
self.args = None
self.output = None
self.controller = None
self.cache = None
self.mail = None
self.ext: extension.ExtensionHandler = None # type: ignore
self.config: config.ConfigHandler = None # type: ignore
self.log: log.LogHandler = None # type: ignore
self.plugin: plugin.PluginHandler = None # type: ignore
self.args: arg.ArgumentHandler = None # type: ignore
self.output: output.OutputHandler = None # type: ignore
self.controller: controller.ControllerHandler = None # type: ignore
self.cache: cache.CacheHandler = None # type: ignore
self.mail: mail.MailHandler = None # type: ignore
# setup argv... this has to happen before lay_cement()
if self._meta.argv is None:
@ -809,11 +821,11 @@ class App(meta.MetaMixin):
# self.hook.register('pre_close', display_deprecation_warnings)
@property
def label(self):
def label(self) -> str:
return self._meta.label
@property
def debug(self):
def debug(self) -> bool:
"""
Application debug mode.
@ -822,7 +834,7 @@ class App(meta.MetaMixin):
return self._meta.debug
@property
def quiet(self):
def quiet(self) -> bool:
"""
Application quiet mode.
@ -831,11 +843,11 @@ class App(meta.MetaMixin):
return self._meta.quiet
@property
def argv(self):
def argv(self) -> List[str]:
"""The arguments list that will be used when self.run() is called."""
return self._meta.argv
def extend(self, member_name, member_object):
def extend(self, member_name: str, member_object: Any) -> None:
"""
Extend the ``App()`` object with additional functions/classes such
as ``app.my_custom_function()``, etc. It provides an interface for
@ -853,15 +865,13 @@ class App(meta.MetaMixin):
"""
if hasattr(self, member_name):
raise exc.FrameworkError("App member '%s' already exists!" %
member_name)
LOG.debug("extending application with '.%s' (%s)" %
(member_name, member_object))
raise exc.FrameworkError(f"App member '{member_name}' already exists!")
LOG.debug(f"extending application with '.{member_name}' ({member_object})")
setattr(self, member_name, member_object)
if member_name not in self._extended_members:
self._extended_members.append(member_name)
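The ``extend()`` guard (refuse to shadow an existing member, then track the addition so ``reload()`` can later remove it) in miniature; the class is a stand-in, not ``App``, and ``RuntimeError`` stands in for ``exc.FrameworkError``:

```python
from typing import Any, List


class Extendable:
    def __init__(self) -> None:
        self._extended_members: List[str] = []

    def extend(self, member_name: str, member_object: Any) -> None:
        # never overwrite an existing attribute
        if hasattr(self, member_name):
            raise RuntimeError(f"App member '{member_name}' already exists!")
        setattr(self, member_name, member_object)
        # remember what was added so it can be detached on reload
        if member_name not in self._extended_members:
            self._extended_members.append(member_name)
```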
def _validate_label(self):
def _validate_label(self) -> None:
if not self._meta.label:
raise exc.FrameworkError("Application name missing.")
@ -877,7 +887,7 @@ class App(meta.MetaMixin):
"or underscores."
)
def setup(self):
def setup(self) -> None:
"""
This function wraps all ``_setup`` actions in one call. It is called
before ``self.run()``, allowing the application to be setup but not
@ -888,19 +898,17 @@ class App(meta.MetaMixin):
complete.
"""
LOG.debug("now setting up the '%s' application" % self._meta.label)
LOG.debug(f"now setting up the '{self._meta.label}' application")
if self._meta.bootstrap is not None:
LOG.debug("importing bootstrap code from %s" %
self._meta.bootstrap)
LOG.debug(f"importing bootstrap code from {self._meta.bootstrap}")
if self._meta.bootstrap not in sys.modules \
or self._loaded_bootstrap is None:
if self._meta.bootstrap not in sys.modules or self._loaded_bootstrap is None:
__import__(self._meta.bootstrap, globals(), locals(), [], 0)
if hasattr(sys.modules[self._meta.bootstrap], 'load'):
sys.modules[self._meta.bootstrap].load(self)
self._loaded_bootstrap = sys.modules[self._meta.bootstrap]
self._loaded_bootstrap = sys.modules[self._meta.bootstrap] # type: ignore
else:
reload_module(self._loaded_bootstrap)
@ -925,7 +933,7 @@ class App(meta.MetaMixin):
for res in self.hook.run('post_setup', self):
pass
def run(self):
def run(self) -> Union[None, Any]:
"""
This function wraps everything together (after ``self._setup()`` is
called) to run the application.
@ -955,7 +963,7 @@ class App(meta.MetaMixin):
return return_val
def run_forever(self, interval=1, tb=True):
def run_forever(self, interval: int = 1, tb: bool = True) -> None:
"""
This function wraps ``self.run()`` with an endless while loop. If any
exception is encountered it will be logged and then the application
@ -976,7 +984,7 @@ class App(meta.MetaMixin):
try:
self.run()
except Exception as e:
self.log.fatal('Caught Exception: %s' % e)
self.log.fatal(f'Caught Exception: {e}')
if tb is True:
exc_type, exc_value, exc_traceback = sys.exc_info()
@ -987,24 +995,24 @@ class App(meta.MetaMixin):
sleep(interval)
self.reload()
def reload(self):
def reload(self) -> None:
"""
This function is useful for reloading a running application, for
example to reload configuration settings, etc.
"""
LOG.debug('reloading the %s application' % self._meta.label)
LOG.debug(f'reloading the {self._meta.label} application')
self._unlay_cement()
self._lay_cement()
self.setup()
def _unlay_cement(self):
def _unlay_cement(self) -> None:
for member in self._extended_members:
delattr(self, member)
self._extended_members = []
self.handler.__handlers__ = {}
self.hook.__hooks__ = {}
def close(self, code=None):
def close(self, code: Optional[int] = None) -> None:
"""
Close the application. This runs the ``pre_close`` and ``post_close``
hooks allowing plugins/extensions/etc to cleanup at the end of
@ -1019,7 +1027,7 @@ class App(meta.MetaMixin):
for res in self.hook.run('pre_close', self):
pass
LOG.debug("closing the %s application" % self._meta.label)
LOG.debug(f"closing the {self._meta.label} application")
# reattach our stdout if in quiet mode to avoid lingering file handles
# resolves: https://github.com/datafolklabs/cement/issues/653
@ -1041,7 +1049,11 @@ class App(meta.MetaMixin):
if self._meta.exit_on_close is True:
sys.exit(self.exit_code)
def render(self, data, template=None, out=sys.stdout, handler=None, **kw):
def render(self, data: Any,
template: Optional[str] = None,
out: IO = sys.stdout,
handler: Optional[str] = None,
**kw: Any) -> str:
"""
This is a simple wrapper around ``self.output.render()`` which simply
returns an empty string if no output handler is defined.
@ -1065,7 +1077,7 @@ class App(meta.MetaMixin):
"""
for res in self.hook.run('pre_render', self, data):
if not type(res) is dict:
if type(res) is not dict:
LOG.debug("pre_render hook did not return a dict().")
else:
data = res
@ -1073,13 +1085,13 @@ class App(meta.MetaMixin):
# Issue #636: override sys.stdout if in quiet mode
stdouts = [sys.stdout, self.__saved_stdout__]
if self._meta.quiet is True and out in stdouts:
out = None
out = None # type: ignore
kw['template'] = template
if handler is not None:
oh = self.handler.resolve('output', handler)
oh._setup(self)
oh._setup(self) # type: ignore
else:
oh = self.output
@ -1090,7 +1102,7 @@ class App(meta.MetaMixin):
out_text = oh.render(data, **kw)
for res in self.hook.run('post_render', self, out_text):
if not type(res) is str:
if type(res) is not str:
LOG.debug('post_render hook did not return a str()')
else:
out_text = str(res)
@ -1106,7 +1118,7 @@ class App(meta.MetaMixin):
return out_text
@property
def last_rendered(self):
def last_rendered(self) -> Optional[Tuple[Dict[str, Any], Optional[str]]]:
"""
Return the ``(data, output_text)`` tuple of the last time
``self.render()`` was called.
@ -1118,18 +1130,18 @@ class App(meta.MetaMixin):
return self._last_rendered
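The new annotation on ``last_rendered`` pins down the stored shape: a ``(data, out_text)`` tuple. A toy renderer showing that contract, with ``json.dumps`` standing in for a real output handler:

```python
import json
from typing import Any, Dict, Optional, Tuple


class TinyRenderer:
    def __init__(self) -> None:
        self._last_rendered: Optional[Tuple[Dict[str, Any], Optional[str]]] = None

    def render(self, data: Dict[str, Any]) -> str:
        out_text = json.dumps(data)  # stand-in for an output handler
        # record the (data, output_text) pair, as App.render() does
        self._last_rendered = (data, out_text)
        return out_text

    @property
    def last_rendered(self) -> Optional[Tuple[Dict[str, Any], Optional[str]]]:
        return self._last_rendered
```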
@property
def pargs(self):
def pargs(self) -> Any:
"""
Returns the ``parsed_args`` object as returned by
``self.args.parse()``.
"""
return self._parsed_args
def add_arg(self, *args, **kw):
def add_arg(self, *args: Any, **kw: Any) -> None:
"""A shortcut for ``self.args.add_argument``."""
self.args.add_argument(*args, **kw)
def _suppress_output(self):
def _suppress_output(self) -> None:
if self._meta.debug is True:
LOG.debug('not suppressing console output because of debug mode')
return
@ -1144,7 +1156,7 @@ class App(meta.MetaMixin):
if self.log is not None:
self._setup_log_handler()
def _unsuppress_output(self):
def _unsuppress_output(self) -> None:
LOG.debug('unsuppressing all console output')
# don't accidentally close the actual <stdout>/<stderr>
@ -1159,10 +1171,9 @@ class App(meta.MetaMixin):
if self.log is not None:
self._setup_log_handler()
def _lay_cement(self):
def _lay_cement(self) -> None:
"""Initialize the framework."""
LOG.debug("laying cement for the '%s' application" %
self._meta.label)
LOG.debug(f"laying cement for the '{self._meta.label}' application")
self.interface = InterfaceManager(self)
self.handler = HandlerManager(self)
@ -1199,8 +1210,7 @@ class App(meta.MetaMixin):
self.__retry_hooks__ = []
for hook_spec in self._meta.hooks:
if not self.hook.defined(hook_spec[0]):
LOG.debug('hook %s not defined, will retry after setup' %
hook_spec[0])
LOG.debug(f'hook {hook_spec[0]} not defined, will retry after setup')
self.__retry_hooks__.append(hook_spec)
else:
self.hook.register(*hook_spec)
@ -1220,7 +1230,7 @@ class App(meta.MetaMixin):
for handler_class in self._meta.handlers:
self.handler.register(handler_class)
def _parse_args(self):
def _parse_args(self) -> None:
for res in self.hook.run('pre_argument_parsing', self):
pass
@ -1229,7 +1239,7 @@ class App(meta.MetaMixin):
for res in self.hook.run('post_argument_parsing', self):
pass
def catch_signal(self, signum):
def catch_signal(self, signum: int) -> None:
"""
Add ``signum`` to the list of signals to catch and handle by Cement.
@ -1243,7 +1253,7 @@ class App(meta.MetaMixin):
)
signal.signal(signum, self._meta.signal_handler)
def _setup_signals(self):
def _setup_signals(self) -> None:
if self._meta.catch_signals is None:
LOG.debug("catch_signals=None... not handling any signals")
return
@ -1251,7 +1261,10 @@ class App(meta.MetaMixin):
for signum in self._meta.catch_signals:
self.catch_signal(signum)
def _resolve_handler(self, handler_type, handler_def, raise_error=True):
def _resolve_handler(self,
handler_type: str,
handler_def: Union[str, Type[Handler], Handler],
raise_error: bool = True) -> Handler:
# meta_defaults = {}
# if type(handler_def) == str:
# _meta_label = "%s.%s" % (handler_type, handler_def)
@ -1260,20 +1273,21 @@ class App(meta.MetaMixin):
# _meta_label = "%s.%s" % (handler_type, handler_def.Meta.label)
# meta_defaults = self._meta.meta_defaults.get(_meta_label, {})
han = self.handler.resolve(handler_type,
han: Handler
han = self.handler.resolve(handler_type, # type: ignore
handler_def,
raise_error=raise_error,
setup=True)
return han
def _setup_extension_handler(self):
LOG.debug("setting up %s.extension handler" % self._meta.label)
self.ext = self._resolve_handler('extension',
def _setup_extension_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.extension handler")
self.ext = self._resolve_handler('extension', # type: ignore
self._meta.extension_handler)
self.ext.load_extensions(self._meta.core_extensions)
self.ext.load_extensions(self._meta.extensions)
def _find_config_files(self, path):
def _find_config_files(self, path: str) -> List[str]:
found_files = []
if not os.path.isdir(path):
return []
@ -1284,11 +1298,11 @@ class App(meta.MetaMixin):
found_files.append(fs.join(path, f))
return found_files
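``_find_config_files()`` guards against missing directories before scanning. The filename match below (``<label>.<suffix>``) is an assumption for illustration, since that part of the hunk is elided:

```python
import os
from typing import List


def find_config_files(path: str, label: str, suffix: str) -> List[str]:
    found_files: List[str] = []
    if not os.path.isdir(path):
        return []  # same early-out as App._find_config_files
    for f in sorted(os.listdir(path)):
        if f == f'{label}.{suffix}':  # assumed matching rule
            found_files.append(os.path.join(path, f))
    return found_files
```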
def _setup_config_handler(self):
LOG.debug("setting up %s.config handler" % self._meta.label)
def _setup_config_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.config handler")
label = self._meta.label
ext = self._meta.config_file_suffix
self.config = self._resolve_handler('config',
self.config = self._resolve_handler('config', # type: ignore
self._meta.config_handler)
if self._meta.config_section is None:
self._meta.config_section = label
@ -1410,17 +1424,17 @@ class App(meta.MetaMixin):
# add to meta data
self._meta.extensions.append(ext)
def _setup_mail_handler(self):
LOG.debug("setting up %s.mail handler" % self._meta.label)
self.mail = self._resolve_handler('mail',
def _setup_mail_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.mail handler")
self.mail = self._resolve_handler('mail', # type: ignore
self._meta.mail_handler)
def _setup_log_handler(self):
LOG.debug("setting up %s.log handler" % self._meta.label)
self.log = self._resolve_handler('log', self._meta.log_handler)
def _setup_log_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.log handler")
self.log = self._resolve_handler('log', self._meta.log_handler) # type: ignore
def _setup_plugin_handler(self):
LOG.debug("setting up %s.plugin handler" % self._meta.label)
def _setup_plugin_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.plugin handler")
# plugin dirs
if self._meta.plugin_dirs is None:
@ -1466,36 +1480,36 @@ class App(meta.MetaMixin):
# plugin bootstrap
if self._meta.plugin_module is None:
self._meta.plugin_module = '%s.plugins' % self._meta.label
self._meta.plugin_module = f'{self._meta.label}.plugins'
self.plugin = self._resolve_handler('plugin',
self.plugin = self._resolve_handler('plugin', # type: ignore
self._meta.plugin_handler)
self.plugin.load_plugins(self._meta.plugins)
self.plugin.load_plugins(self.plugin.get_enabled_plugins())
def _setup_output_handler(self):
def _setup_output_handler(self) -> None:
if self._meta.output_handler is None:
LOG.debug("no output handler defined, skipping.")
return
LOG.debug("setting up %s.output handler" % self._meta.label)
self.output = self._resolve_handler('output',
LOG.debug(f"setting up {self._meta.label}.output handler")
self.output = self._resolve_handler('output', # type: ignore
self._meta.output_handler,
raise_error=False)
def _setup_template_handler(self):
def _setup_template_handler(self) -> None:
if self._meta.template_handler is None:
LOG.debug("no template handler defined, skipping.")
return
label = self._meta.label
LOG.debug("setting up %s.template handler" % self._meta.label)
LOG.debug(f"setting up {self._meta.label}.template handler")
self.template = self._resolve_handler('template',
self._meta.template_handler,
raise_error=False)
# template module
if self._meta.template_module is None:
self._meta.template_module = '%s.templates' % label
self._meta.template_module = f'{label}.templates'
# template dirs
if self._meta.template_dirs is None:
@ -1538,19 +1552,19 @@ class App(meta.MetaMixin):
for path in template_dirs:
self.add_template_dir(path)
def _setup_cache_handler(self):
def _setup_cache_handler(self) -> None:
if self._meta.cache_handler is None:
LOG.debug("no cache handler defined, skipping.")
return
LOG.debug("setting up %s.cache handler" % self._meta.label)
self.cache = self._resolve_handler('cache',
LOG.debug(f"setting up {self._meta.label}.cache handler")
self.cache = self._resolve_handler('cache', # type: ignore
self._meta.cache_handler,
raise_error=False)
def _setup_arg_handler(self):
LOG.debug("setting up %s.arg handler" % self._meta.label)
self.args = self._resolve_handler('argument',
def _setup_arg_handler(self) -> None:
LOG.debug(f"setting up {self._meta.label}.arg handler")
self.args = self._resolve_handler('argument', # type: ignore
self._meta.argument_handler)
self.args.prog = self._meta.label
@ -1575,27 +1589,27 @@ class App(meta.MetaMixin):
self._meta.handler_override_options = core
def _setup_controllers(self):
def _setup_controllers(self) -> None:
LOG.debug("setting up application controllers")
if self.handler.registered('controller', 'base'):
self.controller = self._resolve_handler('controller', 'base')
self.controller = self._resolve_handler('controller', 'base') # type: ignore
else:
class DefaultBaseController(Controller):
class Meta:
label = 'base'
def _default(self):
def _default(self) -> None:
# don't enforce anything because the developer might not be
# using controllers... if they are, they should define
# a base controller.
pass
self.handler.register(DefaultBaseController)
self.controller = self._resolve_handler('controller', 'base')
self.controller = self._resolve_handler('controller', 'base') # type: ignore
def validate_config(self):
def validate_config(self) -> None:
"""
Validate application config settings.
@ -1622,7 +1636,7 @@ class App(meta.MetaMixin):
"""
pass
def add_config_dir(self, path):
def add_config_dir(self, path: str) -> None:
"""
Append a directory ``path`` to the list of directories to parse for
config files.
@ -1641,7 +1655,7 @@ class App(meta.MetaMixin):
if path not in self._meta.config_dirs:
self._meta.config_dirs.append(path)
def add_config_file(self, path):
def add_config_file(self, path: str) -> None:
"""
Append a file ``path`` to the list of configuration files to parse.
@ -1659,7 +1673,7 @@ class App(meta.MetaMixin):
if path not in self._meta.config_files:
self._meta.config_files.append(path)
def add_plugin_dir(self, path):
def add_plugin_dir(self, path: str) -> None:
"""
Append a directory ``path`` to the list of directories to scan for
plugins.
@ -1678,7 +1692,7 @@ class App(meta.MetaMixin):
if path not in self._meta.plugin_dirs:
self._meta.plugin_dirs.append(path)
def add_template_dir(self, path):
def add_template_dir(self, path: str) -> None:
"""
Append a directory ``path`` to the list of template directories to
parse for templates.
@ -1697,7 +1711,7 @@ class App(meta.MetaMixin):
if path not in self._meta.template_dirs:
self._meta.template_dirs.append(path)
def remove_template_dir(self, path):
def remove_template_dir(self, path: str) -> None:
"""
Remove a directory ``path`` from the list of template directories to
parse for templates.
@ -1716,25 +1730,28 @@ class App(meta.MetaMixin):
if path in self._meta.template_dirs:
self._meta.template_dirs.remove(path)
def __import__(self, obj, from_module=None):
def __import__(self, obj: Any, from_module: Optional[str] = None) -> ModuleType:
# EXPERIMENTAL == UNDOCUMENTED
mapping = self._meta.alternative_module_mapping
if from_module is not None:
_from = mapping.get(from_module, from_module)
_loaded = __import__(_from, globals(), locals(), [obj], 0)
return getattr(_loaded, obj)
return getattr(_loaded, obj) # type: ignore
else:
obj = mapping.get(obj, obj)
_loaded = __import__(obj, globals(), locals(), [], 0)
return _loaded
def __enter__(self):
def __enter__(self) -> App:
self.setup()
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
def __exit__(self,
exc_type: type[BaseException] | None,
exc_value: BaseException | None,
exc_traceback: TracebackType | None) -> None:
# only close the app if there are no unhandled exceptions
if exc_type is None:
self.close()
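The typed `__enter__`/`__exit__` pair above only closes the app when no exception escaped the `with` block. A standalone sketch of that lifecycle (a toy class, not Cement's `App`):

```python
from types import TracebackType
from typing import Optional


class MiniApp:
    """Toy stand-in for App's context-manager lifecycle."""

    def __init__(self) -> None:
        self.setup_called = False
        self.closed = False

    def setup(self) -> None:
        self.setup_called = True

    def close(self) -> None:
        self.closed = True

    def __enter__(self) -> "MiniApp":
        self.setup()
        return self

    def __exit__(self,
                 exc_type: Optional[type],
                 exc_value: Optional[BaseException],
                 exc_traceback: Optional[TracebackType]) -> None:
        # mirror App.__exit__: only close if there was no unhandled exception
        if exc_type is None:
            self.close()


with MiniApp() as app:
    assert app.setup_called
```

If an exception propagates out of the block, `exc_type` is non-None and `close()` is skipped, matching the comment in the diff.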
@ -1751,17 +1768,17 @@ class TestApp(App):
__test__ = False
class Meta:
label = "app-%s" % misc.rando()[:12]
argv = []
core_system_config_files = []
core_user_config_files = []
config_files = []
core_system_config_dirs = []
core_user_config_dirs = []
config_dirs = []
core_system_template_dirs = []
core_user_template_dirs = []
core_system_plugin_dirs = []
core_user_plugin_dirs = []
plugin_dirs = []
exit_on_close = False
label: str = f"app-{misc.rando()[:12]}"
argv: List[str] = []
core_system_config_files: List[str] = []
core_user_config_files: List[str] = []
config_files: List[str] = []
core_system_config_dirs: List[str] = []
core_user_config_dirs: List[str] = []
config_dirs: List[str] = []
core_system_template_dirs: List[str] = []
core_user_template_dirs: List[str] = []
core_system_plugin_dirs: List[str] = []
core_user_plugin_dirs: List[str] = []
plugin_dirs: List[str] = []
exit_on_close: bool = False


@ -3,15 +3,22 @@ Cement core handler module.
"""
from __future__ import annotations
import re
from abc import ABC
from ..core import exc, meta
from typing import Any, List, Dict, Optional, Type, Union, TYPE_CHECKING
from ..core import exc
from ..core.meta import MetaMixin
from ..utils.misc import minimal_logger
LOG = minimal_logger(__name__)
class Handler(ABC, meta.MetaMixin):
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
class Handler(ABC, MetaMixin):
"""Base handler class that all Cement Handlers should subclass from."""
@ -23,13 +30,13 @@ class Handler(ABC, meta.MetaMixin):
"""
label = NotImplemented
label: str = NotImplemented
"""The string identifier of this handler."""
interface = NotImplemented
interface: str = NotImplemented
"""The interface that this class implements."""
config_section = None
config_section: str = None # type: ignore
"""
A config section to merge config_defaults with.
@ -38,14 +45,14 @@ class Handler(ABC, meta.MetaMixin):
no section is set by the user/developer.
"""
config_defaults = None
config_defaults: Optional[Dict[str, Any]] = None
"""
A config dictionary that is merged into the applications config
in the ``[<config_section>]`` block. These are defaults and do not
override any existing defaults under that section.
"""
overridable = False
overridable: bool = False
"""
Whether or not handler can be overridden by
``App.Meta.handler_override_options``. Will be listed as an
@ -53,19 +60,19 @@ class Handler(ABC, meta.MetaMixin):
``App.Meta.output_handler``, etc).
"""
def __init__(self, **kw):
def __init__(self, **kw: Any) -> None:
super(Handler, self).__init__(**kw)
try:
assert self._meta.label, \
"%s.Meta.label undefined." % self.__class__.__name__
f"{self.__class__.__name__}.Meta.label undefined."
assert self._meta.interface, \
"%s.Meta.interface undefined." % self.__class__.__name__
f"{self.__class__.__name__}.Meta.interface undefined."
except AssertionError as e:
raise exc.FrameworkError(e.args[0])
self.app = None
self.app: App = None # type: ignore
def _setup(self, app):
def _setup(self, app: App) -> None:
"""
Called during application initialization and must ``setup`` the handler
object making it ready for the framework or the application to make
@ -79,19 +86,18 @@ class Handler(ABC, meta.MetaMixin):
self.app = app
if self._meta.config_section is None:
self._meta.config_section = "%s.%s" % \
(self._meta.interface, self._meta.label)
self._meta.config_section = f"{self._meta.interface}.{self._meta.label}"
if self._meta.config_defaults is not None:
LOG.debug("merging config defaults from '%s' " % self +
"into section '%s'" % self._meta.config_section)
LOG.debug(f"merging config defaults from '{self}' " +
f"into section '{self._meta.config_section}'")
dict_obj = dict()
dict_obj[self._meta.config_section] = self._meta.config_defaults
self.app.config.merge(dict_obj, override=False)
self._validate()
def _validate(self):
def _validate(self) -> None:
"""
Perform any validation to ensure proper data, meta-data, etc.
"""
@ -105,11 +111,15 @@ class HandlerManager(object):
"""
def __init__(self, app):
def __init__(self, app: App):
self.app = app
self.__handlers__ = {}
self.__handlers__: Dict[str, dict[str, Type[Handler]]] = {}
def get(self, interface, handler_label, fallback=None, **kwargs):
def get(self,
interface: str,
handler_label: str,
fallback: Optional[Type[Handler]] = None,
**kwargs: Any) -> Union[Handler, Type[Handler]]:
"""
Get a handler object.
@ -144,8 +154,7 @@ class HandlerManager(object):
setup = kwargs.get('setup', False)
if interface not in self.app.interface.list():
raise exc.InterfaceError("Interface '%s' does not exist!" %
interface)
raise exc.InterfaceError(f"Interface '{interface}' does not exist!")
if handler_label in self.__handlers__[interface]:
if setup is True:
@ -159,7 +168,7 @@ class HandlerManager(object):
raise exc.InterfaceError("handlers['%s']['%s'] does not exist!" %
(interface, handler_label))
def list(self, interface):
def list(self, interface: str) -> List[Type[Handler]]:
"""
Return a list of handlers for a given ``interface``.
@ -181,15 +190,16 @@ class HandlerManager(object):
"""
if not self.app.interface.defined(interface):
raise exc.InterfaceError("Interface '%s' does not exist!" %
interface)
raise exc.InterfaceError(f"Interface '{interface}' does not exist!")
res = []
for label in self.__handlers__[interface]:
res.append(self.__handlers__[interface][label])
return res
def register(self, handler_class, force=False):
def register(self,
handler_class: Type[Handler],
force: bool = False) -> None:
"""
Register a handler class to an interface. If the same object is
already registered then no exception is raised, however if a different
@ -229,7 +239,7 @@ class HandlerManager(object):
# for checks
if not issubclass(handler_class, Handler):
raise exc.InterfaceError("Class %s " % handler_class +
raise exc.InterfaceError(f"Class {handler_class} " +
"does not implement Handler")
obj = handler_class()
@ -243,8 +253,7 @@ class HandlerManager(object):
(handler_class, interface, obj._meta.label))
if interface not in self.app.interface.list():
raise exc.InterfaceError("Handler interface '%s' doesn't exist." %
interface)
raise exc.InterfaceError(f"Handler interface '{interface}' doesn't exist.")
elif interface not in self.__handlers__.keys():
self.__handlers__[interface] = {}
@ -253,26 +262,23 @@ class HandlerManager(object):
if force is True:
LOG.debug(
"handlers['%s']['%s'] already exists" %
(interface, obj._meta.label) +
f"handlers['{interface}']['{obj._meta.label}'] already exists" +
", but `force==True`"
)
else:
raise exc.InterfaceError(
"handlers['%s']['%s'] already exists" %
(interface, obj._meta.label)
f"handlers['{interface}']['{obj._meta.label}'] already exists"
)
interface_class = self.app.interface.get(interface)
if not issubclass(handler_class, interface_class):
raise exc.InterfaceError("Handler %s " % handler_class.__name__ +
"does not sub-class %s" %
interface_class.__name__)
raise exc.InterfaceError(f"Handler {handler_class.__name__} " +
f"does not sub-class {interface_class.__name__}")
self.__handlers__[interface][obj._meta.label] = handler_class
def registered(self, interface, handler_label):
def registered(self, interface: str, handler_label: str) -> bool:
"""
Check if a handler is registered.
@ -297,7 +303,7 @@ class HandlerManager(object):
return False
def setup(self, handler_class):
def setup(self, handler_class: Type[Handler]) -> Handler:
"""
Setup a handler class so that it can be used.
@ -318,7 +324,10 @@ class HandlerManager(object):
h._setup(self.app)
return h
def resolve(self, interface, handler_def, **kwargs):
def resolve(self,
interface: str,
handler_def: Union[str, Handler, Type[Handler]],
**kwargs: Any) -> Union[Handler, Optional[Handler]]:
"""
Resolves the actual handler, as it can be either a string identifying
the handler to load from ``self.__handlers__``, or it can be an
@ -361,11 +370,11 @@ class HandlerManager(object):
if meta_defaults is None:
meta_defaults = {}
if type(handler_def) is str:
_meta_label = "%s.%s" % (interface, handler_def)
_meta_label = f"{interface}.{handler_def}"
meta_defaults = self.app._meta.meta_defaults.get(_meta_label,
{})
elif hasattr(handler_def, 'Meta'):
_meta_label = "%s.%s" % (interface, handler_def.Meta.label)
_meta_label = f"{interface}.{handler_def.Meta.label}"
meta_defaults = self.app._meta.meta_defaults.get(_meta_label,
{})
@ -373,18 +382,17 @@ class HandlerManager(object):
han = None
if type(handler_def) is str:
han = self.get(interface, handler_def)(**meta_defaults)
han = self.get(interface, handler_def)(**meta_defaults) # type: ignore
elif hasattr(handler_def, '_meta'):
if not self.registered(interface, handler_def._meta.label):
self.register(handler_def.__class__)
if not self.registered(interface, handler_def._meta.label): # type: ignore
self.register(handler_def.__class__) # type: ignore
han = handler_def
elif hasattr(handler_def, 'Meta'):
han = handler_def(**meta_defaults)
han = handler_def(**meta_defaults) # type: ignore
if not self.registered(interface, han._meta.label):
self.register(handler_def)
self.register(handler_def) # type: ignore
msg = "Unable to resolve handler '%s' of interface '%s'" % \
(handler_def, interface)
msg = f"Unable to resolve handler '{handler_def}' of interface '{interface}'"
if han is not None:
if setup is True:
han._setup(self.app)
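`resolve()` accepts a label string, an already-constructed handler instance, or a handler class. A simplified standalone analogue of that three-way dispatch (the registry and names here are illustrative, not Cement's API):

```python
# label -> class mapping, standing in for HandlerManager.__handlers__
registry = {'output': {'dummy': dict}}


def resolve(interface, handler_def):
    if isinstance(handler_def, str):
        # string label: look the class up and instantiate it
        return registry[interface][handler_def]()
    elif isinstance(handler_def, type):
        # class: instantiate it directly
        return handler_def()
    # anything else is treated as an already-constructed instance
    return handler_def
```

The real method additionally registers unseen classes and merges `meta_defaults` before instantiation, which is what the `# type: ignore` annotations above are papering over.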


@ -1,10 +1,15 @@
"""Cement core hooks module."""
from __future__ import annotations
import operator
import types
from typing import Any, Callable, Dict, List, Generator, TYPE_CHECKING
from ..core import exc
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
@ -15,11 +20,11 @@ class HookManager(object):
"""
def __init__(self, app):
def __init__(self, app: App) -> None:
self.app = app
self.__hooks__ = {}
self.__hooks__: Dict[str, list] = {}
def list(self):
def list(self) -> List[str]:
"""
List all defined hooks.
@ -28,7 +33,7 @@ class HookManager(object):
"""
return list(self.__hooks__.keys())
def define(self, name):
def define(self, name: str) -> None:
"""
Define a hook namespace that the application and plugins can register
hooks in.
@ -50,12 +55,12 @@ class HookManager(object):
app.hook.define('my_hook_name')
"""
LOG.debug("defining hook '%s'" % name)
LOG.debug(f"defining hook '{name}'")
if name in self.__hooks__:
raise exc.FrameworkError("Hook name '%s' already defined!" % name)
raise exc.FrameworkError(f"Hook name '{name}' already defined!")
self.__hooks__[name] = []
def defined(self, hook_name):
def defined(self, hook_name: str) -> bool:
"""
Test whether a hook name is defined.
@ -83,7 +88,7 @@ class HookManager(object):
else:
return False
def register(self, name, func, weight=0):
def register(self, name: str, func: Callable, weight: int = 0) -> bool:
"""
Register a function to a hook. The function will be called, in order
of weight, when the hook is run.
@ -97,6 +102,9 @@ class HookManager(object):
Keyword Args:
weight (int): The weight in which to order the hook function.
Returns:
bool: ``True`` if hook is registered successfully, ``False`` otherwise.
Example:
.. code-block:: python
@ -113,7 +121,7 @@ class HookManager(object):
"""
if name not in self.__hooks__:
LOG.debug("hook name '%s' is not defined! ignoring..." % name)
LOG.debug(f"hook name '{name}' is not defined! ignoring...")
return False
LOG.debug("registering hook '%s' from %s into hooks['%s']" %
@ -121,8 +129,9 @@ class HookManager(object):
# Hooks are as follows: (weight, name, func)
self.__hooks__[name].append((int(weight), func.__name__, func))
return True
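Registered hooks are stored as `(weight, name, func)` tuples and sorted by weight when run. A minimal standalone sketch of that ordering (not the real `HookManager`):

```python
import operator

hooks = {'post_setup': []}  # name -> list of (weight, func_name, func)


def register(name, func, weight=0):
    if name not in hooks:
        return False  # undefined hook names are ignored
    hooks[name].append((int(weight), func.__name__, func))
    return True


def run(name):
    # order by weight, the first item of each tuple
    hooks[name].sort(key=operator.itemgetter(0))
    for _weight, _label, func in hooks[name]:
        yield func()


def late():
    return 'late'


def early():
    return 'early'


register('post_setup', late, weight=10)
register('post_setup', early, weight=-1)
results = list(run('post_setup'))
```

Lower weights run first, which is why `results` comes back as `['early', 'late']` regardless of registration order.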
def run(self, name, *args, **kwargs):
def run(self, name: str, *args: Any, **kwargs: Any) -> Generator:
"""
Run all defined hooks in the namespace.
@ -159,13 +168,12 @@ class HookManager(object):
"""
if name not in self.__hooks__:
raise exc.FrameworkError("Hook name '%s' is not defined!" % name)
raise exc.FrameworkError(f"Hook name '{name}' is not defined!")
# Will order based on weight (the first item in the tuple)
self.__hooks__[name].sort(key=operator.itemgetter(0))
for hook in self.__hooks__[name]:
LOG.debug("running hook '%s' (%s) from %s" %
(name, hook[2], hook[2].__module__))
LOG.debug(f"running hook '{name}' ({hook[2]}) from {hook[2].__module__}")
res = hook[2](*args, **kwargs)
# Check if result is a nested generator - needed to support e.g.


@ -2,13 +2,19 @@
Cement core interface module.
"""
from __future__ import annotations
from abc import ABC
from typing import Any, Dict, Optional, Type, TYPE_CHECKING
from ..core import exc, meta
from ..utils.misc import minimal_logger
LOG = minimal_logger(__name__)
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
class Interface(ABC, meta.MetaMixin):
"""Base interface class that all Cement Interfaces should subclass from."""
@ -21,18 +27,18 @@ class Interface(ABC, meta.MetaMixin):
"""
interface = NotImplemented
interface: str = NotImplemented
"""The string identifier of this interface."""
def __init__(self, **kw):
def __init__(self, **kw: Any) -> None:
super(Interface, self).__init__(**kw)
try:
assert self._meta.interface, \
"%s.Meta.interface undefined." % self.__class__.__name__
f"{self.__class__.__name__}.Meta.interface undefined."
except AssertionError as e:
raise exc.InterfaceError(e.args[0])
def _validate(self):
def _validate(self) -> None:
"""
Perform any validation to ensure proper data, meta-data, etc.
"""
@ -46,11 +52,16 @@ class InterfaceManager(object):
"""
def __init__(self, app):
__interfaces__: Dict[str, Type[Interface]]
def __init__(self, app: App) -> None:
self.app = app
self.__interfaces__ = {}
def get(self, interface, fallback=None, **kwargs):
def get(self,
interface: str,
fallback: Optional[Type[Interface]] = None,
**kwargs: Any) -> Type[Interface]:
"""
Get an interface class.
@ -79,10 +90,9 @@ class InterfaceManager(object):
elif fallback is not None:
return fallback
else:
raise exc.InterfaceError("interface '%s' does not exist!" %
interface)
raise exc.InterfaceError(f"interface '{interface}' does not exist!")
def list(self):
def list(self) -> list[str]:
"""
Return a list of defined interfaces.
@ -98,7 +108,7 @@ class InterfaceManager(object):
"""
return list(self.__interfaces__.keys())
def define(self, ibc):
def define(self, ibc: Type[Interface]) -> None:
"""
Define an ``ibc`` (interface base class).
@ -117,15 +127,14 @@ class InterfaceManager(object):
"""
LOG.debug("defining interface '%s' (%s)" %
(ibc.Meta.interface, ibc.__name__))
LOG.debug(f"defining interface '{ibc.Meta.interface}' ({ibc.__name__})")
if ibc.Meta.interface in self.__interfaces__:
msg = "interface '%s' already defined!" % ibc.Meta.interface
msg = f"interface '{ibc.Meta.interface}' already defined!"
raise exc.InterfaceError(msg)
self.__interfaces__[ibc.Meta.interface] = ibc
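`define()` rejects duplicate interface names. A tiny standalone registry sketch of that check (the real code raises `exc.InterfaceError` and stores the interface base class):

```python
class InterfaceError(Exception):
    pass


interfaces = {}


def define(name, ibc):
    # refuse to redefine an existing interface
    if name in interfaces:
        raise InterfaceError(f"interface '{name}' already defined!")
    interfaces[name] = ibc


def defined(name):
    return name in interfaces


define('output', object)
```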
def defined(self, interface):
def defined(self, interface: str) -> bool:
"""
Test whether ``interface`` is defined.


@ -18,7 +18,7 @@ class LogInterface(Interface):
:class:`LogHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -26,7 +26,7 @@ class LogInterface(Interface):
interface = 'log'
@abstractmethod
def set_level(self):
def set_level(self, level: str) -> None:
"""
Set the log level. Must accept at least one of:
``['INFO', 'WARNING', 'ERROR', 'DEBUG', or 'FATAL']``.
@ -35,12 +35,12 @@ class LogInterface(Interface):
pass # pragma: nocover
@abstractmethod
def get_level(self):
def get_level(self) -> str:
"""Return a string representation of the log level."""
pass # pragma: nocover
@abstractmethod
def info(self, msg):
def info(self, msg: str) -> None:
"""
Log to the ``INFO`` facility.
@ -51,7 +51,7 @@ class LogInterface(Interface):
pass # pragma: nocover
@abstractmethod
def warning(self, msg):
def warning(self, msg: str) -> None:
"""
Log to the ``WARNING`` facility.
@ -62,7 +62,7 @@ class LogInterface(Interface):
pass # pragma: nocover
@abstractmethod
def error(self, msg):
def error(self, msg: str) -> None:
"""
Log to the ``ERROR`` facility.
@ -73,7 +73,7 @@ class LogInterface(Interface):
pass # pragma: nocover
@abstractmethod
def fatal(self, msg):
def fatal(self, msg: str) -> None:
"""
Log to the ``FATAL`` facility.
@ -84,7 +84,7 @@ class LogInterface(Interface):
pass # pragma: nocover
@abstractmethod
def debug(self, msg):
def debug(self, msg: str) -> None:
"""
Log to the ``DEBUG`` facility.
@ -102,4 +102,5 @@ class LogHandler(LogInterface, Handler):
"""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover


@ -1,10 +1,15 @@
"""Cement core mail module."""
from __future__ import annotations
from abc import abstractmethod
from typing import Any, Dict, TYPE_CHECKING
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
@ -17,7 +22,7 @@ class MailInterface(Interface):
:class:`MailHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -25,7 +30,7 @@ class MailInterface(Interface):
"""The label identifier of the interface."""
@abstractmethod
def send(self, body, **kwargs):
def send(self, body: str, **kwargs: Any) -> bool:
"""
Send a mail message. Keyword arguments override configuration
defaults (cc, bcc, etc).
@ -84,7 +89,7 @@ class MailHandler(MailInterface, Handler):
"""
class Meta:
class Meta(Handler.Meta):
"""
Handler meta-data (can be passed as keyword arguments to the parent
@ -92,7 +97,7 @@ class MailHandler(MailInterface, Handler):
"""
#: Configuration default values
config_defaults = {
config_defaults: Dict[str, Any] = {
'to': [],
'from_addr': 'noreply@example.com',
'cc': [],
@ -101,11 +106,11 @@ class MailHandler(MailInterface, Handler):
'subject_prefix': '',
}
def _setup(self, app_obj):
def _setup(self, app_obj: App) -> None:
super()._setup(app_obj)
self._validate_config()
def _validate_config(self):
def _validate_config(self) -> None:
# convert comma separated strings to lists (ConfigParser)
for item in ['to', 'cc', 'bcc']:
if item in self.app.config.keys(self._meta.config_section):


@ -1,5 +1,7 @@
"""Cement core meta functionality."""
from typing import Any, Dict
class Meta(object):
@ -9,10 +11,10 @@ class Meta(object):
"""
def __init__(self, **kwargs):
def __init__(self, **kwargs: Any) -> None:
self._merge(kwargs)
def _merge(self, dict_obj):
def _merge(self, dict_obj: Dict[str, Any]) -> None:
for key in dict_obj.keys():
setattr(self, key, dict_obj[key])
@ -25,7 +27,7 @@ class MetaMixin(object):
"""
def __init__(self, *args, **kwargs):
def __init__(self, *args: Any, **kwargs: Any) -> None:
# Get a list of all the classes in our MRO, find any attribute named
# Meta on them, and merge them together in MRO order
metas = reversed([x.Meta for x in self.__class__.mro()
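The MRO walk in `MetaMixin.__init__` merges every `Meta` class from base to subclass, so subclass attributes win and constructor keywords win over both. A simplified standalone sketch of that merge (attribute names are illustrative):

```python
class MetaMixinSketch:
    class Meta:
        pass

    def __init__(self, **kwargs):
        # collect Meta classes in reverse MRO order so subclasses
        # override their bases, then apply constructor overrides last
        metas = reversed([x.Meta for x in self.__class__.mro()
                          if hasattr(x, 'Meta')])
        merged = {}
        for m in metas:
            merged.update({k: v for k, v in vars(m).items()
                           if not k.startswith('_')})
        merged.update(kwargs)
        self._meta = type('Meta', (), merged)()


class Base(MetaMixinSketch):
    class Meta:
        label = 'base'
        debug = False


class Child(Base):
    class Meta:
        debug = True


child = Child(extra='x')
```

Here `child._meta` carries `label` from `Base`, `debug` overridden by `Child`, and `extra` injected via the constructor.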


@ -1,6 +1,7 @@
"""Cement core output module."""
from abc import abstractmethod
from typing import Any, Dict, Union
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
@ -17,7 +18,7 @@ class OutputInterface(Interface):
:class:`OutputHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -25,7 +26,7 @@ class OutputInterface(Interface):
interface = 'output'
@abstractmethod
def render(self, data, *args, **kwargs):
def render(self, data: Dict[str, Any], *args: Any, **kwargs: Any) -> Union[str, None]:
"""
Render the ``data`` dict into output in some fashion. This function
must accept both ``*args`` and ``**kwargs`` to allow an application to
@ -47,4 +48,5 @@ class OutputHandler(OutputInterface, Handler):
"""Output handler implementation."""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover


@ -1,6 +1,7 @@
"""Cement core plugins module."""
from abc import abstractmethod
from typing import List
from ..core.interface import Interface
from ..core.handler import Handler
from ..utils.misc import minimal_logger
@ -17,13 +18,13 @@ class PluginInterface(Interface):
:class:`PluginHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
#: String identifier of the interface.
interface = 'plugin'
@abstractmethod
def load_plugin(plugin_name):
def load_plugin(self, plugin_name: str) -> None:
"""
Load a plugin whose name is ``plugin_name``.
@ -34,7 +35,7 @@ class PluginInterface(Interface):
pass # pragma: nocover
@abstractmethod
def load_plugins(self, plugins):
def load_plugins(self, plugins: List[str]) -> None:
"""
Load all plugins from ``plugins``.
@ -45,17 +46,17 @@ class PluginInterface(Interface):
pass # pragma: nocover
@abstractmethod
def get_loaded_plugins(self):
def get_loaded_plugins(self) -> List[str]:
"""Returns a list of plugins that have been loaded."""
pass # pragma: nocover
@abstractmethod
def get_enabled_plugins(self):
def get_enabled_plugins(self) -> List[str]:
"""Returns a list of plugins that are enabled in the config."""
pass # pragma: nocover
@abstractmethod
def get_disabled_plugins(self):
def get_disabled_plugins(self) -> List[str]:
"""Returns a list of plugins that are disabled in the config."""
pass # pragma: nocover
@ -67,4 +68,5 @@ class PluginHandler(PluginInterface, Handler):
"""
pass # pragma: nocover
class Meta(Handler.Meta):
pass # pragma: nocover


@ -6,6 +6,7 @@ import pkgutil
import re
import shutil
from abc import abstractmethod
from typing import Any, List, Dict, Optional, Tuple, Union
from ..core import exc
from ..core.interface import Interface
from ..core.handler import Handler
@ -14,6 +15,8 @@ from ..utils import fs
LOG = minimal_logger(__name__)
LoadTemplateReturnType = Tuple[Union[bytes, str, None], Union[str, None]]
class TemplateInterface(Interface):
@ -24,7 +27,7 @@ class TemplateInterface(Interface):
:class:`TemplateHandler` base class as a starting point.
"""
class Meta:
class Meta(Interface.Meta):
"""Handler meta-data."""
@ -32,7 +35,7 @@ class TemplateInterface(Interface):
interface = 'template'
@abstractmethod
def render(self, content, data):
def render(self, content: str, data: Dict[str, Any]) -> Union[str, None]:
"""
Render ``content`` as a template using the ``data`` dict.
@ -41,13 +44,13 @@ class TemplateInterface(Interface):
data (dict): The data dictionary to render with template.
Returns:
str: The rendered template string.
str, None: The rendered template string, or ``None`` if nothing is rendered.
"""
pass # pragma: nocover
@abstractmethod
def copy(self, src, dest, data):
def copy(self, src: str, dest: str, data: Dict[str, Any]) -> bool:
"""
Render the ``src`` directory path, and copy to ``dest``. This method
must render directory and file **names** as template content, as well
@ -57,12 +60,14 @@ class TemplateInterface(Interface):
src (str): The source template directory path.
dest (str): The destination directory path.
data (dict): The data dictionary to render with template.
Returns: None
Returns:
bool: Returns ``True`` if the copy completed successfully.
"""
pass # pragma: nocover
@abstractmethod
def load(self, path):
def load(self, path: str) -> Tuple[Union[str, bytes], str, Optional[str]]:
"""
Loads a template file first from ``self.app._meta.template_dirs`` and
secondly from ``self.app._meta.template_module``. The
@ -91,27 +96,27 @@ class TemplateHandler(TemplateInterface, Handler):
Keyword arguments passed to this class will override meta-data options.
"""
class Meta:
class Meta(Handler.Meta):
#: Unique identifier (str), used internally.
label = None
label: str = None # type: ignore
#: The interface that this handler implements.
interface = 'template'
#: List of file patterns to exclude (copy but not render as template)
exclude = None
exclude: List[str] = None # type: ignore
#: List of file patterns to ignore completely (not copy at all)
ignore = None
ignore: List[str] = None # type: ignore
def __init__(self, *args, **kwargs):
def __init__(self, *args: Any, **kwargs: Any) -> None:
super(TemplateHandler, self).__init__(*args, **kwargs)
if self._meta.ignore is None:
self._meta.ignore = []
if self._meta.exclude is None:
self._meta.exclude = []
def render(self, content, data):
def render(self, content: Union[str, bytes], data: Dict[str, Any]) -> Union[str, None]:
"""
Render ``content`` as a template using the ``data`` dictionary.
@ -120,19 +125,25 @@ class TemplateHandler(TemplateInterface, Handler):
data (dict): The data dictionary to interpolate in the template.
Returns:
str: The rendered content.
str, None: The rendered content, or ``None`` if nothing is rendered.
"""
# must be provided by a subclass
raise NotImplementedError # pragma: nocover
def _match_patterns(self, item, patterns):
def _match_patterns(self, item: str, patterns: List[str]) -> bool:
for pattern in patterns:
if re.match(pattern, item):
return True
return False
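`_match_patterns` uses `re.match`, which anchors at the start of the string but does not require a full match. A self-contained illustration of those semantics:

```python
import re


def match_patterns(item, patterns):
    # True if any pattern matches at the *start* of item
    return any(re.match(pattern, item) for pattern in patterns)


assert match_patterns('docs/index.md', [r'docs/'])    # prefix match succeeds
assert not match_patterns('src/docs/x', [r'docs/'])   # re.match only anchors at position 0
assert match_patterns('build', [r'dist', r'build$'])  # any single pattern may match
```

This is why the ignore/exclude patterns in `Meta.ignore` and `Meta.exclude` behave as start-anchored regexes rather than shell globs.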
def copy(self, src, dest, data, force=False, exclude=None, ignore=None):
def copy(self,
src: str,
dest: str,
data: Dict[str, Any],
force: bool = False,
exclude: Optional[List[str]] = None,
ignore: Optional[List[str]] = None) -> bool:
"""
Render ``src`` directory as template, including directory and file
names, and copy to ``dest`` directory.
@ -161,8 +172,8 @@ class TemplateHandler(TemplateInterface, Handler):
escaped_src = src.encode('unicode-escape').decode('utf-8')
# double escape for regex matching
escaped_src_pattern = escaped_src.encode('unicode-escape')
escaped_src_pattern = escaped_src_pattern.decode('utf-8')
encoded_src_pattern = escaped_src.encode('unicode-escape')
escaped_src_pattern = encoded_src_pattern.decode('utf-8')
if exclude is None:
exclude = []
@ -171,17 +182,18 @@ class TemplateHandler(TemplateInterface, Handler):
ignore_patterns = self._meta.ignore + ignore
exclude_patterns = self._meta.exclude + exclude
assert os.path.exists(src), "Source path %s does not exist!" % src
assert os.path.exists(src), f"Source path {src} does not exist!"
if not os.path.exists(dest):
os.makedirs(dest)
LOG.debug('copying source template %s -> %s' % (src, dest))
LOG.debug(f'copying source template {src} -> {dest}')
# here's the fun
for cur_dir, sub_dirs, files in os.walk(src):
escaped_cur_dir = cur_dir.encode('unicode-escape').decode('utf-8')
cur_dir_stub: str
if cur_dir == '.':
continue # pragma: nocover
elif cur_dir == src:
@ -190,12 +202,12 @@ class TemplateHandler(TemplateInterface, Handler):
cur_dir_dest = dest
elif self._match_patterns(cur_dir, ignore_patterns):
LOG.debug(
'not copying ignored directory: %s' % cur_dir)
f'not copying ignored directory: {cur_dir}')
continue
elif self._match_patterns(cur_dir, exclude_patterns):
LOG.debug(
'not rendering excluded directory as template: ' +
'%s' % cur_dir)
f'{cur_dir}')
cur_dir_stub = re.sub(escaped_src_pattern,
'',
@ -207,12 +219,12 @@ class TemplateHandler(TemplateInterface, Handler):
else:
# render the cur dir
LOG.debug(
'rendering directory as template: %s' % cur_dir)
f'rendering directory as template: {cur_dir}')
cur_dir_stub = re.sub(escaped_src_pattern,
'',
escaped_cur_dir)
cur_dir_stub = self.render(cur_dir_stub, data)
cur_dir_stub = self.render(cur_dir_stub, data) # type: ignore
cur_dir_stub = cur_dir_stub.lstrip('/')
cur_dir_stub = cur_dir_stub.lstrip('\\\\')
cur_dir_stub = cur_dir_stub.lstrip('\\')
@ -220,37 +232,37 @@ class TemplateHandler(TemplateInterface, Handler):
# render sub-dirs
for sub_dir in sub_dirs:
escaped_sub_dir = sub_dir.encode('unicode-escape')
escaped_sub_dir = escaped_sub_dir.decode('utf-8')
encoded_sub_dir = sub_dir.encode('unicode-escape')
escaped_sub_dir = encoded_sub_dir.decode('utf-8')
full_path = os.path.join(cur_dir, sub_dir)
if self._match_patterns(full_path, ignore_patterns):
LOG.debug(
'not copying ignored sub-directory: ' +
'%s' % full_path)
f'{full_path}')
continue
elif self._match_patterns(full_path, exclude_patterns):
LOG.debug(
'not rendering excluded sub-directory as template: ' +
'%s' % full_path)
f'{full_path}')
sub_dir_dest = os.path.join(cur_dir_dest, sub_dir)
else:
LOG.debug(
'rendering sub-directory as template: %s' % full_path)
f'rendering sub-directory as template: {full_path}')
new_sub_dir = re.sub(escaped_src_pattern,
'',
self.render(escaped_sub_dir, data))
self.render(escaped_sub_dir, data)) # type: ignore
sub_dir_dest = os.path.join(cur_dir_dest, new_sub_dir)
if not os.path.exists(sub_dir_dest):
LOG.debug('creating sub-directory %s' % sub_dir_dest)
LOG.debug(f'creating sub-directory {sub_dir_dest}')
os.makedirs(sub_dir_dest)
for _file in files:
_rendered = self.render(_file, data)
new_file = re.sub(escaped_src_pattern, '', _rendered)
new_file = re.sub(escaped_src_pattern, '', _rendered) # type: ignore
_file = fs.abspath(os.path.join(cur_dir, _file))
_file_dest = fs.abspath(os.path.join(cur_dir_dest, new_file))
@ -260,61 +272,60 @@ class TemplateHandler(TemplateInterface, Handler):
if os.path.exists(_file_dest):
if force is True:
LOG.debug(
'overwriting existing file: %s ' % _file_dest)
f'overwriting existing file: {_file_dest} ')
else:
assert False, \
'Destination file already exists: %s ' % _file_dest
f'Destination file already exists: {_file_dest} '
if self._match_patterns(_file, ignore_patterns):
LOG.debug(
'not copying ignored file: ' +
'%s' % _file)
f'{_file}')
continue
elif self._match_patterns(_file, exclude_patterns):
LOG.debug(
'not rendering excluded file: ' +
'%s' % _file)
f'{_file}')
shutil.copy(_file, _file_dest)
else:
LOG.debug('rendering file as template: %s' % _file)
LOG.debug(f'rendering file as template: {_file}')
f = open(_file, 'r')
content = f.read()
f.close()
_file_content = self.render(content, data)
f = open(_file_dest, 'w')
f.write(_file_content)
f.write(_file_content) # type: ignore
f.close()
return True
def _load_template_from_file(self, template_path):
def _load_template_from_file(self,
template_path: str) -> LoadTemplateReturnType:
for template_dir in self.app._meta.template_dirs:
template_prefix = template_dir.rstrip('/')
template_path = template_path.lstrip('/')
full_path = fs.abspath(os.path.join(template_prefix,
template_path))
LOG.debug(
"attemping to load output template from file %s" % full_path)
f"attemping to load output template from file {full_path}")
if os.path.exists(full_path):
content = open(full_path, 'r').read()
LOG.debug("loaded output template from file %s" %
full_path)
LOG.debug(f"loaded output template from file {full_path}")
return (content, full_path)
else:
LOG.debug("output template file %s does not exist" %
full_path)
LOG.debug(f"output template file {full_path} does not exist")
continue
return (None, None)
def _load_template_from_module(self, template_path):
def _load_template_from_module(self,
template_path: str) -> LoadTemplateReturnType:
template_module = self.app._meta.template_module
template_path = template_path.lstrip('/')
full_module_path = "%s.%s" % (template_module,
re.sub('/', '.', template_path))
full_module_path = f"{template_module}.{re.sub('/', '.', template_path)}"
LOG.debug("attemping to load output template '%s' from module %s" %
(template_path, template_module))
@ -322,15 +333,14 @@ class TemplateHandler(TemplateInterface, Handler):
# see if the module exists first
if template_module not in sys.modules:
try:
__import__(template_module, globals(), locals(), [], 0)
__import__(template_module, globals(), locals(), [], 0) # type: ignore
except ImportError:
LOG.debug("unable to import template module '%s'."
% template_module)
LOG.debug(f"unable to import template module '{template_module}'.")
return (None, None)
# get the template content
try:
content = pkgutil.get_data(template_module, template_path)
content = pkgutil.get_data(template_module, template_path) # type: ignore
LOG.debug("loaded output template '%s' from module %s" %
(template_path, template_module))
return (content, full_module_path)
@ -339,7 +349,7 @@ class TemplateHandler(TemplateInterface, Handler):
(template_path, template_module))
return (None, None)
def load(self, template_path):
def load(self, template_path: str) -> Tuple[Union[str, bytes], str, Optional[str]]:
"""
Loads a template file first from ``self.app._meta.template_dirs`` and
secondly from ``self.app._meta.template_module``. The
@ -360,10 +370,10 @@ class TemplateHandler(TemplateInterface, Handler):
either the ``template_module`` or ``template_dirs``.
"""
if not template_path:
raise exc.FrameworkError("Invalid template path '%s'." %
template_path)
raise exc.FrameworkError(f"Invalid template path '{template_path}'.")
# first attempt to load from file
content: Union[str, bytes, None]
content, path = self._load_template_from_file(template_path)
if content is None:
# second attempt to load from module
@ -375,7 +385,6 @@ class TemplateHandler(TemplateInterface, Handler):
# if content is None, that means we didn't find a template file in
# either and that is an exception
if content is None:
raise exc.FrameworkError("Could not locate template: %s" %
template_path)
raise exc.FrameworkError(f"Could not locate template: {template_path}")
return (content, template_type, path)
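The ``load()`` flow above — try ``template_dirs`` on disk first, then fall back to package data from ``template_module`` — can be sketched roughly as follows. This is an illustrative standalone sketch, not Cement's actual API or signature:

```python
import os
import pkgutil

def load_template(template_path, template_dirs, template_module):
    # Try each template directory on disk first.
    template_path = template_path.lstrip('/')
    for template_dir in template_dirs:
        full_path = os.path.join(template_dir.rstrip('/'), template_path)
        if os.path.exists(full_path):
            with open(full_path) as f:
                return (f.read(), 'directory', full_path)
    # Fall back to package data bundled inside the template module.
    try:
        content = pkgutil.get_data(template_module, template_path)
        if content is not None:
            return (content, 'module', f"{template_module}.{template_path}")
    except ImportError:
        pass
    # Neither source had the template.
    return (None, None, None)
```

The real handler raises ``FrameworkError`` at this point instead of returning the ``(None, None, None)`` sentinel.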


@ -2,13 +2,18 @@
Cement alarm extension module.
"""
from __future__ import annotations
import signal
from typing import Any, TYPE_CHECKING
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
def alarm_handler(app, signum, frame):
def alarm_handler(app: App, signum: int, frame: Any) -> None:
if signum == signal.SIGALRM:
app.log.error(app.alarm.msg)
@ -20,11 +25,11 @@ class AlarmManager(object):
"""
def __init__(self, *args, **kw):
def __init__(self, *args: Any, **kw: Any) -> None:
super(AlarmManager, self).__init__(*args, **kw)
self.msg = None
self.msg: str = None # type: ignore
def set(self, time, msg):
def set(self, time: int, msg: str) -> None:
"""
Set the application alarm to ``time`` seconds. If the time is
exceeded ``signal.SIGALRM`` is raised.
@ -34,11 +39,11 @@ class AlarmManager(object):
msg (str): The message to display if the alarm is triggered.
"""
LOG.debug('setting application alarm for %s seconds' % time)
LOG.debug(f'setting application alarm for {time} seconds')
self.msg = msg
signal.alarm(int(time))
def stop(self):
def stop(self) -> None:
"""
Stop the application alarm.
"""
@ -46,7 +51,7 @@ class AlarmManager(object):
signal.alarm(0)
def load(app):
def load(app: App) -> None:
app.catch_signal(signal.SIGALRM)
app.extend('alarm', AlarmManager())
app.hook.register('signal', alarm_handler)
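The alarm extension builds on ``signal.alarm()``. The underlying pattern — install a ``SIGALRM`` handler, arm the alarm, and always disarm it afterwards — can be sketched like this (Unix only; the names here are illustrative, not Cement's ``AlarmManager`` API):

```python
import signal

class AlarmTimeout(Exception):
    """Raised when the alarm fires before the work completes."""

def _alarm_handler(signum, frame):
    raise AlarmTimeout("operation timed out")

def run_with_alarm(seconds, func):
    # Install the SIGALRM handler and arm the alarm; the finally
    # block guarantees a stray alarm can't fire in later code.
    old_handler = signal.signal(signal.SIGALRM, _alarm_handler)
    signal.alarm(int(seconds))
    try:
        return func()
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)
```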


@ -2,21 +2,27 @@
Cement argparse extension module.
"""
from __future__ import annotations
import re
from dataclasses import dataclass
from argparse import ArgumentParser, RawDescriptionHelpFormatter, SUPPRESS
from typing import Any, Callable, List, Dict, Tuple, Optional, TYPE_CHECKING
from ..core.arg import ArgumentHandler
from ..core.controller import ControllerHandler
from ..core.exc import FrameworkError
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App, ArgparseArgumentType # pragma: nocover
LOG = minimal_logger(__name__)
def _clean_label(label):
def _clean_label(label: str) -> str:
return re.sub('_', '-', label)
def _clean_func(func):
def _clean_func(func: str) -> Optional[str]:
if func is None:
return None
else:
@ -35,7 +41,7 @@ class ArgparseArgumentHandler(ArgumentParser, ArgumentHandler):
on initialization.
"""
class Meta:
class Meta(ArgumentHandler.Meta):
"""Handler meta-data."""
@ -56,19 +62,21 @@ class ArgparseArgumentHandler(ArgumentParser, ArgumentHandler):
``unknown_args``.
"""
def __init__(self, *args, **kw):
_meta: Meta # type: ignore
def __init__(self, *args: Any, **kw: Any) -> None:
super().__init__(*args, **kw)
self.config = None
self.unknown_args = None
self.parsed_args = None
def parse(self, arg_list):
def parse(self, *args: List[str]) -> object:
"""
Parse a list of arguments, and return them as an object. Meaning an
argument name of 'foo' will be stored as parsed_args.foo.
Args:
arg_list (list): A list of arguments (generally sys.argv) to be
args (list): A list of arguments (generally sys.argv) to be
parsed.
Returns:
@ -77,15 +85,14 @@ class ArgparseArgumentHandler(ArgumentParser, ArgumentHandler):
"""
if self._meta.ignore_unknown_arguments is True:
args, unknown = self.parse_known_args(arg_list)
self.parsed_args = args
self.unknown_args = unknown
known_args, unknown_args = self.parse_known_args(*args)
self.parsed_args = known_args # type: ignore
self.unknown_args = unknown_args # type: ignore
else:
args = self.parse_args(arg_list)
self.parsed_args = args
self.parsed_args = self.parse_args(*args) # type: ignore
return self.parsed_args
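The ``ignore_unknown_arguments`` branch rests on argparse's ``parse_known_args()``, which returns the recognized namespace plus the leftover tokens instead of erroring out. A minimal standalone sketch (the ``--foo`` option is made up for illustration):

```python
from argparse import ArgumentParser

def parse_ignoring_unknown(argv):
    # parse_known_args() tolerates unrecognized tokens and hands
    # them back, where parse_args() would exit with an error.
    parser = ArgumentParser()
    parser.add_argument('--foo')
    known, unknown = parser.parse_known_args(argv)
    return known, unknown
```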
def add_argument(self, *args, **kw):
def add_argument(self, *args: Any, **kw: Any) -> None: # type: ignore
"""
Add an argument to the parser. Arguments and keyword arguments are
passed directly to ``ArgumentParser.add_argument()``.
@ -94,6 +101,17 @@ class ArgparseArgumentHandler(ArgumentParser, ArgumentHandler):
super().add_argument(*args, **kw)
@dataclass
class CommandMeta:
label: str
func_name: str
exposed: bool
hide: bool
arguments: List[ArgparseArgumentType]
parser_options: Dict[str, Any]
controller: ArgparseController
class expose(object):
"""
@ -135,25 +153,31 @@ class expose(object):
"""
# pylint: disable=W0622
def __init__(self, hide=False, arguments=[], label=None, **parser_options):
def __init__(self,
hide: bool = False,
arguments: List[ArgparseArgumentType] = [],
label: Optional[str] = None,
**parser_options: Any) -> None:
self.hide = hide
self.arguments = arguments
self.label = label
self.parser_options = parser_options
def __call__(self, func):
def __call__(self, func: Callable) -> Callable:
if self.label is None:
self.label = func.__name__
metadict = {}
metadict['label'] = _clean_label(self.label)
metadict['func_name'] = func.__name__
metadict['exposed'] = True
metadict['hide'] = self.hide
metadict['arguments'] = self.arguments
metadict['parser_options'] = self.parser_options
metadict['controller'] = None # added by the controller
func.__cement_meta__ = metadict
meta = CommandMeta(
label=_clean_label(self.label),
func_name=func.__name__,
exposed=True,
hide=self.hide,
arguments=self.arguments,
parser_options=self.parser_options,
controller=None # type: ignore
)
func.__cement_meta__ = meta
return func
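The switch from a ``metadict`` to the ``CommandMeta`` dataclass follows a common decorator pattern: attach typed metadata to the function object itself, so later introspection gets attribute access instead of string keys. A reduced sketch of the pattern (names are illustrative, not Cement's):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Meta:
    label: str
    hide: bool = False

def expose_like(hide: bool = False, label: Optional[str] = None):
    # Decorator factory: stamp a typed Meta record onto the function.
    def wrap(func: Callable) -> Callable:
        func.__meta__ = Meta(label=label or func.__name__, hide=hide)
        return func
    return wrap

@expose_like(hide=True)
def build():
    return "ok"
```

A collector can now read ``build.__meta__.label`` with attribute access, and typos become ``AttributeError`` rather than silent dict misses.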
@ -206,7 +230,7 @@ class ArgparseController(ControllerHandler):
"""
class Meta:
class Meta(ControllerHandler.Meta):
"""
Controller meta-data (can be passed as keyword arguments to the parent
@ -218,19 +242,19 @@ class ArgparseController(ControllerHandler):
interface = 'controller'
#: The string identifier for the controller.
label = None
label: str = None # type: ignore
#: A list of aliases for the controller/sub-parser. **Only available
#: in Python > 3**.
aliases = []
aliases: List[str] = []
#: A config [section] to merge config_defaults into. Cement will
#: default to controller.<label> if None is set.
config_section = None
config_section: str = None # type: ignore
#: Configuration defaults (type: dict) that are merged into the
#: applications config object for the config_section mentioned above.
config_defaults = {}
config_defaults: Dict[str, Any] = {}
#: Arguments to pass to the argument_handler. The format is a list
#: of tuples whose items are a ( list, dict ). Meaning:
@ -241,7 +265,7 @@ class ArgparseController(ControllerHandler):
#: parser as in the following example:
#:
#: ``add_argument('-f', '--foo', help='foo option', dest='foo')``
arguments = []
arguments: List[ArgparseArgumentType] = []
#: A label of another controller to 'stack' commands/arguments on top
#: of.
@ -261,17 +285,17 @@ class ArgparseController(ControllerHandler):
#: Text for the controller/sub-parser group in help output (for
#: nested stacked controllers only).
help = None
help: str = None # type: ignore
#: Whether or not to hide the controller entirely.
hide = False
#: The text that is displayed at the bottom when ``--help`` is passed.
epilog = None
epilog: Optional[str] = None
#: The text that is displayed at the top when ``--help`` is passed.
#: Defaults to Argparse standard usage.
usage = None
usage: Optional[str] = None
#: The argument formatter class to use to display ``--help`` output.
argument_formatter = RawDescriptionHelpFormatter
@ -281,14 +305,14 @@ class ArgparseController(ControllerHandler):
#: controller namespace. WARNING: This could break things, use at
#: your own risk. Useful if you need additional features from
#: Argparse that is not built into the controller Meta-data.
subparser_options = {}
subparser_options: Dict = {}
#: Additional keyword arguments passed when
#: ``ArgumentParser.add_parser()`` is called to create this
#: controller sub-parser. WARNING: This could break things, use at
#: your own risk. Useful if you need additional features from
#: Argparse that is not built into the controller Meta-data.
parser_options = {}
parser_options: Dict = {}
#: Function to call if no sub-command is passed. By default this is
#: ``_default``, which is equivalent to passing ``-h/--help``. It
@ -301,49 +325,51 @@ class ArgparseController(ControllerHandler):
#: Note: Currently, default function/sub-command only works on
#: Python > 3.4. Previous versions of Python/Argparse will throw the
#: exception ``error: too few arguments``.
default_func = '_default'
default_func: str = '_default'
def __init__(self, *args, **kw):
def __init__(self, *args: Any, **kw: Any) -> None:
super().__init__(*args, **kw)
self.app = None
self._parser = None
self.app: App = None # type: ignore
self._parser: ArgumentParser = None # type: ignore
if self._meta.label == 'base':
self._sub_parser_parents = dict()
self._sub_parsers = dict()
self._controllers = []
self._controllers_map = {}
self._sub_parser_parents: Dict[str, Any] = dict()
self._sub_parsers: Dict[str, Any] = dict()
self._controllers: List[ArgparseController] = []
self._controllers_map: Dict[str, ArgparseController] = {}
if self._meta.help is None:
self._meta.help = '%s controller' % _clean_label(self._meta.label)
self._meta.help = f'{_clean_label(self._meta.label)} controller'
def _default(self):
def _default(self) -> None:
self._parser.print_help()
def _validate(self):
def _validate(self) -> None:
try:
assert self._meta.stacked_type in ['embedded', 'nested'], \
"Invalid stacked type %s. " % self._meta.stacked_type \
f"Invalid stacked type {self._meta.stacked_type}. " \
+ "Expecting one of: [embedded, nested]"
except AssertionError as e:
raise FrameworkError(e.args[0])
def _setup_controllers(self):
def _setup_controllers(self) -> None:
# need a list to maintain order
resolved_controllers = []
resolved_controllers: List[ArgparseController] = []
# need a dict to do key/label based lookup
resolved_controllers_map = {}
resolved_controllers_map: Dict[str, ArgparseController] = {}
# list to maintain which controllers we haven't resolved yet
unresolved_controllers = []
for contr in self.app.handler.list('controller'):
unresolved_controllers: List[ArgparseController] = []
for ctrl in self.app.handler.list('controller'):
# don't include self/base
if contr == self.__class__:
if ctrl == self.__class__:
continue
contr = self.app.handler.resolve('controller', contr, setup=True)
unresolved_controllers.append(contr)
handler: ArgparseController
handler = self.app.handler.resolve('controller', ctrl, setup=True) # type: ignore
unresolved_controllers.append(handler)
# treat self/base separately
resolved_controllers.append(self)
@ -356,12 +382,13 @@ class ArgparseController(ControllerHandler):
current_parent = self._meta.label
while unresolved_controllers:
LOG.debug('unresolved controllers > %s' % unresolved_controllers)
LOG.debug('current parent > %s' % current_parent)
LOG.debug(f'unresolved controllers > {unresolved_controllers}')
LOG.debug(f'current parent > {current_parent}')
# handle all controllers nested on parent
current_children = []
resolved_child_controllers = []
current_children: List[ArgparseController] = []
resolved_child_controllers: List[ArgparseController] = []
for contr in list(unresolved_controllers):
# if stacked_on is the current parent, we want to process
# its children in this run first
@ -418,11 +445,11 @@ class ArgparseController(ControllerHandler):
self._controllers = resolved_controllers
self._controllers_map = resolved_controllers_map
def _process_parsed_arguments(self):
def _process_parsed_arguments(self) -> None:
pass
def _get_subparser_options(self, contr):
kwargs = contr._meta.subparser_options.copy()
def _get_subparser_options(self, contr: ArgparseController) -> Dict[str, Any]:
kwargs: Dict[str, Any] = contr._meta.subparser_options.copy()
if 'title' not in kwargs.keys():
kwargs['title'] = contr._meta.title
@ -431,8 +458,8 @@ class ArgparseController(ControllerHandler):
return kwargs
def _get_parser_options(self, contr):
kwargs = contr._meta.parser_options.copy()
def _get_parser_options(self, contr: ArgparseController) -> Dict[str, Any]:
kwargs: Dict[str, Any] = contr._meta.parser_options.copy()
if 'aliases' not in kwargs.keys():
kwargs['aliases'] = contr._meta.aliases
@ -454,13 +481,13 @@ class ArgparseController(ControllerHandler):
return kwargs
def _get_command_parser_options(self, command):
kwargs = command['parser_options'].copy()
def _get_command_parser_options(self, command: CommandMeta) -> Dict[str, Any]:
kwargs: Dict[str, Any] = command.parser_options.copy()
contr = command['controller']
contr = command.controller
hide_it = False
if command['hide'] is True:
if command.hide is True:
hide_it = True
# only hide commands from embedded controllers if the controller is
@ -475,13 +502,13 @@ class ArgparseController(ControllerHandler):
return kwargs
def _setup_parsers(self):
def _setup_parsers(self) -> None:
# this should only be run by the base controller
from cement.utils.misc import rando
_rando = rando()[:12]
self._dispatch_option = '--dispatch-%s' % _rando
self._controller_option = '--controller-namespace-%s' % _rando
self._dispatch_option = f'--dispatch-{_rando}'
self._controller_option = f'--controller-namespace-{_rando}'
# parents are sub-parser namespaces (that we can add subparsers to)
# whereas parsers are the actual root parser and sub-parsers to
@ -572,12 +599,12 @@ class ArgparseController(ControllerHandler):
elif stacked_type == 'embedded':
# if it's embedded, then just set it to use the same as the
# controller its stacked on
# controller it's stacked on
parents[label] = parents[stacked_on]
parsers[label] = parsers[stacked_on]
contr._parser = parsers[stacked_on]
def _get_parser_by_controller(self, controller):
def _get_parser_by_controller(self, controller: ArgparseController) -> ArgumentParser:
if controller._meta.stacked_type == 'embedded':
parser = self._get_parser(controller._meta.stacked_on)
else:
@ -585,7 +612,7 @@ class ArgparseController(ControllerHandler):
return parser
def _get_parser_parent_by_controller(self, controller):
def _get_parser_parent_by_controller(self, controller: ArgparseController) -> ArgumentParser:
if controller._meta.stacked_type == 'embedded':
parent = self._get_parser_parent(controller._meta.stacked_on)
else:
@ -593,45 +620,44 @@ class ArgparseController(ControllerHandler):
return parent
def _get_parser_parent(self, label):
return self._sub_parser_parents[label]
def _get_parser_parent(self, label: str) -> ArgumentParser:
return self._sub_parser_parents[label] # type: ignore
def _get_parser(self, label):
return self._sub_parsers[label]
def _get_parser(self, label: str) -> ArgumentParser:
return self._sub_parsers[label] # type: ignore
def _process_arguments(self, controller):
def _process_arguments(self, controller: ArgparseController) -> None:
label = controller._meta.label
LOG.debug("processing arguments for '%s' " % label +
LOG.debug(f"processing arguments for '{label}' " +
"controller namespace")
parser = self._get_parser_by_controller(controller)
arguments = controller._collect_arguments()
for arg, kw in arguments:
LOG.debug('adding argument (args=%s, kwargs=%s)' % (arg, kw))
LOG.debug(f'adding argument (args={arg}, kwargs={kw})')
parser.add_argument(*arg, **kw)
def _process_commands(self, controller):
def _process_commands(self, controller: ArgparseController) -> None:
label = controller._meta.label
LOG.debug("processing commands for '%s' " % label +
LOG.debug(f"processing commands for '{label}' " +
"controller namespace")
commands = controller._collect_commands()
for command in commands:
kwargs = self._get_command_parser_options(command)
func_name = command['func_name']
LOG.debug("adding command '%s' " % command['label'] +
"(controller=%s, func=%s)" %
(controller._meta.label, func_name))
func_name = command.func_name
LOG.debug(f"adding command '{command.label}' " +
f"(controller={controller._meta.label}, func={func_name})")
cmd_parent = self._get_parser_parent_by_controller(controller)
command_parser = cmd_parent.add_parser(command['label'], **kwargs)
command_parser = cmd_parent.add_parser(command.label, **kwargs)
# add an invisible dispatch option so we can figure out what to
# call later in self._dispatch
default_contr_func = "%s.%s" % (command['controller']._meta.label,
command['func_name'])
default_contr_func = "%s.%s" % (command.controller._meta.label,
command.func_name)
command_parser.add_argument(self._dispatch_option,
action='store',
default=default_contr_func,
@ -640,26 +666,25 @@ class ArgparseController(ControllerHandler):
)
# add additional arguments to the sub-command namespace
LOG.debug("processing arguments for '%s' " % command['label'] +
LOG.debug(f"processing arguments for '{command.label}' " +
"command namespace")
for arg, kw in command['arguments']:
LOG.debug('adding argument (args=%s, kwargs=%s)' %
(arg, kw))
for arg, kw in command.arguments:
LOG.debug(f'adding argument (args={arg}, kwargs={kw})')
command_parser.add_argument(*arg, **kw)
def _collect(self):
def _collect(self) -> Tuple[List[ArgparseArgumentType], List[CommandMeta]]:
arguments = self._collect_arguments()
commands = self._collect_commands()
return (arguments, commands)
def _collect_arguments(self):
LOG.debug("collecting arguments from %s " % self +
def _collect_arguments(self) -> List[ArgparseArgumentType]:
LOG.debug(f"collecting arguments from {self} " +
"(stacked_on='%s', stacked_type='%s')" %
(self._meta.stacked_on, self._meta.stacked_type))
return self._meta.arguments
return self._meta.arguments # type: ignore
def _collect_commands(self):
LOG.debug("collecting commands from %s " % self +
def _collect_commands(self) -> List[CommandMeta]:
LOG.debug(f"collecting commands from {self} " +
"(stacked_on='%s', stacked_type='%s')" %
(self._meta.stacked_on, self._meta.stacked_type))
@ -668,13 +693,13 @@ class ArgparseController(ControllerHandler):
if member.startswith('_'):
continue
elif hasattr(getattr(self, member), '__cement_meta__'):
func = getattr(self.__class__, member).__cement_meta__
func['controller'] = self
func: CommandMeta = getattr(self.__class__, member).__cement_meta__
func.controller = self
commands.append(func)
return commands
def _get_exposed_commands(self):
def _get_exposed_commands(self) -> List[str]:
"""
Get a list of exposed commands for this controller
@ -690,7 +715,7 @@ class ArgparseController(ControllerHandler):
exposed.append(_clean_label(member_key))
return exposed
def _pre_argument_parsing(self):
def _pre_argument_parsing(self) -> None:
"""
Called on every controller just before arguments are parsed.
Provides an alternative means of adding arguments to the controller,
@ -717,7 +742,7 @@ class ArgparseController(ControllerHandler):
"""
pass
def _post_argument_parsing(self):
def _post_argument_parsing(self) -> None:
"""
Called on every controller just after arguments are parsed (assuming
that the parser hasn't thrown an exception). Provides an alternative
@ -760,8 +785,8 @@ class ArgparseController(ControllerHandler):
"""
pass
def _dispatch(self):
LOG.debug("controller dispatch passed off to %s" % self)
def _dispatch(self) -> Any:
LOG.debug(f"controller dispatch passed off to {self}")
self._setup_controllers()
self._setup_parsers()
@ -787,7 +812,7 @@ class ArgparseController(ControllerHandler):
# if no __dispatch__ is set then that means we have hit a
# controller with no sub-command (argparse doesn't support
# default sub-command yet... so we rely on
# __controller_namespace__ and it's default func
# __controller_namespace__ and its default func
# We never get here on Python < 3 as Argparse would have already
# complained about too few arguments
@ -815,9 +840,9 @@ class ArgparseController(ControllerHandler):
# We never get here on Python < 3 as Argparse would have already
# complained about too few arguments
raise FrameworkError( # pragma: nocover
"Controller function does not exist %s.%s()" % \
"Controller function does not exist %s.%s()" %
(contr.__class__.__name__, func_name)) # pragma: nocover
def load(app):
def load(app: App) -> None:
app.handler.register(ArgparseArgumentHandler)


@ -11,13 +11,18 @@ extensions.
dependencies.
"""
from __future__ import annotations
import os
import sys
import logging
from typing import TYPE_CHECKING
from colorlog import ColoredFormatter
from ..ext.ext_logging import LoggingLogHandler
from ..utils.misc import is_true
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
class ColorLogHandler(LoggingLogHandler):
@ -28,7 +33,7 @@ class ColorLogHandler(LoggingLogHandler):
console output using the
`ColorLog <https://pypi.python.org/pypi/colorlog>`_ library.
"""
class Meta:
class Meta(LoggingLogHandler.Meta):
"""Handler meta-data."""
@ -65,7 +70,9 @@ class ColorLogHandler(LoggingLogHandler):
#: Formatter class to use for colorized logging
formatter_class = ColoredFormatter
def _get_console_format(self):
_meta: Meta
def _get_console_format(self) -> str:
format = super(ColorLogHandler, self)._get_console_format()
colorize = self.app.config.get(self._meta.config_section,
'colorize_console_log')
@ -74,7 +81,7 @@ class ColorLogHandler(LoggingLogHandler):
format = "%(log_color)s" + format
return format
def _get_file_format(self):
def _get_file_format(self) -> str:
format = super(ColorLogHandler, self)._get_file_format()
colorize = self.app.config.get(self._meta.config_section,
'colorize_file_log')
@ -82,9 +89,10 @@ class ColorLogHandler(LoggingLogHandler):
format = "%(log_color)s" + format
return format
def _get_console_formatter(self, format):
def _get_console_formatter(self, format: str) -> logging.Formatter:
colorize = self.app.config.get(self._meta.config_section,
'colorize_console_log')
formatter: logging.Formatter
if sys.stdout.isatty() or 'CEMENT_TEST' in os.environ:
if is_true(colorize):
formatter = self._meta.formatter_class(
@ -92,18 +100,17 @@ class ColorLogHandler(LoggingLogHandler):
log_colors=self._meta.colors
)
else:
formatter = self._meta.formatter_class_without_color(
format
)
formatter = self._meta.formatter_class_without_color(format)
else:
klass = self._meta.formatter_class_without_color # pragma: nocover
formatter = klass(format) # pragma: nocover
formatter = klass(format) # pragma: nocover
return formatter
def _get_file_formatter(self, format):
def _get_file_formatter(self, format: str) -> logging.Formatter:
colorize = self.app.config.get(self._meta.config_section,
'colorize_file_log')
formatter: logging.Formatter
if is_true(colorize):
formatter = self._meta.formatter_class(
format,
@ -115,5 +122,5 @@ class ColorLogHandler(LoggingLogHandler):
return formatter
def load(app):
def load(app: App) -> None:
app.handler.register(ColorLogHandler)
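The formatter selection in ``_get_console_formatter`` boils down to one gate: colorize only when the config flag is on and stdout is a TTY. A stdlib-only sketch of that gate (plain ``logging.Formatter`` stands in for colorlog's ``ColoredFormatter`` here):

```python
import logging
import sys

def pick_console_formatter(fmt: str, colorize: bool) -> logging.Formatter:
    # Only prepend the color directive when both conditions hold;
    # otherwise fall back to an uncolored formatter.
    if colorize and sys.stdout.isatty():
        return logging.Formatter("%(log_color)s" + fmt)
    return logging.Formatter(fmt)
```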


@ -2,12 +2,17 @@
Cement configparser extension module.
"""
from __future__ import annotations
import os
import re
from typing import Any, Dict, List, TYPE_CHECKING
from ..core import config
from ..utils.misc import minimal_logger
from configparser import RawConfigParser
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
@ -24,14 +29,14 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
Additional arguments and keyword arguments are passed directly to
RawConfigParser on initialization.
"""
class Meta:
class Meta(config.ConfigHandler.Meta):
"""Handler meta-data."""
label = 'configparser'
"""The string identifier of this handler."""
def merge(self, dict_obj, override=True):
def merge(self, dict_obj: dict, override: bool = True) -> None:
"""
Merge a dictionary into our config. If override is True then
existing config values are overridden by those passed in.
@ -63,7 +68,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
# we don't support nested config blocks, so no need to go
# further down to more nested dicts.
def _parse_file(self, file_path):
def _parse_file(self, file_path: str) -> bool:
"""
Parse a configuration file at ``file_path`` and store it.
@ -80,7 +85,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
# will likely raise an exception anyhow.
return True
def keys(self, section):
def keys(self, section: str) -> List[str]: # type: ignore
"""
Return a list of keys within ``section``.
@ -93,7 +98,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
"""
return self.options(section)
def get_dict(self):
def get_dict(self) -> Dict[str, Any]:
"""
Return a dict of the entire configuration.
@ -105,7 +110,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
_config[section] = self.get_section_dict(section)
return _config
def get_sections(self):
def get_sections(self) -> List[str]:
"""
Return a list of configuration sections.
@ -115,7 +120,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
"""
return self.sections()
def get_section_dict(self, section):
def get_section_dict(self, section: str) -> Dict[str, Any]:
"""
Return a dict representation of a section.
@ -131,7 +136,7 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
dict_obj[key] = self.get(section, key)
return dict_obj
def add_section(self, section):
def add_section(self, section: str) -> None:
"""
Adds a block section to the config.
@ -141,18 +146,30 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
"""
return RawConfigParser.add_section(self, section)
def _get_env_var(self, section, key):
def _get_env_var(self, section: str, key: str) -> str:
if section == self.app._meta.config_section:
env_var = "%s_%s" % (self.app._meta.config_section, key)
env_var = f"{self.app._meta.config_section}_{key}"
else:
env_var = "%s_%s_%s" % (
self.app._meta.config_section, section, key)
env_var = f"{self.app._meta.config_section}_{section}_{key}"
env_var = env_var.upper()
env_var = re.sub('[^0-9a-zA-Z_]+', '_', env_var)
return env_var
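The naming scheme in ``_get_env_var`` is worth spelling out: ``<APP>_<SECTION>_<KEY>`` (the section is dropped when it equals the app's own config section), uppercased, with any run of non-alphanumeric characters collapsed to a single underscore. A standalone sketch of just that derivation:

```python
import re

def env_var_name(app_section: str, section: str, key: str) -> str:
    # Skip the section component when it is the app's own section.
    if section == app_section:
        name = f"{app_section}_{key}"
    else:
        name = f"{app_section}_{section}_{key}"
    # Uppercase, then collapse anything non-alphanumeric to '_'.
    return re.sub('[^0-9a-zA-Z_]+', '_', name.upper())
```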
def get(self, section, key, **kwargs):
def get(self, section: str, key: str, **kwargs: Any) -> str: # type: ignore
"""
Get a config value for a given ``section``, and ``key``.
Args:
section (str): The section that the key exists.
key (str): The key of the configuration item.
Keyword Args:
kwargs (dict): Passed on to the backend config parser (super class).
Returns:
value (unknown): Returns the value of the key in the configuration section.
"""
env_var = self._get_env_var(section, key)
if env_var in os.environ.keys():
@ -160,12 +177,31 @@ class ConfigParserConfigHandler(config.ConfigHandler, RawConfigParser):
else:
return RawConfigParser.get(self, section, key, **kwargs)
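The overridden ``get()`` means an environment variable, when present, silently wins over the parsed file value. A minimal sketch of that behavior (the simplified ``SECTION_KEY`` naming here omits the app prefix used by the real handler):

```python
import os
from configparser import RawConfigParser

class EnvOverrideParser(RawConfigParser):
    # Check the environment first; fall back to the parsed config.
    def get(self, section, key, **kwargs):
        env_var = f"{section}_{key}".upper()
        if env_var in os.environ:
            return os.environ[env_var]
        return super().get(section, key, **kwargs)
```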
def has_section(self, section):
def has_section(self, section: str) -> bool:
"""
Test whether the section exists
Args:
section (str): The section to test.
Returns:
bool: ``True`` if the section exists, ``False`` otherwise.
"""
return RawConfigParser.has_section(self, section)
def set(self, section, key, value):
return RawConfigParser.set(self, section, key, value)
def set(self, section: str, key: str, value: Any) -> None: # type: ignore
"""
Set the value of ``key`` in ``section``.
Args:
section (str): The section that the key exists.
key (str): The key of the configuration item inside ``section``.
value (unknown): The value to set to ``key``.
Returns: None
"""
RawConfigParser.set(self, section, key, value)
def load(app):
def load(app: App) -> None:
app.handler.register(ConfigParserConfigHandler)


@ -2,18 +2,23 @@
Cement daemon extension module.
"""
from __future__ import annotations
import os
import sys
import io
import pwd
import grp
from typing import Any, Dict, TYPE_CHECKING
from ..core import exc
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
CEMENT_DAEMON_ENV = None
CEMENT_DAEMON_APP = None
CEMENT_DAEMON_APP: App = None # type: ignore
class Environment(object):
@ -38,7 +43,7 @@ class Environment(object):
"""
def __init__(self, **kw):
def __init__(self, **kw: Any) -> None:
self.stdin = kw.get('stdin', '/dev/null')
self.stdout = kw.get('stdout', '/dev/null')
self.stderr = kw.get('stderr', '/dev/null')
@ -55,23 +60,21 @@ class Environment(object):
try:
self.user = pwd.getpwnam(self.user)
except KeyError:
raise exc.FrameworkError("Daemon user '%s' doesn't exist." %
self.user)
raise exc.FrameworkError(f"Daemon user '{self.user}' doesn't exist.")
try:
self.group = kw.get('group',
grp.getgrgid(self.user.pw_gid).gr_name)
self.group = grp.getgrnam(self.group)
except KeyError:
raise exc.FrameworkError("Daemon group '%s' doesn't exist." %
self.group)
raise exc.FrameworkError(f"Daemon group '{self.group}' doesn't exist.")
def _write_pid_file(self):
def _write_pid_file(self) -> None:
"""
Writes ``os.getpid()`` out to ``self.pid_file``.
"""
pid = str(os.getpid())
LOG.debug('writing pid (%s) out to %s' % (pid, self.pid_file))
LOG.debug(f'writing pid ({pid}) out to {self.pid_file}')
# setup pid
if self.pid_file:
@ -81,7 +84,7 @@ class Environment(object):
os.chown(self.pid_file, self.user.pw_uid, self.group.gr_gid)
def switch(self):
def switch(self) -> None:
"""
Switch the current process's user/group to ``self.user``, and
``self.group``. Change directory to ``self.dir``, and write the
@ -95,12 +98,11 @@ class Environment(object):
os.environ['HOME'] = self.user.pw_dir
os.chdir(self.dir)
if self.pid_file and os.path.exists(self.pid_file):
raise exc.FrameworkError("Process already running (%s)" %
self.pid_file)
raise exc.FrameworkError(f"Process already running ({self.pid_file})")
else:
self._write_pid_file()
def daemonize(self): # pragma: no cover
def daemonize(self) -> None: # pragma: no cover
"""
Fork the current process into a daemon.
@ -124,7 +126,7 @@ class Environment(object):
os._exit(os.EX_OK)
except OSError as e:
sys.stderr.write("Fork #1 failed: (%d) %s\n" %
(e.errno, e.strerror))
(e.errno, e.strerror)) # type: ignore
sys.exit(1)
# Decouple from parent environment.
@ -140,7 +142,7 @@ class Environment(object):
os._exit(os.EX_OK)
except OSError as e:
sys.stderr.write("Fork #2 failed: (%d) %s\n" %
(e.errno, e.strerror))
(e.errno, e.strerror)) # type: ignore
sys.exit(1)
# Redirect standard file descriptors.
@ -171,7 +173,7 @@ class Environment(object):
self._write_pid_file()
def daemonize(): # pragma: no cover
def daemonize() -> None: # pragma: no cover
"""
This function switches the running user/group to that configured in
``config['daemon']['user']`` and ``config['daemon']['group']``. The
@ -209,7 +211,7 @@ def daemonize(): # pragma: no cover
CEMENT_DAEMON_ENV.daemonize()
def extend_app(app):
def extend_app(app: App) -> None:
"""
Adds the ``--daemon`` argument to the argument object, and sets the
default ``[daemon]`` config section options.
@ -225,7 +227,7 @@ def extend_app(app):
user = pwd.getpwuid(os.getuid())
group = grp.getgrgid(user.pw_gid)
defaults = dict()
defaults: Dict[str, Any] = dict()
defaults['daemon'] = dict()
defaults['daemon']['user'] = user.pw_name
defaults['daemon']['group'] = group.gr_name
@ -236,7 +238,7 @@ def extend_app(app):
app.extend('daemonize', daemonize)
def cleanup(app): # pragma: no cover
def cleanup(app: App) -> None: # pragma: no cover
"""
After application run time, this hook just attempts to clean up the
pid_file if one was set and it exists.
@ -254,6 +256,6 @@ def cleanup(app): # pragma: no cover
os.remove(CEMENT_DAEMON_ENV.pid_file)
def load(app):
def load(app: App) -> None:
app.hook.register('post_setup', extend_app)
app.hook.register('pre_close', cleanup)
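The `switch()` hunk above refuses to start when a pid file already exists, then records the current pid via `_write_pid_file()`. A minimal sketch of that single-instance guard (the `ensure_single_instance` helper name is hypothetical, not Cement's API):

```python
import os
import tempfile

def write_pid_file(pid_file: str) -> None:
    # Mirrors Environment._write_pid_file: record our pid for later cleanup
    with open(pid_file, 'w') as f:
        f.write(str(os.getpid()))

def ensure_single_instance(pid_file: str) -> None:
    # Mirrors the check in Environment.switch(): refuse to start twice
    if os.path.exists(pid_file):
        raise RuntimeError(f"Process already running ({pid_file})")
    write_pid_file(pid_file)

pid_file = os.path.join(tempfile.mkdtemp(), 'app.pid')
ensure_single_instance(pid_file)          # first start succeeds
print(open(pid_file).read())              # pid recorded on disk
```

The `cleanup` hook then only has to remove `pid_file` at `pre_close` time, which is why the check above can rely on file existence alone.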

View File

@ -2,11 +2,16 @@
Cement dummy extension module.
"""
from __future__ import annotations
from typing import Any, Dict, List, Optional, Union, TYPE_CHECKING
from ..core.output import OutputHandler
from ..core.template import TemplateHandler
from ..core.mail import MailHandler
from ..utils.misc import minimal_logger
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
@ -18,7 +23,7 @@ class DummyOutputHandler(OutputHandler):
any parameters on initialization, and does not actually output anything.
"""
class Meta:
class Meta(OutputHandler.Meta):
"""Handler meta-data"""
@ -29,7 +34,7 @@ class DummyOutputHandler(OutputHandler):
#: to override the ``output_handler`` via command line options.
overridable = False
def render(self, data, template=None, **kw):
def render(self, data: Dict[str, Any], **kw: Any) -> None:
"""
This implementation does not actually render anything to output, but
rather logs it to the debug facility.
@ -43,7 +48,7 @@ class DummyOutputHandler(OutputHandler):
"""
LOG.debug("not rendering any output to console")
LOG.debug("DATA: %s" % data)
LOG.debug(f"DATA: {data}")
return None
@ -56,14 +61,14 @@ class DummyTemplateHandler(TemplateHandler):
anything.
"""
class Meta:
class Meta(TemplateHandler.Meta):
"""Handler meta-data"""
#: The string identifier of this handler.
label = 'dummy'
def render(self, content, data, *args, **kw):
def render(self, content: Union[str, bytes], data: Dict[str, Any]) -> None:
"""
This implementation does not actually render anything, but
rather logs it to the debug facility.
@ -73,11 +78,17 @@ class DummyTemplateHandler(TemplateHandler):
data (dict): The data dictionary to render.
"""
LOG.debug("CONTENT: %s" % content)
LOG.debug("DATA: %s" % data)
LOG.debug(f"CONTENT: {str(content)}")
LOG.debug(f"DATA: {data}")
return None
def copy(self, src, dest, data):
def copy(self,
src: str,
dest: str,
data: Dict[str, Any],
force: bool = False,
exclude: Optional[List[str]] = None,
ignore: Optional[List[str]] = None) -> bool:
"""
This implementation does not actually copy anything, but rather logs it
to the debug facility.
@ -87,7 +98,8 @@ class DummyTemplateHandler(TemplateHandler):
dest (str): The destination directory.
data (dict): The data dictionary to render with templates.
"""
LOG.debug("COPY: %s -> %s" % (src, dest))
LOG.debug(f"COPY: {src} -> {dest}")
return True
class DummyMailHandler(MailHandler):
@ -185,14 +197,14 @@ class DummyMailHandler(MailHandler):
"""
class Meta:
class Meta(MailHandler.Meta):
"""Handler meta-data."""
#: Unique identifier for this handler
label = 'dummy'
def _get_params(self, **kw):
def _get_params(self, **kw: Any) -> Dict[str, Any]:
params = dict()
for item in ['to', 'from_addr', 'cc', 'bcc', 'subject']:
config_item = self.app.config.get(self._meta.config_section, item)
@ -206,14 +218,14 @@ class DummyMailHandler(MailHandler):
return params
def send(self, body, **kw):
def send(self, body: str, **kw: Any) -> bool:
"""
Mimic sending an email message, but really just print what would be
sent to console. Keyword arguments override configuration
defaults (cc, bcc, etc).
Args:
body: The message body to send
body (str): The message body to send
Keyword Args:
to (list): List of recipients (generally email addresses)
@ -246,16 +258,15 @@ class DummyMailHandler(MailHandler):
msg = "\n" + "=" * 77 + "\n"
msg += "DUMMY MAIL MESSAGE\n"
msg += "-" * 77 + "\n\n"
msg += "To: %s\n" % ', '.join(params['to'])
msg += "From: %s\n" % params['from_addr']
msg += "CC: %s\n" % ', '.join(params['cc'])
msg += "BCC: %s\n" % ', '.join(params['bcc'])
msg += f"To: {', '.join(params['to'])}\n"
msg += f"From: {params['from_addr']}\n"
msg += f"CC: {', '.join(params['cc'])}\n"
msg += f"BCC: {', '.join(params['bcc'])}\n"
if params['subject_prefix'] not in [None, '']:
msg += "Subject: %s %s\n\n---\n\n" % (params['subject_prefix'],
params['subject'])
msg += f"Subject: {params['subject_prefix']} {params['subject']}\n\n---\n\n"
else:
msg += "Subject: %s\n\n---\n\n" % params['subject']
msg += f"Subject: {params['subject']}\n\n---\n\n"
msg += body + "\n"
msg += "\n" + "-" * 77 + "\n"
@ -264,7 +275,7 @@ class DummyMailHandler(MailHandler):
return True
def load(app):
def load(app: App) -> None:
app.handler.register(DummyOutputHandler)
app.handler.register(DummyTemplateHandler)
app.handler.register(DummyMailHandler)
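`_get_params()` above layers per-call keyword arguments over configuration defaults before the message preview is built. A small sketch of that merge pattern (the `merge_params` helper is illustrative, not Cement's API):

```python
from typing import Any, Dict

def merge_params(config: Dict[str, Any], **kw: Any) -> Dict[str, Any]:
    # Start from config defaults, then let per-call keyword arguments win
    params = dict(config)
    params.update({k: v for k, v in kw.items() if v is not None})
    return params

defaults = {'to': [], 'from_addr': 'noreply@example.com', 'subject': None}
params = merge_params(defaults, to=['me@example.com'], subject='hello')
print(params['to'], params['from_addr'])
```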

View File

@ -2,30 +2,35 @@
Cement generate extension module.
"""
from __future__ import annotations
import re
import os
import inspect
import yaml
import yaml # type: ignore
import shutil
from typing import Any, Callable, Dict, TYPE_CHECKING
from .. import Controller, minimal_logger, shell
from ..utils.version import VERSION, get_version
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
class GenerateTemplateAbstractBase(Controller):
class Meta:
class Meta(Controller.Meta):
pass
def _generate(self, source, dest):
msg = 'Generating %s %s in %s' % (
self.app._meta.label, self._meta.label, dest
)
_meta: Meta # type: ignore
def _generate(self, source: str, dest: str) -> None:
msg = f'Generating {self.app._meta.label} {self._meta.label} in {dest}'
self.app.log.info(msg)
data = {}
data: Dict[str, Dict[str, Any]] = {}
# builtin vars
maj_min = float('%s.%s' % (VERSION[0], VERSION[1]))
maj_min = float(f'{VERSION[0]}.{VERSION[1]}')
data['cement'] = {}
data['cement']['version'] = get_version()
data['cement']['major_version'] = VERSION[0]
@ -33,7 +38,7 @@ class GenerateTemplateAbstractBase(Controller):
data['cement']['major_minor_version'] = maj_min
f = open(os.path.join(source, '.generate.yml'))
yaml_load = yaml.full_load if hasattr(yaml, 'full_load') else yaml.load
yaml_load: Callable = yaml.full_load if hasattr(yaml, 'full_load') else yaml.load
g_config = yaml_load(f)
f.close()
@ -46,7 +51,7 @@ class GenerateTemplateAbstractBase(Controller):
self._meta.label
ignore_list.append(g_config_yml)
var_defaults = {
var_defaults: Dict = {
'name': None,
'prompt': None,
'validate': None,
@ -59,14 +64,14 @@ class GenerateTemplateAbstractBase(Controller):
var.update(defined_var)
for key in ['name', 'prompt']:
assert var[key] is not None, \
"Required generate config key missing: %s" % key
f"Required generate config key missing: {key}"
val = None
val: Any = None
if var['default'] is not None and self.app.pargs.defaults:
val = var['default']
elif var['default'] is not None:
default_text = ' [%s]' % var['default']
default_text = f" [{var['default']}]"
else:
default_text = '' # pragma: nocover
@ -74,7 +79,7 @@ class GenerateTemplateAbstractBase(Controller):
if val is None:
class MyPrompt(shell.Prompt):
class Meta:
text = "%s%s:" % (var['prompt'], default_text)
text = f"{var['prompt']}{default_text}:"
default = var.get('default', None)
p = MyPrompt()
@ -85,13 +90,13 @@ class GenerateTemplateAbstractBase(Controller):
elif var['case'] is not None:
self.app.log.warning(
"Invalid configuration for variable " +
"'%s': " % var['name'] +
f"'{var['name']}': " +
"case must be one of lower, upper, or title."
)
if var['validate'] is not None:
assert re.match(var['validate'], val), \
"Invalid Response (must match: '%s')" % var['validate']
f"Invalid Response (must match: '{var['validate']}')"
data[var['name']] = val
@ -106,21 +111,19 @@ class GenerateTemplateAbstractBase(Controller):
else:
raise # pragma: nocover
def _clone(self, source, dest):
msg = 'Cloning %s %s template to %s' % (
self.app._meta.label, self._meta.label, dest
)
def _clone(self, source: str, dest: str) -> None:
msg = f'Cloning {self.app._meta.label} {self._meta.label} template to {dest}'
self.app.log.info(msg)
if os.path.exists(dest) and self.app.pargs.force is True:
shutil.rmtree(dest)
elif os.path.exists(dest):
msg = "Destination path already exists: %s (try: --force)" % dest
msg = f"Destination path already exists: {dest} (try: --force)"
raise AssertionError(msg)
shutil.copytree(source, dest)
def _default(self):
def _default(self) -> None:
source = self._meta.source_path
dest = self.app.pargs.dest
@ -130,7 +133,7 @@ class GenerateTemplateAbstractBase(Controller):
self._generate(source, dest)
def setup_template_items(app):
def setup_template_items(app: App) -> None:
template_dirs = []
template_items = []
@ -140,12 +143,12 @@ def setup_template_items(app):
if os.path.exists(subpath) and subpath not in template_dirs:
template_dirs.append(subpath)
# use app template module, find it's path on filesystem
# use app template module, find its path on filesystem
if app._meta.template_module is not None:
mod_parts = app._meta.template_module.split('.')
mod = mod_parts.pop()
mod_name = mod_parts.pop()
try:
mod = app.__import__(mod, from_module='.'.join(mod_parts))
mod = app.__import__(mod_name, from_module='.'.join(mod_parts))
mod_path = os.path.dirname(inspect.getfile(mod))
subpath = os.path.join(mod_path, 'generate')
@ -155,7 +158,7 @@ def setup_template_items(app):
# FIXME: not exactly sure how to test for this so not covering
except AttributeError: # pragma: nocover
msg = 'unable to load template module' + \
'%s from %s' % (mod, '.'.join(mod_parts)) # pragma: nocover
f"{mod} from {'.'.join(mod_parts)}" # pragma: nocover
app.log.debug(msg) # pragma: nocover
for path in template_dirs:
@ -168,7 +171,7 @@ def setup_template_items(app):
label = item
stacked_on = 'generate'
stacked_type = 'nested'
help = 'generate %s from template' % item
help = f'generate {item} from template'
arguments = [
# ------------------------------------------------------
(['dest'],
@ -195,19 +198,21 @@ def setup_template_items(app):
class Generate(Controller):
class Meta:
class Meta(Controller.Meta):
label = 'generate'
stacked_on = 'base'
stacked_type = 'nested'
config_section = 'generate'
def _setup(self, app):
_meta: Meta # type: ignore
def _setup(self, app: App) -> None:
super(Generate, self)._setup(app)
def _default(self):
def _default(self) -> None:
self._parser.print_help()
def load(app):
def load(app: App) -> None:
app.handler.register(Generate)
app.hook.register('pre_run', setup_template_items)
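The template-variable loop above applies an optional case transform and a regex validation to each prompted value. A standalone sketch of those two steps (helper names are hypothetical; the assertion message mirrors the one in the diff):

```python
import re
from typing import Optional

def apply_case(val: str, case: Optional[str]) -> str:
    # Mirrors the case handling in _generate: lower, upper, or title
    if case in ('lower', 'upper', 'title'):
        return getattr(val, case)()
    return val  # None or invalid case: value passes through unchanged

def validate_var(val: str, pattern: Optional[str]) -> None:
    # Mirrors the validate step: the response must match the regex
    if pattern is not None:
        assert re.match(pattern, val), \
            f"Invalid Response (must match: '{pattern}')"

name = apply_case('MyApp', 'lower')
validate_var(name, r'^[a-z][a-z0-9_]+$')
print(name)
```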

View File

@ -11,11 +11,16 @@ extensions.
dependencies.
"""
from __future__ import annotations
from typing import Any, Optional, Dict, Tuple, Union, TYPE_CHECKING
from ..core.output import OutputHandler
from ..core.template import TemplateHandler
from ..utils.misc import minimal_logger
from jinja2 import Environment, FileSystemLoader, PackageLoader
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
@ -29,24 +34,26 @@ class Jinja2OutputHandler(OutputHandler):
Please see the developer documentation on
:cement:`Output Handling <dev/output>`.
This class has an assumed dependency on its associated Jinja2TemplateHandler.
If sub-classing, you must also sub-class/implement the Jinja2TemplateHandler
and give it the same label.
"""
class Meta:
class Meta(OutputHandler.Meta):
"""Handler meta-data."""
label = 'jinja2'
def __init__(self, *args, **kw):
def __init__(self, *args: Any, **kw: Any) -> None:
super(Jinja2OutputHandler, self).__init__(*args, **kw)
self.templater = None
self.templater: TemplateHandler = None # type: ignore
def _setup(self, app):
def _setup(self, app: App) -> None:
super(Jinja2OutputHandler, self)._setup(app)
self.templater = self.app.handler.resolve('template', 'jinja2',
setup=True)
self.templater = self.app.handler.resolve('template', self._meta.label, setup=True) # type: ignore
def render(self, data, template=None, **kw):
def render(self, data: Dict[str, Any], template: str = None, **kw: Any) -> str: # type: ignore
"""
Take a data dictionary and render it using the given template file.
Additional keyword arguments are ignored.
@ -64,9 +71,9 @@ class Jinja2OutputHandler(OutputHandler):
"""
LOG.debug("rendering content using '%s' as a template." % template)
LOG.debug(f"rendering content using '{template}' as a template.")
content, _type, _path = self.templater.load(template)
return self.templater.render(content, data)
return self.templater.render(content, data) # type: ignore
class Jinja2TemplateHandler(TemplateHandler):
@ -80,20 +87,20 @@ class Jinja2TemplateHandler(TemplateHandler):
:cement:`Template Handling <dev/template>`.
"""
class Meta:
class Meta(TemplateHandler.Meta):
"""Handler meta-data."""
label = 'jinja2'
def __init__(self, *args, **kw):
def __init__(self, *args: Any, **kw: Any) -> None:
super(Jinja2TemplateHandler, self).__init__(*args, **kw)
# expose Jinja2 Environment instance so that we can manipulate it
# higher in application code if necessary
self.env = Environment(keep_trailing_newline=True)
def load(self, *args, **kw):
def load(self, *args: Any, **kw: Any) -> Tuple[Union[str, bytes], str, Optional[str]]:
"""
Loads a template file first from ``self.app._meta.template_dirs`` and
secondly from ``self.app._meta.template_module``. The
@ -113,18 +120,21 @@ class Jinja2TemplateHandler(TemplateHandler):
cement.core.exc.FrameworkError: If the template does not exist in
either the ``template_module`` or ``template_dirs``.
"""
content, _type, _path = super(Jinja2TemplateHandler, self).load(*args,
**kw)
content, _type, _path = super(Jinja2TemplateHandler, self).load(*args, **kw)
if _type == 'directory':
self.env.loader = FileSystemLoader(self.app._meta.template_dirs)
elif _type == 'module':
parts = self.app._meta.template_module.rsplit('.', 1)
parts = self.app._meta.template_module.rsplit('.', 1) # type: ignore
self.env.loader = PackageLoader(parts[0], package_path=parts[1])
return content, _type, _path
def render(self, content, data, *args, **kw):
def render(self,
content: Union[str, bytes],
data: Dict[str, Any],
*args: Any,
**kw: Any) -> str:
"""
Render the given ``content`` as template with the ``data`` dictionary.
@ -136,7 +146,7 @@ class Jinja2TemplateHandler(TemplateHandler):
str: The rendered template text
"""
LOG.debug("rendering content as text via %s" % self.__module__)
LOG.debug(f"rendering content as text via {self.__module__}")
if not isinstance(content, str):
content = content.decode('utf-8')
@ -146,6 +156,6 @@ class Jinja2TemplateHandler(TemplateHandler):
return res
def load(app):
def load(app: App) -> None:
app.handler.register(Jinja2OutputHandler)
app.handler.register(Jinja2TemplateHandler)
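`Jinja2TemplateHandler.render()` above decodes bytes content and renders it through the handler's `Environment`. A minimal sketch of the same flow, assuming Jinja2 is installed (`keep_trailing_newline` mirrors the handler's `__init__`):

```python
from typing import Any, Dict, Union

from jinja2 import Environment

# keep_trailing_newline=True preserves the final newline of templates,
# matching Jinja2TemplateHandler.__init__
env = Environment(keep_trailing_newline=True)

def render(content: Union[str, bytes], data: Dict[str, Any]) -> str:
    # Accept str or bytes, decode, then render via the shared Environment
    if not isinstance(content, str):
        content = content.decode('utf-8')
    return env.from_string(content).render(**data)

print(render(b'Hello, {{ name }}!\n', {'name': 'world'}))
```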

View File

@ -2,14 +2,19 @@
Cement json extension module.
"""
from __future__ import annotations
from typing import Any, Dict, TYPE_CHECKING
from ..core import output
from ..utils.misc import minimal_logger
from ..ext.ext_configparser import ConfigParserConfigHandler
if TYPE_CHECKING:
from ..core.foundation import App # pragma: nocover
LOG = minimal_logger(__name__)
def suppress_output_before_run(app):
def suppress_output_before_run(app: App) -> None:
"""
This is a ``post_argument_parsing`` hook that suppresses console output if
the ``JsonOutputHandler`` is triggered via command line.
@ -23,7 +28,7 @@ def suppress_output_before_run(app):
app._suppress_output()
def unsuppress_output_before_render(app, data):
def unsuppress_output_before_render(app: App, data: Any) -> None:
"""
This is a ``pre_render`` hook that unsuppresses console output if
the ``JsonOutputHandler`` is triggered via command line so that the JSON
@ -38,7 +43,7 @@ def unsuppress_output_before_render(app, data):
app._unsuppress_output()
def suppress_output_after_render(app, out_text):
def suppress_output_after_render(app: App, out_text: str) -> None:
"""
This is a ``post_render`` hook that suppresses console output again after
rendering, only if the ``JsonOutputHandler`` is triggered via command
@ -68,7 +73,7 @@ class JsonOutputHandler(output.OutputHandler):
order to unsuppress output and see what's happening.
"""
class Meta:
class Meta(output.OutputHandler.Meta):
"""Handler meta-data"""
@ -82,16 +87,18 @@ class JsonOutputHandler(output.OutputHandler):
#: Backend JSON library module to use (`json`, `ujson`)
json_module = 'json'
def __init__(self, *args, **kw):
_meta: Meta # type: ignore
def __init__(self, *args: Any, **kw: Any) -> None:
super().__init__(*args, **kw)
self._json = None
def _setup(self, app):
def _setup(self, app: App) -> None:
super()._setup(app)
self._json = __import__(self._meta.json_module,
self._json = __import__(self._meta.json_module, # type: ignore
globals(), locals(), [], 0)
def render(self, data_dict, template=None, **kw):
def render(self, data: Dict[str, Any], template: str = None, **kw: Any) -> str: # type: ignore
"""
Take a data dictionary and render it as Json output. Note that the
template option is received here per the interface, however this
@ -99,7 +106,7 @@ class JsonOutputHandler(output.OutputHandler):
``json.dumps()``.
Args:
data_dict (dict): The data dictionary to render.
data (dict): The data dictionary to render.
Keyword Args:
template: This option is completely ignored.
@ -108,8 +115,8 @@ class JsonOutputHandler(output.OutputHandler):
str: A JSON encoded string.
"""
LOG.debug("rendering output as Json via %s" % self.__module__)
return self._json.dumps(data_dict, **kw)
LOG.debug(f"rendering output as Json via {self.__module__}")
return self._json.dumps(data, **kw) # type: ignore
class JsonConfigHandler(ConfigParserConfigHandler):
@ -121,7 +128,7 @@ class JsonConfigHandler(ConfigParserConfigHandler):
but with JSON configuration files.
"""
class Meta:
class Meta(ConfigParserConfigHandler.Meta):
"""Handler meta-data."""
@ -130,16 +137,18 @@ class JsonConfigHandler(ConfigParserConfigHandler):
#: Backend JSON library module to use (`json`, `ujson`).
json_module = 'json'
def __init__(self, *args, **kw):
_meta: Meta # type: ignore
def __init__(self, *args: Any, **kw: Any) -> None:
super().__init__(*args, **kw)
self._json = None
def _setup(self, app):
def _setup(self, app: App) -> None:
super()._setup(app)
self._json = __import__(self._meta.json_module,
self._json = __import__(self._meta.json_module, # type: ignore
globals(), locals(), [], 0)
def _parse_file(self, file_path):
def _parse_file(self, file_path: str) -> bool:
"""
Parse JSON configuration file settings from file_path, overwriting
existing config settings. If the file does not exist, returns False.
@ -160,7 +169,7 @@ class JsonConfigHandler(ConfigParserConfigHandler):
return True
def load(app):
def load(app: App) -> None:
app.hook.register('post_argument_parsing', suppress_output_before_run)
app.hook.register('pre_render', unsuppress_output_before_render)
app.hook.register('post_render', suppress_output_after_render)
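`JsonOutputHandler` resolves its backend module (`json` or `ujson`) by name at setup time and then delegates rendering to that module's `dumps()`. A sketch of that pluggable-backend pattern (the `make_renderer` factory is illustrative, not Cement's API; it uses `importlib.import_module` rather than the raw `__import__` seen in the diff):

```python
import importlib
from typing import Any, Callable, Dict

def make_renderer(json_module: str = 'json') -> Callable[..., str]:
    # Resolve the backend by name once, at "setup" time
    backend = importlib.import_module(json_module)

    def render(data: Dict[str, Any], **kw: Any) -> str:
        # Extra keyword arguments pass straight through to dumps()
        return backend.dumps(data, **kw)

    return render

render = make_renderer('json')
print(render({'a': 1}))
```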

Some files were not shown because too many files have changed in this diff.