# For maintainers only

### Set up your minio-java GitHub repository

Fork the [minio-java upstream](https://github.com/minio/minio-java/fork) source repository to your own personal repository.

```bash
$ git clone https://github.com/$USER_ID/minio-java
$ cd minio-java
```

The Minio Java library uses [Gradle](https://gradle.org/) for its dependency management.

### Publishing new artifacts

#### Set up your gradle properties

Create a new gradle properties file:

```bash
$ cat >> ${HOME}/.gradle/gradle.properties << EOF
signing.keyId=76A57749
signing.password=**REDACTED**
signing.secretKeyRingFile=/home/harsha/.gnupg/secring.gpg
ossrhUsername=minio
ossrhPassword=**REDACTED**
EOF
```

#### Import minio private key

```bash
$ gpg --import minio.asc
```

#### Modify build.gradle with the new version

```bash
$ cat build.gradle
...
...
group = 'io.minio'
archivesBaseName = 'minio'
version = '0.3.0'
...
...
```

#### Upload archives to Maven for publishing

```bash
$ ./gradlew uploadArchives
```
Previous change logs can be found at [CHANGELOG-3.3](https://github.com/etcd-io/etcd/blob/main/CHANGELOG-3.3.md).

The minimum recommended etcd versions to run in **production** are 3.2.28+, 3.3.18+, and 3.4.2+.

<hr>

## v3.4.16 (2021-05-11)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.15...v3.4.16) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### etcd server

- Add [`--experimental-warning-apply-duration`](https://github.com/etcd-io/etcd/pull/12448) flag which allows the apply duration warning threshold to be configured.
- Fix [`--unsafe-no-fsync`](https://github.com/etcd-io/etcd/pull/12751) to still write out data, avoiding corruption (most of the time).
- Reduce [memory allocation by around 30% by logging the range response size without marshaling](https://github.com/etcd-io/etcd/pull/12871).
- Add [conditional exclusion of alarms from health checks](https://github.com/etcd-io/etcd/pull/12880).

### Metrics

- Fix [incorrect metrics generated when clients cancel watches](https://github.com/etcd-io/etcd/pull/12803), back-ported from (https://github.com/etcd-io/etcd/pull/12196).

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.15](https://github.com/etcd-io/etcd/releases/tag/v3.4.15) (2021-02-26)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.14...v3.4.15) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### etcd server

- Log [successful etcd server-side health checks at debug level](https://github.com/etcd-io/etcd/pull/12677).
- Fix [64 KB websocket notification message limit](https://github.com/etcd-io/etcd/pull/12402).

### Package `fileutil`

- Fix [`F_OFD_` constants](https://github.com/etcd-io/etcd/pull/12444).

### Dependency

- Bump up [`gorilla/websocket` to v1.4.2](https://github.com/etcd-io/etcd/pull/12645).
### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.14](https://github.com/etcd-io/etcd/releases/tag/v3.4.14) (2020-11-25)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.13...v3.4.14) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### Package `clientv3`

- Fix [auth token becoming invalid after watch reconnects](https://github.com/etcd-io/etcd/pull/12264): get the AuthToken automatically when the client connection is ready.

### etcd server

- [Fix server panic](https://github.com/etcd-io/etcd/pull/12288) when the force-new-cluster flag is enabled in a cluster which had a learner node.

### Package `netutil`

- Remove [`netutil.DropPort/RecoverPort/SetLatency/RemoveLatency`](https://github.com/etcd-io/etcd/pull/12491).
  - These are no longer used; they were only needed by older versions of functional testing.
  - Removed to adhere to best security practices and minimize arbitrary shell invocation.

### `tools/etcd-dump-metrics`

- Implement [input validation to prevent arbitrary shell invocation](https://github.com/etcd-io/etcd/pull/12491).

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.13](https://github.com/etcd-io/etcd/releases/tag/v3.4.13) (2020-08-24)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.12...v3.4.13) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### Security

- A [log warning](https://github.com/etcd-io/etcd/pull/12242) is added when etcd uses any existing directory whose permissions differ from 700 on Linux or 777 on Windows.

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).
<hr>

## [v3.4.12](https://github.com/etcd-io/etcd/releases/tag/v3.4.12) (2020-08-19)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.11...v3.4.12) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### etcd server

- Fix [server panic in slow-writes warnings](https://github.com/etcd-io/etcd/issues/12197).
  - Fixed via [PR#12238](https://github.com/etcd-io/etcd/pull/12238).

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.11](https://github.com/etcd-io/etcd/releases/tag/v3.4.11) (2020-08-18)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.10...v3.4.11) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### etcd server

- Improve [the `runtime.FDUsage` call pattern to reduce object allocation, memory usage, and CPU usage](https://github.com/etcd-io/etcd/pull/11986).
- Add [`etcd --experimental-watch-progress-notify-interval`](https://github.com/etcd-io/etcd/pull/12216) flag to make the watch progress notify interval configurable.

### Package `clientv3`

- Remove [excessive watch cancel logging messages](https://github.com/etcd-io/etcd/pull/12187).
  - See [kubernetes/kubernetes#93450](https://github.com/kubernetes/kubernetes/issues/93450).

### Package `runtime`

- Optimize [`runtime.FDUsage` by removing unnecessary sorting](https://github.com/etcd-io/etcd/pull/12214).

### Metrics, Monitoring

- Add [`os_fd_used` and `os_fd_limit` to monitor current OS file descriptors](https://github.com/etcd-io/etcd/pull/12214).

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.10](https://github.com/etcd-io/etcd/releases/tag/v3.4.10) (2020-07-16)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.9...v3.4.10) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.
### etcd server

- Add [`--unsafe-no-fsync`](https://github.com/etcd-io/etcd/pull/11946) flag.
  - Setting the flag disables all uses of fsync, which is unsafe and will cause data loss. This flag makes it possible to run an etcd node for testing and development without placing lots of load on the file system.
- Add [`etcd --auth-token-ttl`](https://github.com/etcd-io/etcd/pull/11980) flag to customize `simpleTokenTTL` settings.
- Improve [`runtime.FDUsage` object allocation, memory usage, and CPU usage](https://github.com/etcd-io/etcd/pull/11986).
- Improve [`mvcc.watchResponse` channel memory usage](https://github.com/etcd-io/etcd/pull/11987).
- Fix [`int64` conversion panic in raft logger](https://github.com/etcd-io/etcd/pull/12106).
  - Fix [kubernetes/kubernetes#91937](https://github.com/kubernetes/kubernetes/issues/91937).

### Breaking Changes

- Changed behavior on [existing dir permission](https://github.com/etcd-io/etcd/pull/11798).
  - Previously, permissions were not checked on the existing data directory or on the directory used for automatically generating self-signed certificates for TLS connections with clients. A check has been added to make sure those directories, if they already exist, have the desired permission of 700 on Linux or 777 on Windows.

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.9](https://github.com/etcd-io/etcd/releases/tag/v3.4.9) (2020-05-20)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.8...v3.4.9) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### Package `wal`

- Add [missing CRC checksum check in the WAL validate method, which otherwise causes a panic](https://github.com/etcd-io/etcd/pull/11924).
  - See https://github.com/etcd-io/etcd/issues/11918.

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).
<hr>

## [v3.4.8](https://github.com/etcd-io/etcd/releases/tag/v3.4.8) (2020-05-18)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.7...v3.4.8) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### `etcdctl`

- Make sure [snapshot save downloads the checksum for integrity checks](https://github.com/etcd-io/etcd/pull/11896).

### Package `clientv3`

- Make sure [snapshot save downloads the checksum for integrity checks](https://github.com/etcd-io/etcd/pull/11896).

### etcd server

- Improve logging around snapshot send and receive.
- [Add log when etcdserver failed to apply command](https://github.com/etcd-io/etcd/pull/11670).
- [Fix deadlock bug in mvcc](https://github.com/etcd-io/etcd/pull/11817).
- Fix [inconsistency between WAL and server snapshot](https://github.com/etcd-io/etcd/pull/11888).
  - Previously, server restore failed if it had crashed after persisting the raft hard state but before saving the snapshot.
  - See https://github.com/etcd-io/etcd/issues/10219 for more.

### Package `auth`

- [Fix a data corruption bug by saving consistent index](https://github.com/etcd-io/etcd/pull/11652).

### Metrics, Monitoring

- Add [`etcd_debugging_auth_revision`](https://github.com/etcd-io/etcd/commit/f14d2a087f7b0fd6f7980b95b5e0b945109c95f3).

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.7](https://github.com/etcd-io/etcd/releases/tag/v3.4.7) (2020-04-01)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.6...v3.4.7) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### etcd server

- Improve [compaction performance when the latest index is greater than 1 million](https://github.com/etcd-io/etcd/pull/11734).

### Package `wal`

- Add [`etcd_wal_write_bytes_total`](https://github.com/etcd-io/etcd/pull/11738).

### Metrics, Monitoring

- Add [`etcd_wal_write_bytes_total`](https://github.com/etcd-io/etcd/pull/11738).
### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.6](https://github.com/etcd-io/etcd/releases/tag/v3.4.6) (2020-03-29)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.5...v3.4.6) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

### Package `lease`

- Fix [memory leak in follower nodes](https://github.com/etcd-io/etcd/pull/11731).
  - https://github.com/etcd-io/etcd/issues/11495
  - https://github.com/etcd-io/etcd/issues/11730

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.5](https://github.com/etcd-io/etcd/releases/tag/v3.4.5) (2020-03-18)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.4...v3.4.5) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### etcd server

- Log [`[CLIENT-PORT]/health` checks on the server side](https://github.com/etcd-io/etcd/pull/11704).

### client v3

- Fix [`"hasleader"` metadata embedding](https://github.com/etcd-io/etcd/pull/11687).
  - Previously, `clientv3.WithRequireLeader(ctx)` was overwriting existing context keys.

### etcdctl v3

- Fix [`etcdctl member add`](https://github.com/etcd-io/etcd/pull/11638) command to prevent a potential timeout.

### Metrics, Monitoring

See [List of metrics](https://etcd.io/docs/latest/metrics/) for all metrics per release.

- Add [`etcd_server_client_requests_total` with `"type"` and `"client_api_version"` labels](https://github.com/etcd-io/etcd/pull/11687).

### gRPC Proxy

- Fix [panic on error](https://github.com/etcd-io/etcd/pull/11694) in the metrics handler.

### Go

- Compile with [*Go 1.12.17*](https://golang.org/doc/devel/release.html#go1.12).
<hr>

## [v3.4.4](https://github.com/etcd-io/etcd/releases/tag/v3.4.4) (2020-02-24)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.3...v3.4.4) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### etcd server

- Fix [`wait purge file loop during shutdown`](https://github.com/etcd-io/etcd/pull/11308).
  - Previously, during shutdown etcd could accidentally remove needed WAL files, resulting in the catastrophic error `etcdserver: open wal error: wal: file not found.` during startup.
  - Now, etcd makes sure the purge file loop exits before the server signals the stop of the raft node.
- [Fix corruption bug in defrag](https://github.com/etcd-io/etcd/pull/11613).
- Fix [quorum protection logic when promoting a learner](https://github.com/etcd-io/etcd/pull/11640).
- Improve [peer corruption checker](https://github.com/etcd-io/etcd/pull/11621) to work when peer mTLS is enabled.

### Metrics, Monitoring

See [List of metrics](https://etcd.io/docs/latest/metrics/) for all metrics per release. Note that any `etcd_debugging_*` metrics are experimental and subject to change.

- Add [`etcd_debugging_mvcc_total_put_size_in_bytes`](https://github.com/etcd-io/etcd/pull/11374) Prometheus metric.
- Fix bug where [`etcd_debugging_mvcc_db_compaction_keys_total` is always 0](https://github.com/etcd-io/etcd/pull/11400).
### Auth

- Fix [NoPassword check when adding a user through the gRPC gateway](https://github.com/etcd-io/etcd/pull/11418) ([issue#11414](https://github.com/etcd-io/etcd/issues/11414)).
- Fix bug where [some auth-related messages were logged at the wrong level](https://github.com/etcd-io/etcd/pull/11586).

<hr>

## [v3.4.3](https://github.com/etcd-io/etcd/releases/tag/v3.4.3) (2019-10-24)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.2...v3.4.3) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### Metrics, Monitoring

See [List of metrics](https://etcd.io/docs/latest/metrics/) for all metrics per release. Note that any `etcd_debugging_*` metrics are experimental and subject to change.

- Change the [`etcd_cluster_version`](https://github.com/etcd-io/etcd/pull/11254) Prometheus metric to include only the major and minor version.

### Go

- Compile with [*Go 1.12.12*](https://golang.org/doc/devel/release.html#go1.12).

<hr>

## [v3.4.2](https://github.com/etcd-io/etcd/releases/tag/v3.4.2) (2019-10-11)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.1...v3.4.2) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### etcdctl v3

- Fix [`etcdctl member add`](https://github.com/etcd-io/etcd/pull/11194) command to prevent a potential timeout.

### etcdserver

- Add [`tracing`](https://github.com/etcd-io/etcd/pull/11179) to range, put, and compact requests in etcdserver.
### Go

- Compile with [*Go 1.12.9*](https://golang.org/doc/devel/release.html#go1.12) including [*Go 1.12.8*](https://groups.google.com/d/msg/golang-announce/65QixT3tcmg/DrFiG6vvCwAJ) security fixes.

### client v3

- Fix [client balancer failover against multiple endpoints](https://github.com/etcd-io/etcd/pull/11184).
  - Fix ["kube-apiserver: failover on multi-member etcd cluster fails certificate check on DNS mismatch" (kubernetes#83028)](https://github.com/kubernetes/kubernetes/issues/83028).
- Fix [IPv6 endpoint parsing in client](https://github.com/etcd-io/etcd/pull/11211).
  - Fix ["1.16: etcd client does not parse IPv6 addresses correctly when members are joining" (kubernetes#83550)](https://github.com/kubernetes/kubernetes/issues/83550).

<hr>

## [v3.4.1](https://github.com/etcd-io/etcd/releases/tag/v3.4.1) (2019-09-17)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0...v3.4.1) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### Metrics, Monitoring

See [List of metrics](https://etcd.io/docs/latest/metrics/) for all metrics per release. Note that any `etcd_debugging_*` metrics are experimental and subject to change.

- Add [`etcd_debugging_mvcc_current_revision`](https://github.com/etcd-io/etcd/pull/11126) Prometheus metric.
- Add [`etcd_debugging_mvcc_compact_revision`](https://github.com/etcd-io/etcd/pull/11126) Prometheus metric.

### etcd server

- Fix [secure server logging message](https://github.com/etcd-io/etcd/commit/8b053b0f44c14ac0d9f39b9b78c17c57d47966eb).
- Remove [redundant `%` characters in file descriptor warning message](https://github.com/etcd-io/etcd/commit/d5f79adc9cea9ec8c93669526464b0aa19ed417b).
### Package `embed`

- Add [`embed.Config.ZapLoggerBuilder`](https://github.com/etcd-io/etcd/pull/11148) to allow creating a custom zap logger.

### Dependency

- Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) from [**`v1.23.0`**](https://github.com/grpc/grpc-go/releases/tag/v1.23.0) to [**`v1.23.1`**](https://github.com/grpc/grpc-go/releases/tag/v1.23.1).

### Go

- Compile with [*Go 1.12.9*](https://golang.org/doc/devel/release.html#go1.12) including [*Go 1.12.8*](https://groups.google.com/d/msg/golang-announce/65QixT3tcmg/DrFiG6vvCwAJ) security fixes.

<hr>

## v3.4.0 (2019-08-30)

See [code changes](https://github.com/etcd-io/etcd/compare/v3.3.0...v3.4.0) and [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/) for any breaking changes.

- [v3.4.0](https://github.com/etcd-io/etcd/releases/tag/v3.4.0) (2019-08-30), see [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0-rc.4...v3.4.0).
- [v3.4.0-rc.4](https://github.com/etcd-io/etcd/releases/tag/v3.4.0-rc.4) (2019-08-29), see [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0-rc.3...v3.4.0-rc.4).
- [v3.4.0-rc.3](https://github.com/etcd-io/etcd/releases/tag/v3.4.0-rc.3) (2019-08-27), see [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0-rc.2...v3.4.0-rc.3).
- [v3.4.0-rc.2](https://github.com/etcd-io/etcd/releases/tag/v3.4.0-rc.2) (2019-08-23), see [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0-rc.1...v3.4.0-rc.2).
- [v3.4.0-rc.1](https://github.com/etcd-io/etcd/releases/tag/v3.4.0-rc.1) (2019-08-15), see [code changes](https://github.com/etcd-io/etcd/compare/v3.4.0-rc.0...v3.4.0-rc.1).
- [v3.4.0-rc.0](https://github.com/etcd-io/etcd/releases/tag/v3.4.0-rc.0) (2019-08-12), see [code changes](https://github.com/etcd-io/etcd/compare/v3.3.0...v3.4.0-rc.0).
**Again, before running upgrades from any previous release, please make sure to read the change logs below and the [v3.4 upgrade guide](https://etcd.io/docs/latest/upgrades/upgrade_3_4/).**

### Documentation

- etcd now has a new website! Please visit https://etcd.io.

### Improved

- Add Raft learner: [etcd#10725](https://github.com/etcd-io/etcd/pull/10725), [etcd#10727](https://github.com/etcd-io/etcd/pull/10727), [etcd#10730](https://github.com/etcd-io/etcd/pull/10730).
  - User guide: [runtime-configuration document](https://etcd.io/docs/latest/op-guide/runtime-configuration/#add-a-new-member-as-learner).
  - API change: [API reference document](https://etcd.io/docs/latest/dev-guide/api_reference_v3/).
  - More details on implementation: [learner design document](https://etcd.io/docs/latest/learning/design-learner/) and [implementation task list](https://github.com/etcd-io/etcd/issues/10537).
- Rewrite [client balancer](https://github.com/etcd-io/etcd/pull/9860) with [new gRPC balancer interface](https://github.com/etcd-io/etcd/issues/9106).
  - Upgrade [gRPC to v1.23.0](https://github.com/etcd-io/etcd/pull/10911).
  - Improve [client balancer failover against secure endpoints](https://github.com/etcd-io/etcd/pull/10911).
    - Fix ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102)](https://github.com/kubernetes/kubernetes/issues/72102).
  - Fix [gRPC panic "send on closed channel"](https://github.com/etcd-io/etcd/issues/9956).
  - [The new client balancer](https://etcd.io/docs/latest/learning/design-client/) uses an asynchronous resolver to pass endpoints to the gRPC dial function. To block until the underlying connection is up, pass `grpc.WithBlock()` to `clientv3.Config.DialOptions`.
- Add [backoff on watch retries on transient errors](https://github.com/etcd-io/etcd/pull/9840).
- Add [jitter to watch progress notify](https://github.com/etcd-io/etcd/pull/9278) to prevent [spikes in `etcd_network_client_grpc_sent_bytes_total`](https://github.com/etcd-io/etcd/issues/9246).
- Improve [read index wait timeout warning log](https://github.com/etcd-io/etcd/pull/10026), which indicates that the local node might have a slow network.
- Improve [slow request apply warning log](https://github.com/etcd-io/etcd/pull/9288).
  - e.g. `read-only range request "key:\"/a\" range_end:\"/b\" " with result "range_response_count:3 size:96" took too long (97.966µs) to execute`.
  - Redact [request value field](https://github.com/etcd-io/etcd/pull/9822).
  - Provide [response size](https://github.com/etcd-io/etcd/pull/9826).
- Improve ["became inactive" warning log](https://github.com/etcd-io/etcd/pull/10024), which indicates that a message send to a peer failed.
- Improve [TLS setup error logging](https://github.com/etcd-io/etcd/pull/9518) to help debug [TLS-enabled cluster configuration issues](https://github.com/etcd-io/etcd/issues/9400).
- Improve [long-running concurrent read transactions under light write workloads](https://github.com/etcd-io/etcd/pull/9296).
  - Previously, the periodic commit on pending writes blocked incoming read transactions, even when there was no pending write.
  - Now, the periodic commit operation does not block concurrent read transactions, thus improving long-running read transaction performance.
- Make [backend read transactions fully concurrent](https://github.com/etcd-io/etcd/pull/10523).
  - Previously, ongoing long-running read transactions blocked writes and future reads.
  - With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads.
- Improve [Raft Read Index timeout warning messages](https://github.com/etcd-io/etcd/pull/9897).
- Adjust [election timeout on server restart](https://github.com/etcd-io/etcd/pull/9415) to reduce [disruptive rejoining servers](https://github.com/etcd-io/etcd/issues/9333).
  - Previously, etcd fast-forwarded election ticks on server start, with only one tick left for leader election. This speeds up the start phase, without having to wait until all election ticks elapse. Advancing election ticks is useful for cross-datacenter deployments with larger election timeouts. However, it affected cluster availability if the last tick elapsed before the leader contacted the restarted node.
  - Now, when etcd restarts, it adjusts election ticks with more than one tick left, giving the leader more time to prevent a disruptive restart.
- Add [Raft Pre-Vote feature](https://github.com/etcd-io/etcd/pull/9352) to reduce [disruptive rejoining servers](https://github.com/etcd-io/etcd/issues/9333).
  - For instance, a flaky (or rejoining) member may drop in and out, and start a campaign. This member will end up with a higher term, and ignore all incoming messages with a lower term. In this case, a new leader eventually needs to be elected, which is disruptive to cluster availability. Raft implements the Pre-Vote phase to prevent this kind of disruption. If enabled, Raft runs an additional phase of election to check whether a pre-candidate can get enough votes to win an election.
- Adjust [periodic compaction retention window](https://github.com/etcd-io/etcd/pull/9485).
  - e.g. `etcd --auto-compaction-mode=revision --auto-compaction-retention=1000` automatically runs `Compact` on `"latest revision" - 1000` every 5 minutes (when the latest revision is 30000, it compacts on revision 29000).
  - e.g. Previously, `etcd --auto-compaction-mode=periodic --auto-compaction-retention=24h` automatically ran `Compact` with a 24-hour retention window every 2.4 hours. Now, `Compact` happens every 1 hour.
  - e.g.
Previously, `etcd --auto-compaction-mode=periodic --auto-compaction-retention=30m` automatically ran `Compact` with a 30-minute retention window every 3 minutes. Now, `Compact` happens every 30 minutes.
  - The periodic compactor keeps recording the latest revisions every compaction period when the given period is less than 1 hour, or every 1 hour when the given compaction period is greater than 1 hour (e.g. 1 hour when `etcd --auto-compaction-mode=periodic --auto-compaction-retention=24h`).
    - Every compaction period (or every hour), the compactor uses the last revision that was fetched before the compaction period to discard historical data.
    - The retention window of a compaction period moves with every given compaction period or hour.
    - For instance, when hourly writes are 100 and `etcd --auto-compaction-mode=periodic --auto-compaction-retention=24h`, `v3.2.x`, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 2400, 2640, and 2880 every 2.4 hours, while `v3.3.3` *or later* compacts revisions 2400, 2500, 2600 every 1 hour.
    - Furthermore, when `etcd --auto-compaction-mode=periodic --auto-compaction-retention=30m` and writes per minute are about 1000, `v3.3.0`, `v3.3.1`, and `v3.3.2` compact revisions 30000, 33000, and 36000 every 3 minutes, while `v3.3.3` *or later* compacts revisions 30000, 60000, and 90000 every 30 minutes.
- Improve [lease expire/revoke operation performance](https://github.com/etcd-io/etcd/pull/9418), addressing the [lease scalability issue](https://github.com/etcd-io/etcd/issues/9496).
- Make [Lease `Lookup` non-blocking with concurrent `Grant`/`Revoke`](https://github.com/etcd-io/etcd/pull/9229).
- Make etcd server return `raft.ErrProposalDropped` on internal Raft proposal drop in [v3 applier](https://github.com/etcd-io/etcd/pull/9549) and [v2 applier](https://github.com/etcd-io/etcd/pull/9558).
  - e.g.
a node is removed from the cluster, or [`raftpb.MsgProp` arrives at the current leader while there is an ongoing leadership transfer](https://github.com/etcd-io/etcd/issues/8975).
- Add [`snapshot`](https://github.com/etcd-io/etcd/pull/9118) package for an easier snapshot workflow (see [`godoc.org/github.com/etcd-io/etcd/clientv3/snapshot`](https://godoc.org/github.com/etcd-io/etcd/clientv3/snapshot) for more).
- Improve [functional tester](https://github.com/etcd-io/etcd/tree/main/functional) coverage: [proxy layer to run network fault tests in CI](https://github.com/etcd-io/etcd/pull/9081), [TLS enabled both for server and client](https://github.com/etcd-io/etcd/pull/9534), [liveness mode](https://github.com/etcd-io/etcd/issues/9230), [shuffled test sequence](https://github.com/etcd-io/etcd/issues/9381), [membership reconfiguration failure cases](https://github.com/etcd-io/etcd/pull/9564), [disastrous quorum loss and snapshot recovery from a seed member](https://github.com/etcd-io/etcd/pull/9565), [embedded etcd](https://github.com/etcd-io/etcd/pull/9572).
- Improve [index compaction blocking](https://github.com/etcd-io/etcd/pull/9511) by using a copy-on-write clone to avoid holding the lock for the traversal of the entire index.
- Update [JWT methods](https://github.com/etcd-io/etcd/pull/9883) to allow use of any supported signature method/algorithm.
- Add [Lease checkpointing](https://github.com/etcd-io/etcd/pull/9924) to persist remaining TTLs to the consensus log periodically, so that long-lived leases progress toward expiry in the presence of leader elections and server restarts.
  - Enabled by the experimental flag `--experimental-enable-lease-checkpoint`.
- Add [gRPC interceptor for debugging logs](https://github.com/etcd-io/etcd/pull/9990); enable the `etcd --debug` flag to see per-request debug information.
- Add [consistency check in snapshot status](https://github.com/etcd-io/etcd/pull/10109).
If the consistency check on the snapshot file fails, `snapshot status` returns a `"snapshot file integrity check failed..."` error.
- Add [`Verify` function to perform corruption check on WAL contents](https://github.com/etcd-io/etcd/pull/10603).
- Improve [heartbeat send failure logging](https://github.com/etcd-io/etcd/pull/10663).
- Support [users with no password](https://github.com/etcd-io/etcd/pull/9817) to reduce the security risk introduced by leaked passwords. Such users can only be authenticated with `CommonName`-based auth.
- Add `etcd --experimental-peer-skip-client-san-verification` to [skip verification of peer client address](https://github.com/etcd-io/etcd/pull/10524).
- Add `etcd --experimental-compaction-batch-limit` to [set the maximum revisions deleted in each compaction batch](https://github.com/etcd-io/etcd/pull/11034).
- Reduce the default compaction batch size from 10k revisions to 1k revisions to improve p99 latency during compactions, and reduce the wait between compactions from 100ms to 10ms.

### Breaking Changes

- Rewrite [client balancer](https://github.com/etcd-io/etcd/pull/9860) with [new gRPC balancer interface](https://github.com/etcd-io/etcd/issues/9106).
  - Upgrade [gRPC to v1.23.0](https://github.com/etcd-io/etcd/pull/10911).
  - Improve [client balancer failover against secure endpoints](https://github.com/etcd-io/etcd/pull/10911).
    - Fix ["kube-apiserver 1.13.x refuses to work when first etcd-server is not available" (kubernetes#72102)](https://github.com/kubernetes/kubernetes/issues/72102).
  - Fix [gRPC panic "send on closed channel"](https://github.com/etcd-io/etcd/issues/9956).
  - [The new client balancer](https://etcd.io/docs/latest/learning/design-client/) uses an asynchronous resolver to pass endpoints to the gRPC dial function. To block until the underlying connection is up, pass `grpc.WithBlock()` to `clientv3.Config.DialOptions`.
- Require [*Go 1.12+*](https://github.com/etcd-io/etcd/pull/10045).
- Compile with [*Go 1.12.9*](https://golang.org/doc/devel/release.html#go1.12) including [*Go 1.12.8*](https://groups.google.com/d/msg/golang-announce/65QixT3tcmg/DrFiG6vvCwAJ) security fixes.
- Migrate dependency management tool from `glide` to [Go modules](https://github.com/etcd-io/etcd/pull/10063).
  - <= 3.3 put the `vendor` directory under `cmd/vendor` to [prevent conflicting transitive dependencies](https://github.com/etcd-io/etcd/issues/4913).
  - 3.4 moves the `cmd/vendor` directory to `vendor` at the repository root.
    - Remove recursive symlinks in the `cmd` directory.
    - Now `go get/install/build` on `etcd` packages (e.g. `clientv3`, `tools/benchmark`) enforces builds with the etcd `vendor` directory.
- Deprecated the `latest` [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tag.
  - **`docker pull gcr.io/etcd-development/etcd:latest` would not be up-to-date**.
- Deprecated [minor](https://semver.org/) version [release container](https://console.cloud.google.com/gcr/images/etcd-development/GLOBAL/etcd) tags.
  - `docker pull gcr.io/etcd-development/etcd:v3.3` would still work.
  - **`docker pull gcr.io/etcd-development/etcd:v3.4` would not work**.
  - Use **`docker pull gcr.io/etcd-development/etcd:v3.4.x`** instead, with the exact patch version.
- Deprecated [ACIs from official release](https://github.com/etcd-io/etcd/pull/9059).
  - [AppC was officially suspended](https://github.com/appc/spec#-disclaimer-) as of late 2016.
  - [`acbuild`](https://github.com/containers/build#this-project-is-currently-unmaintained) is no longer maintained.
  - `*.aci` files are not available from the `v3.4` release.
- Move [`"github.com/coreos/etcd"`](https://github.com/etcd-io/etcd/issues/9965) to [`"github.com/etcd-io/etcd"`](https://github.com/etcd-io/etcd/issues/9965).
  - Change import path to `"go.etcd.io/etcd"`.
  - e.g. `import "go.etcd.io/etcd/raft"`.
- Make [`ETCDCTL_API=3 etcdctl` the default](https://github.com/etcd-io/etcd/issues/9600).
- Now, `etcdctl set foo bar` must be `ETCDCTL_API=2 etcdctl set foo bar`. - Now, `ETCDCTL_API=3 etcdctl put foo bar` could be just `etcdctl put foo bar`. - Make [`etcd --enable-v2=false` default](https://github.com/etcd-io/etcd/pull/10935). - Make [`embed.DefaultEnableV2` `false` default](https://github.com/etcd-io/etcd/pull/10935). - **Deprecated `etcd --ca-file` flag**. Use [`etcd --trusted-ca-file`](https://github.com/etcd-io/etcd/pull/9470) instead (`etcd --ca-file` flag has been marked deprecated since v2.1). - **Deprecated `etcd --peer-ca-file` flag**. Use [`etcd --peer-trusted-ca-file`](https://github.com/etcd-io/etcd/pull/9470) instead (`etcd --peer-ca-file` flag has been marked deprecated since v2.1). - **Deprecated `pkg/transport.TLSInfo.CAFile` field**. Use [`pkg/transport.TLSInfo.TrustedCAFile`](https://github.com/etcd-io/etcd/pull/9470) instead (`CAFile` field has been marked deprecated since v2.1). - Exit on [empty hosts in advertise URLs](https://github.com/etcd-io/etcd/pull/8786). - Address [advertise client URLs accepts empty hosts](https://github.com/etcd-io/etcd/issues/8379). - e.g. exit with error on `--advertise-client-urls=http://:2379`. - e.g. exit with error on `--initial-advertise-peer-urls=http://:2380`. - Exit on [shadowed environment variables](https://github.com/etcd-io/etcd/pull/9382). - Address [error on shadowed environment variables](https://github.com/etcd-io/etcd/issues/8380). - e.g. exit with error on `ETCD_NAME=abc etcd --name=def`. - e.g. exit with error on `ETCD_INITIAL_CLUSTER_TOKEN=abc etcd --initial-cluster-token=def`. - e.g. exit with error on `ETCDCTL_ENDPOINTS=abc.com ETCDCTL_API=3 etcdctl endpoint health --endpoints=def.com`. - Change [`etcdserverpb.AuthRoleRevokePermissionRequest/key,range_end` fields type from `string` to `bytes`](https://github.com/etcd-io/etcd/pull/9433). - Deprecating `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric (to be removed in v3.5). 
Use [`etcd_mvcc_db_total_size_in_bytes`](https://github.com/etcd-io/etcd/pull/9819) instead. - Deprecating `etcd_debugging_mvcc_put_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_put_total`](https://github.com/etcd-io/etcd/pull/10962) instead. - Deprecating `etcd_debugging_mvcc_delete_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_delete_total`](https://github.com/etcd-io/etcd/pull/10962) instead. - Deprecating `etcd_debugging_mvcc_range_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_range_total`](https://github.com/etcd-io/etcd/pull/10968) instead. - Deprecating `etcd_debugging_mvcc_txn_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_txn_total`](https://github.com/etcd-io/etcd/pull/10968) instead. - Rename `etcdserver.ServerConfig.SnapCount` field to `etcdserver.ServerConfig.SnapshotCount`, to be consistent with the flag name `etcd --snapshot-count`. - Rename `embed.Config.SnapCount` field to [`embed.Config.SnapshotCount`](https://github.com/etcd-io/etcd/pull/9745), to be consistent with the flag name `etcd --snapshot-count`. - Change [`embed.Config.CorsInfo` in `*cors.CORSInfo` type to `embed.Config.CORS` in `map[string]struct{}` type](https://github.com/etcd-io/etcd/pull/9490). - Deprecated [`embed.Config.SetupLogging`](https://github.com/etcd-io/etcd/pull/9572). - Now logger is set up automatically based on [`embed.Config.Logger`, `embed.Config.LogOutputs`, `embed.Config.Debug` fields](https://github.com/etcd-io/etcd/pull/9572). - Rename [`etcd --log-output` to `etcd --log-outputs`](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs. - **`etcd --log-output`** will be deprecated in v3.5. - Rename [**`embed.Config.LogOutput`** to **`embed.Config.LogOutputs`**](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs.
- Change [**`embed.Config.LogOutputs`** type from `string` to `[]string`](https://github.com/etcd-io/etcd/pull/9579) to support multiple log outputs. - Now that `etcd --log-outputs` accepts multiple writers, etcd configuration YAML file `log-outputs` field must be changed to `[]string` type. - Previously, `etcd --config-file etcd.config.yaml` could have a `log-outputs: default` field; now it must be `log-outputs: [default]`. - Deprecating [`etcd --debug`](https://github.com/etcd-io/etcd/pull/10947) flag. Use `etcd --log-level=debug` flag instead. - v3.5 will deprecate `etcd --debug` flag in favor of `etcd --log-level=debug`. - Change v3 `etcdctl snapshot` exit codes with [`snapshot` package](https://github.com/etcd-io/etcd/pull/9118/commits/df689f4280e1cce4b9d61300be13ca604d41670a). - Exit on error with exit code 1 (no more exit code 5 or 6 on `snapshot save/restore` commands). - Deprecated [`grpc.ErrClientConnClosing`](https://github.com/etcd-io/etcd/pull/10981). - `clientv3` and `proxy/grpcproxy` no longer return `grpc.ErrClientConnClosing`. - `grpc.ErrClientConnClosing` has been [deprecated in gRPC >= 1.10](https://github.com/grpc/grpc-go/pull/1854). - Use `clientv3.IsConnCanceled(error)` or `google.golang.org/grpc/status.FromError(error)` instead. - Deprecated [gRPC gateway](https://github.com/grpc-ecosystem/grpc-gateway) endpoint `/v3beta` with [`/v3`](https://github.com/etcd-io/etcd/pull/9298). - Deprecated [`/v3alpha`](https://github.com/etcd-io/etcd/pull/9298). - To deprecate [`/v3beta`](https://github.com/etcd-io/etcd/issues/9189) in v3.5. - In v3.4, `curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` still works as a fallback to `curl -L http://localhost:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'`, but `curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` won't work in v3.5.
Use `curl -L http://localhost:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` instead. - Change [`wal` package function signatures](https://github.com/etcd-io/etcd/pull/9572) to support [structured logger and logging to file](https://github.com/etcd-io/etcd/issues/9438) on the server side. - Previously, `Open(dirpath string, snap walpb.Snapshot) (*WAL, error)`, now `Open(lg *zap.Logger, dirpath string, snap walpb.Snapshot) (*WAL, error)`. - Previously, `OpenForRead(dirpath string, snap walpb.Snapshot) (*WAL, error)`, now `OpenForRead(lg *zap.Logger, dirpath string, snap walpb.Snapshot) (*WAL, error)`. - Previously, `Repair(dirpath string) bool`, now `Repair(lg *zap.Logger, dirpath string) bool`. - Previously, `Create(dirpath string, metadata []byte) (*WAL, error)`, now `Create(lg *zap.Logger, dirpath string, metadata []byte) (*WAL, error)`. - Remove [`pkg/cors` package](https://github.com/etcd-io/etcd/pull/9490). - Move internal packages to `etcdserver`. - `"github.com/coreos/etcd/alarm"` to `"go.etcd.io/etcd/etcdserver/api/v3alarm"`. - `"github.com/coreos/etcd/compactor"` to `"go.etcd.io/etcd/etcdserver/api/v3compactor"`. - `"github.com/coreos/etcd/discovery"` to `"go.etcd.io/etcd/etcdserver/api/v2discovery"`. - `"github.com/coreos/etcd/etcdserver/auth"` to `"go.etcd.io/etcd/etcdserver/api/v2auth"`. - `"github.com/coreos/etcd/etcdserver/membership"` to `"go.etcd.io/etcd/etcdserver/api/membership"`. - `"github.com/coreos/etcd/etcdserver/stats"` to `"go.etcd.io/etcd/etcdserver/api/v2stats"`. - `"github.com/coreos/etcd/error"` to `"go.etcd.io/etcd/etcdserver/api/v2error"`. - `"github.com/coreos/etcd/rafthttp"` to `"go.etcd.io/etcd/etcdserver/api/rafthttp"`. - `"github.com/coreos/etcd/snap"` to `"go.etcd.io/etcd/etcdserver/api/snap"`. - `"github.com/coreos/etcd/store"` to `"go.etcd.io/etcd/etcdserver/api/v2store"`.
- Change [snapshot file permissions](https://github.com/etcd-io/etcd/pull/9977): On Linux, the snapshot file changes from readable by all (mode 0644) to readable by the user only (mode 0600). - Change [`pkg/adt.IntervalTree` from `struct` to `interface`](https://github.com/etcd-io/etcd/pull/10959). - See [`pkg/adt` README](https://github.com/etcd-io/etcd/tree/main/pkg/adt) and [`pkg/adt` godoc](https://godoc.org/go.etcd.io/etcd/pkg/adt). - Release branch `/version` defines version `3.4.x-pre`, instead of `3.4.y+git`. - Use `3.4.5-pre`, instead of `3.4.4+git`. ### Dependency - Upgrade [`github.com/coreos/bbolt`](https://github.com/etcd-io/bbolt/releases) from [**`v1.3.1-coreos.6`**](https://github.com/etcd-io/bbolt/releases/tag/v1.3.1-coreos.6) to [`go.etcd.io/bbolt`](https://github.com/etcd-io/bbolt/releases) [**`v1.3.3`**](https://github.com/etcd-io/bbolt/releases/tag/v1.3.3). - Upgrade [`google.golang.org/grpc`](https://github.com/grpc/grpc-go/releases) from [**`v1.7.5`**](https://github.com/grpc/grpc-go/releases/tag/v1.7.5) to [**`v1.23.0`**](https://github.com/grpc/grpc-go/releases/tag/v1.23.0). - Migrate [`github.com/ugorji/go/codec`](https://github.com/ugorji/go/releases) to [**`github.com/json-iterator/go`**](https://github.com/json-iterator/go), to [regenerate v2 `client`](https://github.com/etcd-io/etcd/pull/9494) (See [#10667](https://github.com/etcd-io/etcd/pull/10667) for more). - Migrate [`github.com/ghodss/yaml`](https://github.com/ghodss/yaml/releases) to [**`sigs.k8s.io/yaml`**](https://github.com/kubernetes-sigs/yaml) (See [#10687](https://github.com/etcd-io/etcd/pull/10687) for more). - Upgrade [`golang.org/x/crypto`](https://github.com/golang/crypto) from [**`crypto@9419663f5`**](https://github.com/golang/crypto/commit/9419663f5a44be8b34ca85f08abc5fe1be11f8a3) to [**`crypto@0709b304e793`**](https://github.com/golang/crypto/commit/0709b304e793a5edb4a2c0145f281ecdc20838a4). 
- Upgrade [`golang.org/x/net`](https://github.com/golang/net) from [**`net@66aacef3d`**](https://github.com/golang/net/commit/66aacef3dd8a676686c7ae3716979581e8b03c47) to [**`net@adae6a3d119a`**](https://github.com/golang/net/commit/adae6a3d119ae4890b46832a2e88a95adc62b8e7). - Upgrade [`golang.org/x/sys`](https://github.com/golang/sys) from [**`sys@ebfc5b463`**](https://github.com/golang/sys/commit/ebfc5b4631820b793c9010c87fd8fef0f39eb082) to [**`sys@c7b8b68b1456`**](https://github.com/golang/sys/commit/c7b8b68b14567162c6602a7c5659ee0f26417c18). - Upgrade [`golang.org/x/text`](https://github.com/golang/text) from [**`text@b19bf474d`**](https://github.com/golang/text/commit/b19bf474d317b857955b12035d2c5acb57ce8b01) to [**`v0.3.0`**](https://github.com/golang/text/releases/tag/v0.3.0). - Upgrade [`golang.org/x/time`](https://github.com/golang/time) from [**`time@c06e80d93`**](https://github.com/golang/time/commit/c06e80d9300e4443158a03817b8a8cb37d230320) to [**`time@fbb02b229`**](https://github.com/golang/time/commit/fbb02b2291d28baffd63558aa44b4b56f178d650). - Upgrade [`github.com/golang/protobuf`](https://github.com/golang/protobuf/releases) from [**`golang/protobuf@1e59b77b5`**](https://github.com/golang/protobuf/commit/1e59b77b52bf8e4b449a57e6f79f21226d571845) to [**`v1.3.2`**](https://github.com/golang/protobuf/releases/tag/v1.3.2). - Upgrade [`gopkg.in/yaml.v2`](https://github.com/go-yaml/yaml/releases) from [**`yaml@cd8b52f82`**](https://github.com/go-yaml/yaml/commit/cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b) to [**`yaml@5420a8b67`**](https://github.com/go-yaml/yaml/commit/5420a8b6744d3b0345ab293f6fcba19c978f1183). - Upgrade [`github.com/dgrijalva/jwt-go`](https://github.com/dgrijalva/jwt-go/releases) from [**`v3.0.0`**](https://github.com/dgrijalva/jwt-go/releases/tag/v3.0.0) to [**`v3.2.0`**](https://github.com/dgrijalva/jwt-go/releases/tag/v3.2.0). 
- Upgrade [`github.com/soheilhy/cmux`](https://github.com/soheilhy/cmux/releases) from [**`v0.1.3`**](https://github.com/soheilhy/cmux/releases/tag/v0.1.3) to [**`v0.1.4`**](https://github.com/soheilhy/cmux/releases/tag/v0.1.4). - Upgrade [`github.com/google/btree`](https://github.com/google/btree/releases) from [**`google/btree@925471ac9`**](https://github.com/google/btree/commit/925471ac9e2131377a91e1595defec898166fe49) to [**`v1.0.0`**](https://github.com/google/btree/releases/tag/v1.0.0). - Upgrade [`github.com/spf13/cobra`](https://github.com/spf13/cobra/releases) from [**`spf13/cobra@1c44ec8d3`**](https://github.com/spf13/cobra/commit/1c44ec8d3f1552cac48999f9306da23c4d8a288b) to [**`v0.0.3`**](https://github.com/spf13/cobra/releases/tag/v0.0.3). - Upgrade [`github.com/spf13/pflag`](https://github.com/spf13/pflag/releases) from [**`v1.0.0`**](https://github.com/spf13/pflag/releases/tag/v1.0.0) to [**`spf13/pflag@1ce0cc6db`**](https://github.com/spf13/pflag/commit/1ce0cc6db4029d97571db82f85092fccedb572ce). - Upgrade [`github.com/coreos/go-systemd`](https://github.com/coreos/go-systemd/releases) from [**`v15`**](https://github.com/coreos/go-systemd/releases/tag/v15) to [**`v17`**](https://github.com/coreos/go-systemd/releases/tag/v17). - Upgrade [`github.com/prometheus/client_golang`](https://github.com/prometheus/client_golang/releases) from [**``prometheus/client_golang@5cec1d042``**](https://github.com/prometheus/client_golang/commit/5cec1d0429b02e4323e042eb04dafdb079ddf568) to [**`v1.0.0`**](https://github.com/prometheus/client_golang/releases/tag/v1.0.0). - Upgrade [`github.com/grpc-ecosystem/go-grpc-prometheus`](https://github.com/grpc-ecosystem/go-grpc-prometheus/releases) from [**``grpc-ecosystem/go-grpc-prometheus@0dafe0d49``**](https://github.com/grpc-ecosystem/go-grpc-prometheus/commit/0dafe0d496ea71181bf2dd039e7e3f44b6bd11a7) to [**`v1.2.0`**](https://github.com/grpc-ecosystem/go-grpc-prometheus/releases/tag/v1.2.0). 
- Upgrade [`github.com/grpc-ecosystem/grpc-gateway`](https://github.com/grpc-ecosystem/grpc-gateway/releases) from [**`v1.3.1`**](https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v1.3.1) to [**`v1.4.1`**](https://github.com/grpc-ecosystem/grpc-gateway/releases/tag/v1.4.1). - Migrate [`github.com/kr/pty`](https://github.com/kr/pty/releases) to [**`github.com/creack/pty`**](https://github.com/creack/pty/releases/tag/v1.1.7), as the latter has replaced the original module. - Upgrade [`github.com/gogo/protobuf`](https://github.com/gogo/protobuf/releases) from [**`v1.0.0`**](https://github.com/gogo/protobuf/releases/tag/v1.0.0) to [**`v1.2.1`**](https://github.com/gogo/protobuf/releases/tag/v1.2.1). ### Metrics, Monitoring See [List of metrics](https://etcd.io/docs/latest/metrics/) for all metrics per release. Note that any `etcd_debugging_*` metrics are experimental and subject to change. - Add [`etcd_snap_db_fsync_duration_seconds_count`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_snap_db_save_total_duration_seconds_bucket`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_send_success`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_send_failures`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_send_total_duration_seconds`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_receive_success`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_receive_failures`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_snapshot_receive_total_duration_seconds`](https://github.com/etcd-io/etcd/pull/9997) Prometheus metric. - Add [`etcd_network_active_peers`](https://github.com/etcd-io/etcd/pull/9762) Prometheus metric.
- Let's say `"7339c4e5e833c029"` server `/metrics` returns `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="729934363faa4a24"} 1` and `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="b548c2511513015"} 1`. This indicates that the local node `"7339c4e5e833c029"` currently has two active remote peers `"729934363faa4a24"` and `"b548c2511513015"` in a 3-node cluster. If the node `"b548c2511513015"` is down, the local node `"7339c4e5e833c029"` will show `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="729934363faa4a24"} 1` and `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="b548c2511513015"} 0`. - Add [`etcd_network_disconnected_peers_total`](https://github.com/etcd-io/etcd/pull/9762) Prometheus metric. - If a remote peer `"b548c2511513015"` is down, the local node `"7339c4e5e833c029"` server `/metrics` would return `etcd_network_disconnected_peers_total{Local="7339c4e5e833c029",Remote="b548c2511513015"} 1`, while active peer metrics will show `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="729934363faa4a24"} 1` and `etcd_network_active_peers{Local="7339c4e5e833c029",Remote="b548c2511513015"} 0`. - Add [`etcd_network_server_stream_failures_total`](https://github.com/etcd-io/etcd/pull/9760) Prometheus metric. - e.g. `etcd_network_server_stream_failures_total{API="lease-keepalive",Type="receive"} 1` - e.g. `etcd_network_server_stream_failures_total{API="watch",Type="receive"} 1` - Improve [`etcd_network_peer_round_trip_time_seconds`](https://github.com/etcd-io/etcd/pull/10155) Prometheus metric to track leader heartbeats. - Previously, it only samples the TCP connection for snapshot messages. - Increase [`etcd_network_peer_round_trip_time_seconds`](https://github.com/etcd-io/etcd/pull/9762) Prometheus metric histogram upper-bound. - Previously, highest bucket only collects requests taking 0.8192 seconds or more. - Now, highest buckets collect 0.8192 seconds, 1.6384 seconds, and 3.2768 seconds or more. 
- Add [`etcd_server_is_leader`](https://github.com/etcd-io/etcd/pull/9587) Prometheus metric. - Add [`etcd_server_id`](https://github.com/etcd-io/etcd/pull/9998) Prometheus metric. - Add [`etcd_cluster_version`](https://github.com/etcd-io/etcd/pull/10257) Prometheus metric. - Add [`etcd_server_version`](https://github.com/etcd-io/etcd/pull/8960) Prometheus metric. - To replace [Kubernetes `etcd-version-monitor`](https://github.com/etcd-io/etcd/issues/8948). - Add [`etcd_server_go_version`](https://github.com/etcd-io/etcd/pull/9957) Prometheus metric. - Add [`etcd_server_health_success`](https://github.com/etcd-io/etcd/pull/10156) Prometheus metric. - Add [`etcd_server_health_failures`](https://github.com/etcd-io/etcd/pull/10156) Prometheus metric. - Add [`etcd_server_read_indexes_failed_total`](https://github.com/etcd-io/etcd/pull/10094) Prometheus metric. - Add [`etcd_server_heartbeat_send_failures_total`](https://github.com/etcd-io/etcd/pull/9761) Prometheus metric. - Add [`etcd_server_slow_apply_total`](https://github.com/etcd-io/etcd/pull/9761) Prometheus metric. - Add [`etcd_server_slow_read_indexes_total`](https://github.com/etcd-io/etcd/pull/9897) Prometheus metric. - Add [`etcd_server_quota_backend_bytes`](https://github.com/etcd-io/etcd/pull/9820) Prometheus metric. - Use it with `etcd_mvcc_db_total_size_in_bytes` and `etcd_mvcc_db_total_size_in_use_in_bytes`. - `etcd_server_quota_backend_bytes 2.147483648e+09` means current quota size is 2 GB. - `etcd_mvcc_db_total_size_in_bytes 20480` means current physically allocated DB size is 20 KB. - `etcd_mvcc_db_total_size_in_use_in_bytes 16384` means the projected DB size after the defragment operation completes. - `etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes` is the number of bytes that can be saved on disk with a defragment operation. - Add [`etcd_mvcc_db_total_size_in_use_in_bytes`](https://github.com/etcd-io/etcd/pull/9256) Prometheus metric.
- Use it with `etcd_mvcc_db_total_size_in_bytes` and `etcd_mvcc_db_total_size_in_use_in_bytes`. - `etcd_server_quota_backend_bytes 2.147483648e+09` means current quota size is 2 GB. - `etcd_mvcc_db_total_size_in_bytes 20480` means current physically allocated DB size is 20 KB. - `etcd_mvcc_db_total_size_in_use_in_bytes 16384` means the projected DB size after the defragment operation completes. - `etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes` is the number of bytes that can be saved on disk with a defragment operation. - Add [`etcd_mvcc_db_open_read_transactions`](https://github.com/etcd-io/etcd/pull/10523/commits/ad80752715aaed449629369687c5fd30eb1bda76) Prometheus metric. - Add [`etcd_snap_fsync_duration_seconds`](https://github.com/etcd-io/etcd/pull/9762) Prometheus metric. - Add [`etcd_disk_backend_defrag_duration_seconds`](https://github.com/etcd-io/etcd/pull/9761) Prometheus metric. - Add [`etcd_mvcc_hash_duration_seconds`](https://github.com/etcd-io/etcd/pull/9761) Prometheus metric. - Add [`etcd_mvcc_hash_rev_duration_seconds`](https://github.com/etcd-io/etcd/pull/9761) Prometheus metric. - Add [`etcd_debugging_disk_backend_commit_rebalance_duration_seconds`](https://github.com/etcd-io/etcd/pull/9834) Prometheus metric. - Add [`etcd_debugging_disk_backend_commit_spill_duration_seconds`](https://github.com/etcd-io/etcd/pull/9834) Prometheus metric. - Add [`etcd_debugging_disk_backend_commit_write_duration_seconds`](https://github.com/etcd-io/etcd/pull/9834) Prometheus metric. - Add [`etcd_debugging_lease_granted_total`](https://github.com/etcd-io/etcd/pull/9778) Prometheus metric. - Add [`etcd_debugging_lease_revoked_total`](https://github.com/etcd-io/etcd/pull/9778) Prometheus metric. - Add [`etcd_debugging_lease_renewed_total`](https://github.com/etcd-io/etcd/pull/9778) Prometheus metric. - Add [`etcd_debugging_lease_ttl_total`](https://github.com/etcd-io/etcd/pull/9778) Prometheus metric.
- Add [`etcd_network_snapshot_send_inflights_total`](https://github.com/etcd-io/etcd/pull/11009) Prometheus metric. - Add [`etcd_network_snapshot_receive_inflights_total`](https://github.com/etcd-io/etcd/pull/11009) Prometheus metric. - Add [`etcd_server_snapshot_apply_in_progress_total`](https://github.com/etcd-io/etcd/pull/11009) Prometheus metric. - Add [`etcd_server_is_learner`](https://github.com/etcd-io/etcd/pull/10731) Prometheus metric. - Add [`etcd_server_learner_promote_failures`](https://github.com/etcd-io/etcd/pull/10731) Prometheus metric. - Add [`etcd_server_learner_promote_successes`](https://github.com/etcd-io/etcd/pull/10731) Prometheus metric. - Increase [`etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds`](https://github.com/etcd-io/etcd/pull/9762) Prometheus metric histogram upper-bound. - Previously, highest bucket only collects requests taking 1.024 seconds or more. - Now, highest buckets collect 1.024 seconds, 2.048 seconds, and 4.096 seconds or more. - Fix missing [`etcd_network_peer_sent_failures_total`](https://github.com/etcd-io/etcd/pull/9437) Prometheus metric count. - Fix [`etcd_debugging_server_lease_expired_total`](https://github.com/etcd-io/etcd/pull/9557) Prometheus metric. - Fix [race conditions in v2 server stat collecting](https://github.com/etcd-io/etcd/pull/9562). - Change [gRPC proxy to expose etcd server endpoint /metrics](https://github.com/etcd-io/etcd/pull/10618). - The metrics exposed via the proxy previously described the proxy itself, not the etcd cluster members. - Fix bug where [db_compaction_total_duration_milliseconds metric incorrectly measured duration as 0](https://github.com/etcd-io/etcd/pull/10646). - Deprecating `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_db_total_size_in_bytes`](https://github.com/etcd-io/etcd/pull/9819) instead. - Deprecating `etcd_debugging_mvcc_put_total` Prometheus metric (to be removed in v3.5).
Use [`etcd_mvcc_put_total`](https://github.com/etcd-io/etcd/pull/10962) instead. - Deprecating `etcd_debugging_mvcc_delete_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_delete_total`](https://github.com/etcd-io/etcd/pull/10962) instead. - Deprecating `etcd_debugging_mvcc_range_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_range_total`](https://github.com/etcd-io/etcd/pull/10968) instead. - Deprecating `etcd_debugging_mvcc_txn_total` Prometheus metric (to be removed in v3.5). Use [`etcd_mvcc_txn_total`](https://github.com/etcd-io/etcd/pull/10968) instead. ### Security, Authentication See [security doc](https://etcd.io/docs/latest/op-guide/security/) for more details. - Support TLS cipher suite whitelisting. - To block [weak cipher suites](https://github.com/etcd-io/etcd/issues/8320). - TLS handshake fails when client hello is requested with invalid cipher suites. - Add [`etcd --cipher-suites`](https://github.com/etcd-io/etcd/pull/9801) flag. - If empty, Go auto-populates the list. - Add [`etcd --host-whitelist`](https://github.com/etcd-io/etcd/pull/9372) flag, [`etcdserver.Config.HostWhitelist`](https://github.com/etcd-io/etcd/pull/9372), and [`embed.Config.HostWhitelist`](https://github.com/etcd-io/etcd/pull/9372), to prevent ["DNS Rebinding"](https://en.wikipedia.org/wiki/DNS_rebinding) attack. - Any website can simply create an authorized DNS name, and direct DNS to `"localhost"` (or any other address). Then, all HTTP endpoints of etcd server listening on `"localhost"` become accessible, thus vulnerable to [DNS rebinding attacks (CVE-2018-5702)](https://bugs.chromium.org/p/project-zero/issues/detail?id=1447#c2). - The client origin policy is enforced as follows: - If the client connection is secure via HTTPS, allow any hostname. - If the client connection is not secure and `"HostWhitelist"` is not empty, only allow HTTP requests whose Host field is listed in the whitelist.
- By default, `"HostWhitelist"` is `"*"`, which means an insecure server allows all client HTTP requests. - Note that the client origin policy is enforced whether authentication is enabled or not, for tighter controls. - When specifying hostnames, loopback addresses are not added automatically. To allow loopback interfaces, add them to the whitelist manually (e.g. `"localhost"`, `"127.0.0.1"`, etc.). - e.g. `etcd --host-whitelist example.com`, then the server will reject all HTTP requests whose Host field is not `example.com` (also rejects requests to `"localhost"`). - Support [`etcd --cors`](https://github.com/etcd-io/etcd/pull/9490) in v3 HTTP requests (gRPC gateway). - Support [`ttl` field for `etcd` Authentication JWT token](https://github.com/etcd-io/etcd/pull/8302). - e.g. `etcd --auth-token jwt,pub-key=<pub key path>,priv-key=<priv key path>,sign-method=<sign method>,ttl=5m`. - Allow empty token provider in [`etcdserver.ServerConfig.AuthToken`](https://github.com/etcd-io/etcd/pull/9369). - Fix [TLS reload](https://github.com/etcd-io/etcd/pull/9570) when [certificate SAN field only includes IP addresses but no domain names](https://github.com/etcd-io/etcd/issues/9541). - In Go, the server calls `(*tls.Config).GetCertificate` for TLS reload if and only if the server's `(*tls.Config).Certificates` field is not empty, or `(*tls.ClientHelloInfo).ServerName` is not empty with a valid SNI from the client. Previously, etcd always populated `(*tls.Config).Certificates` on the initial client TLS handshake, as non-empty. Thus, the client was always expected to supply a matching SNI in order to pass the TLS verification and to trigger `(*tls.Config).GetCertificate` to reload TLS assets.
- However, a certificate whose SAN field does [not include any domain names but only IP addresses](https://github.com/etcd-io/etcd/issues/9541) would request `*tls.ClientHelloInfo` with an empty `ServerName` field, thus failing to trigger the TLS reload on initial TLS handshake; this becomes a problem when expired certificates need to be replaced online. - Now, `(*tls.Config).Certificates` is created empty on initial TLS client handshake, first to trigger `(*tls.Config).GetCertificate`, and then to populate the rest of the certificates on every new TLS connection, even when client SNI is empty (e.g. cert only includes IPs). ### etcd server - Add [`rpctypes.ErrLeaderChanged`](https://github.com/etcd-io/etcd/pull/10094). - Now, linearizable requests with read index fail fast when there is a leadership change, instead of waiting until the context timeout. - Add [`etcd --initial-election-tick-advance`](https://github.com/etcd-io/etcd/pull/9591) flag to configure initial election tick fast-forward. - By default, `etcd --initial-election-tick-advance=true`, so the local member fast-forwards election ticks to speed up the "initial" leader election trigger. - This benefits the case of larger election ticks. For instance, a cross-datacenter deployment may require a longer election timeout of 10 seconds. If true, the local node does not need to wait up to 10 seconds. Instead, it forwards its election ticks by 8 seconds, leaving only 2 seconds before a leader election. - The major assumptions are that either the cluster has no active leader, so advancing ticks enables a faster leader election, or the cluster already has an established leader, and the rejoining follower is likely to receive heartbeats from the leader after the tick advance and before the election timeout. - However, when the network from the leader to a rejoining follower is congested and the follower does not receive a leader heartbeat within the remaining election ticks, a disruptive election has to happen, affecting cluster availability.
- Now, this can be disabled by setting `etcd --initial-election-tick-advance=false`. - Disabling this would slow down the initial bootstrap process for cross-datacenter deployments. Make tradeoffs by configuring `etcd --initial-election-tick-advance` at the cost of slow initial bootstrap. - If single-node, it advances ticks regardless. - Address [disruptive rejoining follower node](https://github.com/etcd-io/etcd/issues/9333). - Add [`etcd --pre-vote`](https://github.com/etcd-io/etcd/pull/9352) flag to enable running an additional Raft election phase. - For instance, a flaky (or rejoining) member may drop in and out, and start a campaign. This member will end up with a higher term, and ignore all incoming messages with lower terms. In this case, a new leader eventually needs to be elected, which is disruptive to cluster availability. Raft implements the Pre-Vote phase to prevent this kind of disruption. If enabled, Raft runs an additional election phase to check whether the pre-candidate can get enough votes to win an election. - `etcd --pre-vote=false` by default. - v3.5 will enable `etcd --pre-vote=true` by default. - Add `etcd --experimental-compaction-batch-limit` to [set the maximum number of revisions deleted in each compaction batch](https://github.com/etcd-io/etcd/pull/11034). - Reduced default compaction batch size from 10k revisions to 1k revisions to improve p99 latency during compactions and reduced the wait between compactions from 100ms to 10ms. - Add [`etcd --discovery-srv-name`](https://github.com/etcd-io/etcd/pull/8690) flag to support custom DNS SRV name with discovery. - If not given, etcd queries `_etcd-server-ssl._tcp.[YOUR_HOST]` and `_etcd-server._tcp.[YOUR_HOST]`. - If `etcd --discovery-srv-name="foo"`, then query `_etcd-server-ssl-foo._tcp.[YOUR_HOST]` and `_etcd-server-foo._tcp.[YOUR_HOST]`. - Useful for operating multiple etcd clusters under the same domain. - Support TLS cipher suite whitelisting.
- To block [weak cipher suites](https://github.com/etcd-io/etcd/issues/8320). - TLS handshake fails when client hello is requested with invalid cipher suites. - Add [`etcd --cipher-suites`](https://github.com/etcd-io/etcd/pull/9801) flag. - If empty, Go auto-populates the list. - Support [`etcd --cors`](https://github.com/etcd-io/etcd/pull/9490) in v3 HTTP requests (gRPC gateway). - Rename [`etcd --log-output` to `etcd --log-outputs`](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs. - **`etcd --log-output` will be deprecated in v3.5**. - Add [`etcd --logger`](https://github.com/etcd-io/etcd/pull/9572) flag to support [structured logger and multiple log outputs](https://github.com/etcd-io/etcd/issues/9438) on the server side. - **`etcd --logger=capnslog` will be deprecated in v3.5**. - The main motivation is to promote automated etcd monitoring, rather than combing through server logs after things start breaking. Future development will make etcd log as little as possible, and make etcd easier to monitor with metrics and alerts. - `etcd --logger=capnslog --log-outputs=default` is the default setting and same as previous etcd server logging format. - `--log-outputs=default` is not supported when `etcd --logger=zap` is used. - Use `etcd --logger=zap --log-outputs=stderr` instead. - Or, use `etcd --logger=zap --log-outputs=systemd/journal` to send logs to the local systemd journal. - Previously, if the etcd parent process ID (PPID) is 1 (e.g. run with systemd), `etcd --logger=capnslog --log-outputs=default` redirects server logs to the local systemd journal. And if the write to journald fails, it writes to `os.Stderr` as a fallback. - However, even with PPID 1, it can fail to dial systemd journal (e.g. run embedded etcd with Docker container). Then, [every single log write will fail](https://github.com/etcd-io/etcd/pull/9729) and fall back to `os.Stderr`, which is inefficient. - To avoid this problem, systemd journal logging must be configured manually.
  - `etcd --logger=zap --log-outputs=stderr` will log server operations in [JSON-encoded format](https://godoc.org/go.uber.org/zap#NewProductionEncoderConfig) and write logs to `os.Stderr`. Use this to override journald log redirects.
  - `etcd --logger=zap --log-outputs=stdout` will log server operations in [JSON-encoded format](https://godoc.org/go.uber.org/zap#NewProductionEncoderConfig) and write logs to `os.Stdout`. Use this to override journald log redirects.
  - `etcd --logger=zap --log-outputs=a.log` will log server operations in [JSON-encoded format](https://godoc.org/go.uber.org/zap#NewProductionEncoderConfig) and write logs to the specified file `a.log`.
  - `etcd --logger=zap --log-outputs=a.log,b.log,c.log,stdout` [writes server logs to multiple files `a.log`, `b.log` and `c.log` at the same time](https://github.com/etcd-io/etcd/pull/9579) and outputs to `os.Stdout`, in [JSON-encoded format](https://godoc.org/go.uber.org/zap#NewProductionEncoderConfig).
  - `etcd --logger=zap --log-outputs=/dev/null` will discard all server logs.
- Add [`etcd --log-level`](https://github.com/etcd-io/etcd/pull/10947) flag to support log level.
  - v3.5 will deprecate the `etcd --debug` flag in favor of `etcd --log-level=debug`.
- Add [`etcd --backend-batch-limit`](https://github.com/etcd-io/etcd/pull/10283) flag.
- Add [`etcd --backend-batch-interval`](https://github.com/etcd-io/etcd/pull/10283) flag.
- Fix [`mvcc` "unsynced" watcher restore operation](https://github.com/etcd-io/etcd/pull/9281).
  - An "unsynced" watcher is a watcher that needs to catch up with events that have already happened.
  - That is, an "unsynced" watcher is a slow watcher that was requested on an old revision.
  - The "unsynced" watcher restore operation was not correctly populating its underlying watcher group.
  - This could cause [missing events from "unsynced" watchers](https://github.com/etcd-io/etcd/issues/9086).
  - A node gets network partitioned with a watcher on a future revision, falls behind, and receives a leader snapshot after the partition heals. When applying this snapshot, etcd watch storage moves currently synced watchers to unsynced, since synced watchers might have become stale during the network partition, and resets the synced watcher group to restart watcher routines. Previously, there was a bug when moving watchers from the synced group to unsynced, so a client could miss events when its watcher had been requested against the network-partitioned node.
- Fix [`mvcc` server panic from restore operation](https://github.com/etcd-io/etcd/pull/9775).
  - Let's assume that a watcher had been requested with a future revision X and sent to node A that became network-partitioned thereafter. Meanwhile, the cluster makes progress. Then when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
  - Now, this server-side panic has been fixed.
- Fix [server panic on invalid Election Proclaim/Resign HTTP(S) requests](https://github.com/etcd-io/etcd/pull/9379).
  - Previously, malformed HTTP requests to the Election API could trigger a panic in the etcd server.
  - e.g. `curl -L http://localhost:2379/v3/election/proclaim -X POST -d '{"value":""}'`, `curl -L http://localhost:2379/v3/election/resign -X POST -d '{"value":""}'`.
- Fix [revision-based compaction retention parsing](https://github.com/etcd-io/etcd/pull/9339).
  - Previously, `etcd --auto-compaction-mode revision --auto-compaction-retention 1` was [translated to revision retention 3600000000000](https://github.com/etcd-io/etcd/issues/9337).
  - Now, `etcd --auto-compaction-mode revision --auto-compaction-retention 1` is correctly parsed as revision retention 1.
- Prevent [overflow by large `TTL` values for `Lease` `Grant`](https://github.com/etcd-io/etcd/pull/9399).
  - The `TTL` parameter to a `Grant` request is in units of seconds.
  - Leases with `TTL` values large enough to exceed `math.MaxInt64` [expire in unexpected ways](https://github.com/etcd-io/etcd/issues/9374).
  - The server now returns `rpctypes.ErrLeaseTTLTooLarge` to the client when the requested `TTL` is larger than *9,000,000,000 seconds* (which is >285 years).
  - Again, etcd `Lease` is meant for short-periodic keepalives or sessions, in the range of seconds or minutes. Not for hours or days!
- Fix [expired lease revoke](https://github.com/etcd-io/etcd/pull/10693).
  - Fix ["the key is not deleted when the bound lease expires"](https://github.com/etcd-io/etcd/issues/10686).
- Enable etcd server [`raft.Config.CheckQuorum` when starting with `ForceNewCluster`](https://github.com/etcd-io/etcd/pull/9347).
- Allow [non-WAL files in `etcd --wal-dir` directory](https://github.com/etcd-io/etcd/pull/9743).
  - Previously, existing files such as [`lost+found`](https://github.com/etcd-io/etcd/issues/7287) in the WAL directory prevented etcd server boot.
  - Now, a WAL directory that contains only `lost+found` or files that are not suffixed with `.wal` is considered uninitialized.
- Fix [`ETCD_CONFIG_FILE` env variable parsing in `etcd`](https://github.com/etcd-io/etcd/pull/10762).
- Fix [race condition in `rafthttp` transport pause/resume](https://github.com/etcd-io/etcd/pull/10826).
- Fix [server crash from creating an empty role](https://github.com/etcd-io/etcd/pull/10907).
  - Previously, creating a role with an empty name crashed the etcd server with error code `Unavailable`.
  - Now, creating a role with an empty name is not allowed, returning error code `InvalidArgument`.

### API

- Add `isLearner` field to `etcdserverpb.Member`, `etcdserverpb.MemberAddRequest` and `etcdserverpb.StatusResponse` as part of [raft learner implementation](https://github.com/etcd-io/etcd/pull/10725).
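A minimal sketch of how a client might consume the new `isLearner` field when listing members. The `Member` struct below is an illustrative stand-in for the generated `etcdserverpb.Member` type, not the real one; the point is simply that learners can now be distinguished from voters:

```go
package main

import "fmt"

// Member mirrors the fields of etcdserverpb.Member relevant here
// (illustrative stand-in, not the generated protobuf type).
type Member struct {
	ID        uint64
	Name      string
	IsLearner bool
}

// votingMembers filters out learners, which do not count toward quorum.
func votingMembers(ms []Member) []Member {
	var out []Member
	for _, m := range ms {
		if !m.IsLearner {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	cluster := []Member{
		{ID: 1, Name: "infra1"},
		{ID: 2, Name: "infra2"},
		{ID: 3, Name: "infra3", IsLearner: true}, // e.g. freshly added as a learner
	}
	fmt.Println(len(votingMembers(cluster))) // 2
}
```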
- Add `MemberPromote` RPC to the `etcdserverpb.Cluster` interface and the corresponding `MemberPromoteRequest` and `MemberPromoteResponse` as part of [raft learner implementation](https://github.com/etcd-io/etcd/pull/10725).
- Add [`snapshot`](https://github.com/etcd-io/etcd/pull/9118) package for snapshot restore/save operations (see [`godoc.org/github.com/coreos/etcd/clientv3/snapshot`](https://godoc.org/github.com/coreos/etcd/clientv3/snapshot) for more).
- Add [`watch_id` field to `etcdserverpb.WatchCreateRequest`](https://github.com/etcd-io/etcd/pull/9065) to allow user-provided watch ID to `mvcc`.
  - Corresponding `watch_id` is returned via `etcdserverpb.WatchResponse`, if any.
- Add [`fragment` field to `etcdserverpb.WatchCreateRequest`](https://github.com/etcd-io/etcd/pull/9291) to request etcd server to [split watch events](https://github.com/etcd-io/etcd/issues/9294) when the total size of events exceeds `etcd --max-request-bytes` flag value plus gRPC-overhead 512 bytes.
  - The default server-side request bytes limit is `embed.DefaultMaxRequestBytes`, which is 1.5 MiB plus gRPC-overhead 512 bytes.
  - If watch response events exceed this server-side request limit and the watch request is created with the `fragment` field set to `true`, the server will split watch events into a set of chunks, each of which is a subset of watch events below the server-side request limit.
  - Useful when the client side has limited bandwidth.
  - For example, a watch response contains 10 events, where each event is 1 MiB. And the server `etcd --max-request-bytes` flag value is 1 MiB. Then, the server will send 10 separate fragmented events to the client.
  - For example, a watch response contains 5 events, where each event is 2 MiB. And the server `etcd --max-request-bytes` flag value is 1 MiB and `clientv3.Config.MaxCallRecvMsgSize` is 1 MiB. Then, the server will try to send 5 separate fragmented events to the client, and the client will error with `"code = ResourceExhausted desc = grpc: received message larger than max (...)"`.
  - Client must implement fragmented watch event merge (which `clientv3` does in etcd v3.4).
- Add [`raftAppliedIndex` field to `etcdserverpb.StatusResponse`](https://github.com/etcd-io/etcd/pull/9176) for current Raft applied index.
- Add [`errors` field to `etcdserverpb.StatusResponse`](https://github.com/etcd-io/etcd/pull/9206) for server-side error.
  - e.g. `"etcdserver: no leader", "NOSPACE", "CORRUPT"`
- Add [`dbSizeInUse` field to `etcdserverpb.StatusResponse`](https://github.com/etcd-io/etcd/pull/9256) for actual DB size after compaction.
- Add [`WatchRequest.WatchProgressRequest`](https://github.com/etcd-io/etcd/pull/9869).
  - To manually trigger broadcasting watch progress event (empty watch response with latest header) to all associated watch streams.
  - Think of it as `WithProgressNotify` that can be triggered manually.

Note: **v3.5 will deprecate `etcd --log-package-levels` flag for `capnslog`**; `etcd --logger=zap --log-outputs=stderr` will be the default. **v3.5 will deprecate `[CLIENT-URL]/config/local/log` endpoint.**

### Package `embed`

- Add [`embed.Config.CipherSuites`](https://github.com/etcd-io/etcd/pull/9801) to specify a list of supported cipher suites for TLS handshake between client/server and peers.
  - If empty, Go auto-populates the list.
  - Both `embed.Config.ClientTLSInfo.CipherSuites` and `embed.Config.CipherSuites` cannot be non-empty at the same time.
  - If not empty, specify either `embed.Config.ClientTLSInfo.CipherSuites` or `embed.Config.CipherSuites`.
- Add [`embed.Config.InitialElectionTickAdvance`](https://github.com/etcd-io/etcd/pull/9591) to enable/disable initial election tick fast-forward.
  - `embed.NewConfig()` returns `*embed.Config` with `InitialElectionTickAdvance` set to true by default.
- Define [`embed.CompactorModePeriodic`](https://godoc.org/github.com/etcd-io/etcd/embed#pkg-variables) for `compactor.ModePeriodic`.
- Define [`embed.CompactorModeRevision`](https://godoc.org/github.com/etcd-io/etcd/embed#pkg-variables) for `compactor.ModeRevision`.
- Change [`embed.Config.CorsInfo` in `*cors.CORSInfo` type to `embed.Config.CORS` in `map[string]struct{}` type](https://github.com/etcd-io/etcd/pull/9490).
- Remove [`embed.Config.SetupLogging`](https://github.com/etcd-io/etcd/pull/9572).
  - Now the logger is set up automatically based on [`embed.Config.Logger`, `embed.Config.LogOutputs`, `embed.Config.Debug` fields](https://github.com/etcd-io/etcd/pull/9572).
- Add [`embed.Config.Logger`](https://github.com/etcd-io/etcd/pull/9518) to support [structured logger `zap`](https://github.com/uber-go/zap) on the server side.
- Add [`embed.Config.LogLevel`](https://github.com/etcd-io/etcd/pull/10947).
- Rename `embed.Config.SnapCount` field to [`embed.Config.SnapshotCount`](https://github.com/etcd-io/etcd/pull/9745), to be consistent with the flag name `etcd --snapshot-count`.
- Rename [**`embed.Config.LogOutput`** to **`embed.Config.LogOutputs`**](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs.
- Change [**`embed.Config.LogOutputs`** type from `string` to `[]string`](https://github.com/etcd-io/etcd/pull/9579) to support multiple log outputs.
- Add [`embed.Config.BackendBatchLimit`](https://github.com/etcd-io/etcd/pull/10283) field.
- Add [`embed.Config.BackendBatchInterval`](https://github.com/etcd-io/etcd/pull/10283) field.
- Make [`embed.DefaultEnableV2` default to `false`](https://github.com/etcd-io/etcd/pull/10935).

### Package `pkg/adt`

- Change [`pkg/adt.IntervalTree` from `struct` to `interface`](https://github.com/etcd-io/etcd/pull/10959).
  - See [`pkg/adt` README](https://github.com/etcd-io/etcd/tree/main/pkg/adt) and [`pkg/adt` godoc](https://godoc.org/go.etcd.io/etcd/pkg/adt).
- Improve [`pkg/adt.IntervalTree` test coverage](https://github.com/etcd-io/etcd/pull/10959).
  - See [`pkg/adt` README](https://github.com/etcd-io/etcd/tree/main/pkg/adt) and [`pkg/adt` godoc](https://godoc.org/go.etcd.io/etcd/pkg/adt).
- Fix [Red-Black tree to maintain black-height property](https://github.com/etcd-io/etcd/pull/10978).
  - Previously, the delete operation violated the [black-height property](https://github.com/etcd-io/etcd/issues/10965).

### Package `integration`

- Add [`CLUSTER_DEBUG` to enable test cluster logging](https://github.com/etcd-io/etcd/pull/9678).
  - Deprecated `capnslog` in integration tests.

### client v3

- Add [`MemberAddAsLearner`](https://github.com/etcd-io/etcd/pull/10725) to the `Clientv3.Cluster` interface. This API is used to add a learner member to the etcd cluster.
- Add [`MemberPromote`](https://github.com/etcd-io/etcd/pull/10727) to the `Clientv3.Cluster` interface. This API is used to promote a learner member in the etcd cluster.
- Client may receive [`rpctypes.ErrLeaderChanged`](https://github.com/etcd-io/etcd/pull/10094) from server.
  - Now, linearizable requests with read index fail fast when there is a leadership change, instead of waiting until the context times out.
- Add [`WithFragment` `OpOption`](https://github.com/etcd-io/etcd/pull/9291) to support [watch events fragmentation](https://github.com/etcd-io/etcd/issues/9294) when the total size of events exceeds `etcd --max-request-bytes` flag value plus gRPC-overhead 512 bytes.
  - Watch fragmentation is disabled by default.
  - The default server-side request bytes limit is `embed.DefaultMaxRequestBytes`, which is 1.5 MiB plus gRPC-overhead 512 bytes.
  - If watch response events exceed this server-side request limit and the watch request is created with the `fragment` field set to `true`, the server will split watch events into a set of chunks, each of which is a subset of watch events below the server-side request limit.
  - Useful when the client side has limited bandwidth.
  - For example, a watch response contains 10 events, where each event is 1 MiB. And the server `etcd --max-request-bytes` flag value is 1 MiB.
    Then, the server will send 10 separate fragmented events to the client.
  - For example, a watch response contains 5 events, where each event is 2 MiB. And the server `etcd --max-request-bytes` flag value is 1 MiB and `clientv3.Config.MaxCallRecvMsgSize` is 1 MiB. Then, the server will try to send 5 separate fragmented events to the client, and the client will error with `"code = ResourceExhausted desc = grpc: received message larger than max (...)"`.
- Add [`Watcher.RequestProgress` method](https://github.com/etcd-io/etcd/pull/9869).
  - To manually trigger broadcasting watch progress event (empty watch response with latest header) to all associated watch streams.
  - Think of it as `WithProgressNotify` that can be triggered manually.
- Fix [lease keepalive interval updates when response queue is full](https://github.com/etcd-io/etcd/pull/9952).
  - If `<-chan *clientv3.LeaseKeepAliveResponse` from `clientv3.Lease.KeepAlive` was never consumed or the channel was full, the client was [sending a keepalive request every 500ms](https://github.com/etcd-io/etcd/issues/9911) instead of the expected rate of once every "TTL / 3" duration.
- Change [snapshot file permissions](https://github.com/etcd-io/etcd/pull/9977): On Linux, the snapshot file changes from readable by all (mode 0644) to readable by the user only (mode 0600).
- Client may choose to send keepalive pings to the server using [`PermitWithoutStream`](https://github.com/etcd-io/etcd/pull/10146).
  - By setting `PermitWithoutStream` to true, the client can send keepalive pings to the server without any active streams (RPCs). In other words, it allows sending keepalive pings with unary or simple RPC calls.
  - `PermitWithoutStream` is set to false by default.
- Fix logic to [release the lock key if cancelled](https://github.com/etcd-io/etcd/pull/10153) in the `clientv3/concurrency` package.
- Fix [`(*Client).Endpoints()` method race condition](https://github.com/etcd-io/etcd/pull/10595).
- Deprecated [`grpc.ErrClientConnClosing`](https://github.com/etcd-io/etcd/pull/10981).
  - `clientv3` and `proxy/grpcproxy` no longer return `grpc.ErrClientConnClosing`.
  - `grpc.ErrClientConnClosing` has been [deprecated in gRPC >= 1.10](https://github.com/grpc/grpc-go/pull/1854).
  - Use `clientv3.IsConnCanceled(error)` or `google.golang.org/grpc/status.FromError(error)` instead.

### etcdctl v3

- Make [`ETCDCTL_API=3 etcdctl` default](https://github.com/etcd-io/etcd/issues/9600).
  - Now, `etcdctl set foo bar` must be `ETCDCTL_API=2 etcdctl set foo bar`.
  - Now, `ETCDCTL_API=3 etcdctl put foo bar` can be just `etcdctl put foo bar`.
- Add [`etcdctl member add --learner` and `etcdctl member promote`](https://github.com/etcd-io/etcd/pull/10725) to add and promote raft learner members in an etcd cluster.
- Add [`etcdctl --password`](https://github.com/etcd-io/etcd/pull/9730) flag.
  - To support the [`:` character in user names](https://github.com/etcd-io/etcd/issues/9691).
  - e.g. `etcdctl --user user --password password get foo`
- Add [`etcdctl user add --new-user-password`](https://github.com/etcd-io/etcd/pull/9730) flag.
- Add [`etcdctl check datascale`](https://github.com/etcd-io/etcd/pull/9185) command.
- Add [`etcdctl check datascale --auto-compact, --auto-defrag`](https://github.com/etcd-io/etcd/pull/9351) flags.
- Add [`etcdctl check perf --auto-compact, --auto-defrag`](https://github.com/etcd-io/etcd/pull/9330) flags.
- Add [`etcdctl defrag --cluster`](https://github.com/etcd-io/etcd/pull/9390) flag.
- Add ["raft applied index" field to `endpoint status`](https://github.com/etcd-io/etcd/pull/9176).
- Add ["errors" field to `endpoint status`](https://github.com/etcd-io/etcd/pull/9206).
- Add [`etcdctl endpoint health --write-out` support](https://github.com/etcd-io/etcd/pull/9540).
  - Previously, [`etcdctl endpoint health --write-out json` did not work](https://github.com/etcd-io/etcd/issues/9532).
- Add [missing newline in `etcdctl endpoint health`](https://github.com/etcd-io/etcd/pull/10793).
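The motivation for the separate `--password` flag above is easy to see from a sketch of combined `user:password` parsing (illustrative only, not etcdctl's actual code): splitting on the first `:` makes a user name that itself contains `:` impossible to express.

```go
package main

import (
	"fmt"
	"strings"
)

// splitUserPassword models combined `--user name:password` parsing.
// A ':' inside the user name is indistinguishable from the separator,
// which is why a dedicated password flag is needed.
func splitUserPassword(arg string) (user, password string) {
	if i := strings.Index(arg, ":"); i >= 0 {
		return arg[:i], arg[i+1:]
	}
	return arg, ""
}

func main() {
	// Intended user "host:admin" with password "secret" gets mis-split:
	user, pass := splitUserPassword("host:admin:secret")
	fmt.Println(user, pass) // host admin:secret
}
```

With a separate `--password` flag, the user name is taken verbatim and no splitting is necessary.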
- Fix [`etcdctl watch [key] [range_end] -- [exec-command…]`](https://github.com/etcd-io/etcd/pull/9688) parsing.
  - Previously, `ETCDCTL_API=3 etcdctl watch foo -- echo watch event received` panicked.
- Fix [`etcdctl move-leader` command for TLS-enabled endpoints](https://github.com/etcd-io/etcd/pull/9807).
- Add [`progress` command to `etcdctl watch --interactive`](https://github.com/etcd-io/etcd/pull/9869).
  - To manually trigger broadcasting watch progress event (empty watch response with latest header) to all associated watch streams.
  - Think of it as `WithProgressNotify` that can be triggered manually.
- Add [timeout](https://github.com/etcd-io/etcd/pull/10301) to `etcdctl snapshot save`.
  - Users can specify the timeout of the `etcdctl snapshot save` command using the `--command-timeout` flag.
- Fix etcdctl to [strip out insecure endpoints from DNS SRV records when using discovery](https://github.com/etcd-io/etcd/pull/10443).

### gRPC proxy

- Fix [etcd server panic from restore operation](https://github.com/etcd-io/etcd/pull/9775).
  - Let's assume that a watcher had been requested with a future revision X and sent to node A that became network-partitioned thereafter. Meanwhile, the cluster makes progress. Then when the partition gets removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, **etcd server panicked** during the snapshot restore operation.
  - In particular, gRPC proxy was affected, since it detects a leader loss with a key `"proxy-namespace__lostleader"` and a watch revision `"int64(math.MaxInt64 - 2)"`.
  - Now, this server-side panic has been fixed.
- Fix [memory leak in cache layer](https://github.com/etcd-io/etcd/pull/10327).
- Change [gRPC proxy to expose etcd server endpoint /metrics](https://github.com/etcd-io/etcd/pull/10618).
  - The metrics exposed via the proxy were previously those of the proxy itself, not of the etcd cluster members.
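The proxy's leader-loss detection above hinges on sentinel values. A toy predicate over those sentinels (the key and revision come from the text above; the check itself is illustrative, not the proxy's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// Sentinel values the gRPC proxy uses to watch for leader loss,
// as described in the changelog entry above.
const lostLeaderKey = "proxy-namespace__lostleader"

var lostLeaderRev = int64(math.MaxInt64 - 2)

// isLostLeaderWatch reports whether a watch carries the proxy's
// leader-loss sentinel key and revision (illustrative sketch).
func isLostLeaderWatch(key string, rev int64) bool {
	return key == lostLeaderKey && rev == lostLeaderRev
}

func main() {
	fmt.Println(isLostLeaderWatch(lostLeaderKey, int64(math.MaxInt64-2))) // true
	fmt.Println(isLostLeaderWatch("foo", 100))                            // false
}
```

The far-future revision is exactly why the restore-panic fix matters: any snapshot's latest revision is lower than this sentinel watch revision.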
### gRPC gateway

- Replace [gRPC gateway](https://github.com/grpc-ecosystem/grpc-gateway) endpoint `/v3beta` with [`/v3`](https://github.com/etcd-io/etcd/pull/9298).
  - Deprecated [`/v3alpha`](https://github.com/etcd-io/etcd/pull/9298).
  - To deprecate [`/v3beta`](https://github.com/etcd-io/etcd/issues/9189) in v3.5.
  - In v3.4, `curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` still works as a fallback to `curl -L http://localhost:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'`, but `curl -L http://localhost:2379/v3beta/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` won't work in v3.5. Use `curl -L http://localhost:2379/v3/kv/put -X POST -d '{"key": "Zm9v", "value": "YmFy"}'` instead.
- Add API endpoints [`/{v3beta,v3}/lease/leases, /{v3beta,v3}/lease/revoke, /{v3beta,v3}/lease/timetolive`](https://github.com/etcd-io/etcd/pull/9450).
  - To deprecate [`/{v3beta,v3}/kv/lease/leases, /{v3beta,v3}/kv/lease/revoke, /{v3beta,v3}/kv/lease/timetolive`](https://github.com/etcd-io/etcd/issues/9430) in v3.5.
- Support [`etcd --cors`](https://github.com/etcd-io/etcd/pull/9490) in v3 HTTP requests (gRPC gateway).

### Package `raft`

- Fix [deadlock during PreVote migration process](https://github.com/etcd-io/etcd/pull/8525).
- Add [`raft.ErrProposalDropped`](https://github.com/etcd-io/etcd/pull/9067).
  - Now [`(r *raft) Step` returns `raft.ErrProposalDropped`](https://github.com/etcd-io/etcd/pull/9137) if a proposal has been ignored.
  - e.g. a node is removed from cluster, or [`raftpb.MsgProp` arrives at current leader while there is an ongoing leadership transfer](https://github.com/etcd-io/etcd/issues/8975).
- Improve [Raft `becomeLeader` and `stepLeader`](https://github.com/etcd-io/etcd/pull/9073) by keeping track of the latest `pb.EntryConfChange` index.
  - Previously, a `pendingConf` boolean field was maintained by scanning the entire tail of the log, which could delay heartbeat sends.
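The `becomeLeader`/`stepLeader` improvement above replaces a scan of the log tail with an incrementally maintained index. A simplified model of the idea (entry types and index tracking only; not the `raft` package's actual code):

```go
package main

import "fmt"

type entryType int

const (
	entryNormal entryType = iota
	entryConfChange
)

// leader tracks the index of the latest config-change entry as entries
// are appended, instead of re-scanning the log tail on demand.
type leader struct {
	lastIndex        uint64
	pendingConfIndex uint64
}

func (l *leader) appendEntry(t entryType) {
	l.lastIndex++
	if t == entryConfChange {
		l.pendingConfIndex = l.lastIndex
	}
}

// hasPendingConf is now an O(1) check against the applied index; the
// old pendingConf boolean required scanning entries in the log tail.
func (l *leader) hasPendingConf(applied uint64) bool {
	return l.pendingConfIndex > applied
}

func main() {
	var l leader
	l.appendEntry(entryNormal)
	l.appendEntry(entryConfChange) // index 2
	l.appendEntry(entryNormal)
	fmt.Println(l.hasPendingConf(1), l.hasPendingConf(2)) // true false
}
```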
- Fix [missing learner nodes on `(n *node) ApplyConfChange`](https://github.com/etcd-io/etcd/pull/9116).
- Add [`raft.Config.MaxUncommittedEntriesSize`](https://github.com/etcd-io/etcd/pull/10167) to limit the total size of the uncommitted entries in bytes.
  - Once exceeded, raft returns the `raft.ErrProposalDropped` error.
  - Prevent [unbounded Raft log growth](https://github.com/cockroachdb/cockroach/issues/27772).
  - There was a bug in [PR#10167](https://github.com/etcd-io/etcd/pull/10167), fixed via [PR#10199](https://github.com/etcd-io/etcd/pull/10199).
- Add [`raft.Ready.CommittedEntries` pagination using `raft.Config.MaxSizePerMsg`](https://github.com/etcd-io/etcd/pull/9982).
  - This prevents out-of-memory errors if the raft log has become very large and is committed all at once.
  - Fix [correctness bug in CommittedEntries pagination](https://github.com/etcd-io/etcd/pull/10063).
- Optimize [message send flow control](https://github.com/etcd-io/etcd/pull/9985).
  - The leader now sends more append entries if it has more non-empty entries to send after updating flow control information.
  - Now, Raft allows multiple in-flight append messages.
- Optimize [memory allocation when boxing slice in `maybeCommit`](https://github.com/etcd-io/etcd/pull/10679).
  - By boxing a heap-allocated slice header instead of the slice header on the stack, we can avoid an allocation when passing through `sort.Interface`.
- Avoid [memory allocation in Raft entry `String` method](https://github.com/etcd-io/etcd/pull/10680).
- Avoid [multiple memory allocations when merging stable and unstable log](https://github.com/etcd-io/etcd/pull/10684).
- Extract [progress tracking into own component](https://github.com/etcd-io/etcd/pull/10683).
  - Add [package `raft/tracker`](https://github.com/etcd-io/etcd/pull/10807).
  - Optimize [string representation of `Progress`](https://github.com/etcd-io/etcd/pull/10882).
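A simplified model of the `CommittedEntries` pagination described above: entries are chunked so each batch stays within a byte budget, while a single oversized entry still gets its own batch so the log always makes progress (illustrative sketch, not the `raft` package's implementation):

```go
package main

import "fmt"

// paginate splits entry sizes into batches whose totals stay within
// maxSize. An entry larger than maxSize is emitted alone rather than
// dropped, so application always makes progress.
func paginate(sizes []int, maxSize int) [][]int {
	var pages [][]int
	var page []int
	total := 0
	for _, s := range sizes {
		// Flush the current batch if adding this entry would exceed
		// the budget; a non-empty batch guarantees forward progress.
		if len(page) > 0 && total+s > maxSize {
			pages = append(pages, page)
			page, total = nil, 0
		}
		page = append(page, s)
		total += s
	}
	if len(page) > 0 {
		pages = append(pages, page)
	}
	return pages
}

func main() {
	// Entries of 4, 4, 4, 10, 1 bytes with an 8-byte budget:
	// [4 4], [4], [10], [1] — the 10-byte entry gets its own batch.
	fmt.Println(len(paginate([]int{4, 4, 4, 10, 1}, 8))) // 4
}
```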
- Make [relationship between `node` and `RawNode` explicit](https://github.com/etcd-io/etcd/pull/10803).
- Prevent [learners from becoming leader](https://github.com/etcd-io/etcd/pull/10822).
- Add [package `raft/quorum` to reason about committed indexes as well as vote outcomes for both majority and joint quorums](https://github.com/etcd-io/etcd/pull/10779).
  - Bundle [Voters and Learners into `raft/tracker.Config` struct](https://github.com/etcd-io/etcd/pull/10865).
  - Use [membership sets in progress tracking](https://github.com/etcd-io/etcd/pull/10779).
  - Implement [joint quorum computation](https://github.com/etcd-io/etcd/pull/10779).
- Refactor [`raft/node.go` to centralize configuration change application](https://github.com/etcd-io/etcd/pull/10865).
- Allow [voter to become learner through snapshot](https://github.com/etcd-io/etcd/pull/10864).
- Add [package `raft/confchange` to internally support joint consensus](https://github.com/etcd-io/etcd/pull/10779).
- Use [`RawNode` for node's event loop](https://github.com/etcd-io/etcd/pull/10892).
- Add [`RawNode.Bootstrap` method](https://github.com/etcd-io/etcd/pull/10892).
- Add [`raftpb.ConfChangeV2` to use joint quorums](https://github.com/etcd-io/etcd/pull/10914).
  - `raftpb.ConfChange` continues to work as today: it allows carrying out a single configuration change. A `pb.ConfChange` proposal gets added to the Raft log as such and is thus also observed by the app during Ready handling, and fed back to ApplyConfChange.
  - `raftpb.ConfChangeV2` allows joint configuration changes but will continue to carry out configuration changes in "one phase" (i.e. without ever entering a joint config) when this is possible.
  - `raftpb.ConfChangeV2` messages initiate configuration changes. They support both the simple "one at a time" membership change protocol and full Joint Consensus allowing for arbitrary changes in membership.
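A simplified model of the joint-quorum computation introduced above: under joint consensus, an index counts as committed only when a majority of *both* the outgoing and the incoming configuration has replicated it (sketch only; the real implementation lives in `raft/quorum`):

```go
package main

import (
	"fmt"
	"sort"
)

// majorityCommitted returns the highest index replicated on a strict
// majority of the given voters, from their match indexes.
func majorityCommitted(match []uint64) uint64 {
	s := append([]uint64(nil), match...)
	sort.Slice(s, func(i, j int) bool { return s[i] > s[j] })
	// In descending order, s[len/2] is held by a strict majority.
	return s[len(s)/2]
}

// jointCommitted is the minimum of the two majority-commit indexes:
// both configurations must have committed the index.
func jointCommitted(outgoing, incoming []uint64) uint64 {
	a, b := majorityCommitted(outgoing), majorityCommitted(incoming)
	if a < b {
		return a
	}
	return b
}

func main() {
	// Outgoing config commits up to 10, incoming only up to 7,
	// so the joint quorum commits 7.
	fmt.Println(jointCommitted([]uint64{10, 10, 5}, []uint64{10, 7, 7})) // 7
}
```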
- Change [`raftpb.ConfState.Nodes` to `raftpb.ConfState.Voters`](https://github.com/etcd-io/etcd/pull/10914).
- Allow [learners to vote, but still learners do not count in quorum](https://github.com/etcd-io/etcd/pull/10998).
  - This is necessary when a learner has been promoted (i.e. is now a voter) but has not yet learned about it.
- Fix [restoring joint consensus](https://github.com/etcd-io/etcd/pull/11003).
- Visit [`Progress` in stable order](https://github.com/etcd-io/etcd/pull/11004).
- Proactively [probe newly added followers](https://github.com/etcd-io/etcd/pull/11037).
  - The general expectation in `tracker.Progress.Next == c.LastIndex` is that the follower has no log at all (and will thus likely need a snapshot), though the app may have applied a snapshot out of band before adding the replica (thus making the first index the better choice).
  - Previously, when the leader applied a new configuration that added voters, it would not immediately probe these voters, delaying when they would be caught up.

### Package `wal`

- Add [`Verify` function to perform corruption check on WAL contents](https://github.com/etcd-io/etcd/pull/10603).
- Fix [`wal` directory cleanup on creation failures](https://github.com/etcd-io/etcd/pull/10689).

### Tooling

- Add [`etcd-dump-logs --entry-type`](https://github.com/etcd-io/etcd/pull/9628) flag to support WAL log filtering by entry type.
- Add [`etcd-dump-logs --stream-decoder`](https://github.com/etcd-io/etcd/pull/9790) flag to support custom decoder.
- Add [`SHA256SUMS`](https://github.com/etcd-io/etcd/pull/11087) file to release assets.
  - etcd maintainers are a distributed team; this change allows releases to be cut and validated without requiring a signing key.

### Go

- Require [*Go 1.12+*](https://github.com/etcd-io/etcd/pull/10045).
- Compile with [*Go 1.12.9*](https://golang.org/doc/devel/release.html#go1.12) including [*Go 1.12.8*](https://groups.google.com/d/msg/golang-announce/65QixT3tcmg/DrFiG6vvCwAJ) security fixes.

### Dockerfile

- [Rebase etcd image from Alpine to Debian](https://github.com/etcd-io/etcd/pull/10805) to improve security and maintenance effort for etcd release.

<hr>
rnaveiras/etcd
CHANGELOG-3.4.md
Markdown
apache-2.0
92,598
#ifdef __cplusplus
extern "C" {
#endif

#include <inttypes.h>
#include <stdlib.h>

#include <libxml.h>
#include <libxml/relaxng.h>
#include <libxml/xmlerror.h>

#include <libhfuzz/libhfuzz.h>

/* Sink for the serialized document; we only care about exercising the
 * parser and the serializer, not about the output itself. */
FILE* null_file = NULL;

int LLVMFuzzerInitialize(int* argc, char*** argv) {
    null_file = fopen("/dev/null", "w");
    if (null_file == NULL) {
        /* Without a sink, xmlDocFormatDump() below cannot be called safely. */
        abort();
    }
    return 0;
}

int LLVMFuzzerTestOneInput(const uint8_t* buf, size_t len) {
    /* Parse the fuzzer-provided buffer in recovery mode, with network
     * access disabled so external entities are never fetched. */
    xmlDocPtr p = xmlReadMemory((const char*)buf, len, "http://www.google.com", "UTF-8",
        XML_PARSE_RECOVER | XML_PARSE_NONET);
    if (!p) {
        return 0;
    }

    /* Exercise the serializer by dumping the parsed document to /dev/null. */
    xmlDocFormatDump(null_file, p, 1);
    xmlFreeDoc(p);

    return 0;
}

#ifdef __cplusplus
}
#endif
anestisb/honggfuzz
examples/libxml2/persistent-xml2.c
C
apache-2.0
656
package org.seamoo.gaeForTest; public class PrivateCtorClass { private String privateField; private String publicField; private PrivateCtorClass() { this.publicField = "0"; } private PrivateCtorClass(String[] arguments) { this.publicField = "1"; } public void setPublicField(String publicField) { this.publicField = publicField; } public String getPublicField() { return publicField; } }
phuongnd08/maven-geaForTest-plugin
src/test/java/org/seamoo/gaeForTest/PrivateCtorClass.java
Java
apache-2.0
411
# Peucedanum lativittatum Fisch., C.A.Mey. & Avé-Lall. SPECIES #### Status ACCEPTED #### According to The Catalogue of Life, 3rd January 2011 #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Apiales/Apiaceae/Peucedanum/Peucedanum lativittatum/README.md
Markdown
apache-2.0
211
# Volvariopsis alabamensis Murrill SPECIES #### Status ACCEPTED #### According to Index Fungorum #### Published in null #### Original name Volvariopsis alabamensis Murrill ### Remarks null
mdoering/backbone
life/Fungi/Basidiomycota/Agaricomycetes/Agaricales/Pluteaceae/Volvariopsis/Volvariopsis alabamensis/README.md
Markdown
apache-2.0
193
# Desmotrichum grandiflorum Blume SPECIES #### Status SYNONYM #### According to The Catalogue of Life, 3rd January 2011 #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Liliopsida/Asparagales/Orchidaceae/Dendrobium/Dendrobium conspicuum/ Syn. Desmotrichum grandiflorum/README.md
Markdown
apache-2.0
188
# Elettariopsis smithiae Y.K.Kam SPECIES #### Status ACCEPTED #### According to The Catalogue of Life, 3rd January 2011 #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Liliopsida/Zingiberales/Zingiberaceae/Elettariopsis/Elettariopsis smithiae/README.md
Markdown
apache-2.0
188
# Rhamnus polymorpha f. sylvatica Loefgr. FORM #### Status ACCEPTED #### According to International Plant Names Index #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Rosales/Rhamnaceae/Rhamnus/Rhamnus polymorpha/Rhamnus polymorpha sylvatica/README.md
Markdown
apache-2.0
186
# Walafrida articulata Rolfe SPECIES #### Status ACCEPTED #### According to International Plant Names Index #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Lamiales/Scrophulariaceae/Walafrida/Walafrida articulata/README.md
Markdown
apache-2.0
176
# Juncus camptotropus V.I.Krecz. SPECIES #### Status ACCEPTED #### According to International Plant Names Index #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Liliopsida/Poales/Juncaceae/Juncus/Juncus camptotropus/README.md
Markdown
apache-2.0
180
# Navarretia klickitatensis Suksd. SPECIES #### Status ACCEPTED #### According to International Plant Names Index #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Ericales/Polemoniaceae/Navarretia/Navarretia klickitatensis/README.md
Markdown
apache-2.0
182
# Leucothoë ambigua var. tomentella Meisn. in Mart. VARIETY #### Status ACCEPTED #### According to International Plant Names Index #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Ericales/Ericaceae/Leucothoë/Leucothoë ambigua/Leucothoë ambigua tomentella/README.md
Markdown
apache-2.0
200
# Acrocorona Haeckel, 1881 GENUS #### Status ACCEPTED #### According to Interim Register of Marine and Nonmarine Genera #### Published in Jena. Z. , 15, 430. #### Original name null ### Remarks null
mdoering/backbone
life/Protozoa/Sarcomastigophora/Acrocorona/README.md
Markdown
apache-2.0
203
# Stellaria silvatica (Bég.) Maguire SPECIES #### Status SYNONYM #### According to The Catalogue of Life, 3rd January 2011 #### Published in null #### Original name null ### Remarks null
mdoering/backbone
life/Plantae/Magnoliophyta/Magnoliopsida/Caryophyllales/Caryophyllaceae/Stellaria/Stellaria corei/ Syn. Stellaria silvatica/README.md
Markdown
apache-2.0
192
# Millioudodinium L.E. Stover & W.R. Evitt, 1978 GENUS

#### Status
ACCEPTED

#### According to
Interim Register of Marine and Nonmarine Genera

#### Published in
Stanford Univ. Publs Geol. Sci. 15: 173.

#### Original name
null

### Remarks
null
mdoering/backbone
life/Protozoa/Dinophyta/Dinophyceae/Peridiniales/Gonyaulacaceae/Millioudodinium/README.md
Markdown
apache-2.0
246
/* * Copyright 2014-2019 Real Logic Ltd. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package io.aeron.archive; import io.aeron.archive.client.ArchiveException; import io.aeron.archive.codecs.*; import io.aeron.logbuffer.*; import org.agrona.DirectBuffer; import org.agrona.collections.ArrayUtil; class ControlRequestAdapter implements FragmentHandler { private final ControlRequestListener listener; private final MessageHeaderDecoder headerDecoder = new MessageHeaderDecoder(); private final ConnectRequestDecoder connectRequestDecoder = new ConnectRequestDecoder(); private final CloseSessionRequestDecoder closeSessionRequestDecoder = new CloseSessionRequestDecoder(); private final StartRecordingRequestDecoder startRecordingRequestDecoder = new StartRecordingRequestDecoder(); private final StopRecordingRequestDecoder stopRecordingRequestDecoder = new StopRecordingRequestDecoder(); private final ReplayRequestDecoder replayRequestDecoder = new ReplayRequestDecoder(); private final StopReplayRequestDecoder stopReplayRequestDecoder = new StopReplayRequestDecoder(); private final ListRecordingsRequestDecoder listRecordingsRequestDecoder = new ListRecordingsRequestDecoder(); private final ListRecordingsForUriRequestDecoder listRecordingsForUriRequestDecoder = new ListRecordingsForUriRequestDecoder(); private final ListRecordingRequestDecoder listRecordingRequestDecoder = new ListRecordingRequestDecoder(); private final ExtendRecordingRequestDecoder extendRecordingRequestDecoder = new 
ExtendRecordingRequestDecoder(); private final RecordingPositionRequestDecoder recordingPositionRequestDecoder = new RecordingPositionRequestDecoder(); private final TruncateRecordingRequestDecoder truncateRecordingRequestDecoder = new TruncateRecordingRequestDecoder(); private final StopRecordingSubscriptionRequestDecoder stopRecordingSubscriptionRequestDecoder = new StopRecordingSubscriptionRequestDecoder(); private final StopPositionRequestDecoder stopPositionRequestDecoder = new StopPositionRequestDecoder(); private final FindLastMatchingRecordingRequestDecoder findLastMatchingRecordingRequestDecoder = new FindLastMatchingRecordingRequestDecoder(); private final ListRecordingSubscriptionsRequestDecoder listRecordingSubscriptionsRequestDecoder = new ListRecordingSubscriptionsRequestDecoder(); ControlRequestAdapter(final ControlRequestListener listener) { this.listener = listener; } @SuppressWarnings("MethodLength") public void onFragment(final DirectBuffer buffer, final int offset, final int length, final Header header) { headerDecoder.wrap(buffer, offset); final int schemaId = headerDecoder.schemaId(); if (schemaId != MessageHeaderDecoder.SCHEMA_ID) { throw new ArchiveException("expected schemaId=" + MessageHeaderDecoder.SCHEMA_ID + ", actual=" + schemaId); } final int templateId = headerDecoder.templateId(); switch (templateId) { case ConnectRequestDecoder.TEMPLATE_ID: { connectRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onConnect( connectRequestDecoder.correlationId(), connectRequestDecoder.responseStreamId(), connectRequestDecoder.version(), connectRequestDecoder.responseChannel()); break; } case CloseSessionRequestDecoder.TEMPLATE_ID: { closeSessionRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onCloseSession(closeSessionRequestDecoder.controlSessionId()); break; } case 
StartRecordingRequestDecoder.TEMPLATE_ID: { startRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onStartRecording( startRecordingRequestDecoder.controlSessionId(), startRecordingRequestDecoder.correlationId(), startRecordingRequestDecoder.streamId(), startRecordingRequestDecoder.channel(), startRecordingRequestDecoder.sourceLocation()); break; } case StopRecordingRequestDecoder.TEMPLATE_ID: { stopRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onStopRecording( stopRecordingRequestDecoder.controlSessionId(), stopRecordingRequestDecoder.correlationId(), stopRecordingRequestDecoder.streamId(), stopRecordingRequestDecoder.channel()); break; } case ReplayRequestDecoder.TEMPLATE_ID: { replayRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onStartReplay( replayRequestDecoder.controlSessionId(), replayRequestDecoder.correlationId(), replayRequestDecoder.recordingId(), replayRequestDecoder.position(), replayRequestDecoder.length(), replayRequestDecoder.replayStreamId(), replayRequestDecoder.replayChannel()); break; } case StopReplayRequestDecoder.TEMPLATE_ID: { stopReplayRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onStopReplay( stopReplayRequestDecoder.controlSessionId(), stopReplayRequestDecoder.correlationId(), stopReplayRequestDecoder.replaySessionId()); break; } case ListRecordingsRequestDecoder.TEMPLATE_ID: { listRecordingsRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onListRecordings( listRecordingsRequestDecoder.controlSessionId(), listRecordingsRequestDecoder.correlationId(), 
listRecordingsRequestDecoder.fromRecordingId(), listRecordingsRequestDecoder.recordCount()); break; } case ListRecordingsForUriRequestDecoder.TEMPLATE_ID: { listRecordingsForUriRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); final int channelLength = listRecordingsForUriRequestDecoder.channelLength(); final byte[] bytes = 0 == channelLength ? ArrayUtil.EMPTY_BYTE_ARRAY : new byte[channelLength]; listRecordingsForUriRequestDecoder.getChannel(bytes, 0, channelLength); listener.onListRecordingsForUri( listRecordingsForUriRequestDecoder.controlSessionId(), listRecordingsForUriRequestDecoder.correlationId(), listRecordingsForUriRequestDecoder.fromRecordingId(), listRecordingsForUriRequestDecoder.recordCount(), listRecordingsForUriRequestDecoder.streamId(), bytes); break; } case ListRecordingRequestDecoder.TEMPLATE_ID: { listRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onListRecording( listRecordingRequestDecoder.controlSessionId(), listRecordingRequestDecoder.correlationId(), listRecordingRequestDecoder.recordingId()); break; } case ExtendRecordingRequestDecoder.TEMPLATE_ID: { extendRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onExtendRecording( extendRecordingRequestDecoder.controlSessionId(), extendRecordingRequestDecoder.correlationId(), extendRecordingRequestDecoder.recordingId(), extendRecordingRequestDecoder.streamId(), extendRecordingRequestDecoder.channel(), extendRecordingRequestDecoder.sourceLocation()); break; } case RecordingPositionRequestDecoder.TEMPLATE_ID: { recordingPositionRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onGetRecordingPosition( 
recordingPositionRequestDecoder.controlSessionId(), recordingPositionRequestDecoder.correlationId(), recordingPositionRequestDecoder.recordingId()); break; } case TruncateRecordingRequestDecoder.TEMPLATE_ID: { truncateRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onTruncateRecording( truncateRecordingRequestDecoder.controlSessionId(), truncateRecordingRequestDecoder.correlationId(), truncateRecordingRequestDecoder.recordingId(), truncateRecordingRequestDecoder.position()); break; } case StopRecordingSubscriptionRequestDecoder.TEMPLATE_ID: { stopRecordingSubscriptionRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onStopRecordingSubscription( stopRecordingSubscriptionRequestDecoder.controlSessionId(), stopRecordingSubscriptionRequestDecoder.correlationId(), stopRecordingSubscriptionRequestDecoder.subscriptionId()); break; } case StopPositionRequestDecoder.TEMPLATE_ID: { stopPositionRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onGetStopPosition( stopPositionRequestDecoder.controlSessionId(), stopPositionRequestDecoder.correlationId(), stopPositionRequestDecoder.recordingId()); break; } case FindLastMatchingRecordingRequestDecoder.TEMPLATE_ID: { findLastMatchingRecordingRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); final int channelLength = findLastMatchingRecordingRequestDecoder.channelLength(); final byte[] bytes = 0 == channelLength ? 
ArrayUtil.EMPTY_BYTE_ARRAY : new byte[channelLength]; findLastMatchingRecordingRequestDecoder.getChannel(bytes, 0, channelLength); listener.onFindLastMatchingRecording( findLastMatchingRecordingRequestDecoder.controlSessionId(), findLastMatchingRecordingRequestDecoder.correlationId(), findLastMatchingRecordingRequestDecoder.minRecordingId(), findLastMatchingRecordingRequestDecoder.sessionId(), findLastMatchingRecordingRequestDecoder.streamId(), bytes); break; } case ListRecordingSubscriptionsRequestDecoder.TEMPLATE_ID: { listRecordingSubscriptionsRequestDecoder.wrap( buffer, offset + MessageHeaderDecoder.ENCODED_LENGTH, headerDecoder.blockLength(), headerDecoder.version()); listener.onListRecordingSubscriptions( listRecordingSubscriptionsRequestDecoder.controlSessionId(), listRecordingSubscriptionsRequestDecoder.correlationId(), listRecordingSubscriptionsRequestDecoder.pseudoIndex(), listRecordingSubscriptionsRequestDecoder.subscriptionCount(), listRecordingSubscriptionsRequestDecoder.applyStreamId() == BooleanType.TRUE, listRecordingSubscriptionsRequestDecoder.streamId(), listRecordingSubscriptionsRequestDecoder.channel()); } } } }
galderz/Aeron
aeron-archive/src/main/java/io/aeron/archive/ControlRequestAdapter.java
Java
apache-2.0
15,170
/** * SubmitReconciliationOrderReports.java * * This file was auto-generated from WSDL * by the Apache Axis 1.4 Apr 22, 2006 (06:55:48 PDT) WSDL2Java emitter. */ package com.google.api.ads.dfp.v201306; /** * The action used for submit the reconciliation on the {@link ReconciliationOrderReport}. */ public class SubmitReconciliationOrderReports extends com.google.api.ads.dfp.v201306.ReconciliationOrderReportAction implements java.io.Serializable { public SubmitReconciliationOrderReports() { } public SubmitReconciliationOrderReports( java.lang.String reconciliationOrderReportActionType) { super( reconciliationOrderReportActionType); } private java.lang.Object __equalsCalc = null; public synchronized boolean equals(java.lang.Object obj) { if (!(obj instanceof SubmitReconciliationOrderReports)) return false; SubmitReconciliationOrderReports other = (SubmitReconciliationOrderReports) obj; if (obj == null) return false; if (this == obj) return true; if (__equalsCalc != null) { return (__equalsCalc == obj); } __equalsCalc = obj; boolean _equals; _equals = super.equals(obj); __equalsCalc = null; return _equals; } private boolean __hashCodeCalc = false; public synchronized int hashCode() { if (__hashCodeCalc) { return 0; } __hashCodeCalc = true; int _hashCode = super.hashCode(); __hashCodeCalc = false; return _hashCode; } // Type metadata private static org.apache.axis.description.TypeDesc typeDesc = new org.apache.axis.description.TypeDesc(SubmitReconciliationOrderReports.class, true); static { typeDesc.setXmlType(new javax.xml.namespace.QName("https://www.google.com/apis/ads/publisher/v201306", "SubmitReconciliationOrderReports")); } /** * Return type metadata object */ public static org.apache.axis.description.TypeDesc getTypeDesc() { return typeDesc; } /** * Get Custom Serializer */ public static org.apache.axis.encoding.Serializer getSerializer( java.lang.String mechType, java.lang.Class _javaType, javax.xml.namespace.QName _xmlType) { return new 
org.apache.axis.encoding.ser.BeanSerializer( _javaType, _xmlType, typeDesc); } /** * Get Custom Deserializer */ public static org.apache.axis.encoding.Deserializer getDeserializer( java.lang.String mechType, java.lang.Class _javaType, javax.xml.namespace.QName _xmlType) { return new org.apache.axis.encoding.ser.BeanDeserializer( _javaType, _xmlType, typeDesc); } }
google-code-export/google-api-dfp-java
src/com/google/api/ads/dfp/v201306/SubmitReconciliationOrderReports.java
Java
apache-2.0
2,837
/* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */ package org.pieShare.pieTools.pieUtilities.service.pieExecutorService; import java.util.Map; import java.util.concurrent.ExecutorService; import org.testng.annotations.Test; import org.mockito.Mockito; import org.pieShare.pieTools.pieUtilities.service.beanService.BeanServiceError; import org.pieShare.pieTools.pieUtilities.service.beanService.IBeanService; import org.pieShare.pieTools.pieUtilities.service.pieExecutorService.api.event.IPieEvent; import org.pieShare.pieTools.pieUtilities.service.pieExecutorService.api.task.IPieEventTask; import org.pieShare.pieTools.pieUtilities.service.pieExecutorService.api.IPieExecutorTaskFactory; import org.pieShare.pieTools.pieUtilities.service.pieExecutorService.exception.PieExecutorTaskFactoryException; import org.testng.Assert; /** * * @author Svetoslav */ public class PieExecutorTaskFactoryTest { public PieExecutorTaskFactoryTest() { } /** * Test of handlePieEvent method, of class PieExecutorService. */ @Test public void testHandlePieEvent() throws Exception { IPieEvent event = Mockito.mock(IPieEvent.class); Map<Class, Class> map = Mockito.mock(Map.class); IPieEventTask task = Mockito.mock(IPieEventTask.class); IBeanService beanService = Mockito.mock(IBeanService.class); Mockito.when(map.get(event.getClass())).thenReturn(task.getClass()); Class clazz = task.getClass(); Mockito.when(beanService.getBean(clazz)).thenReturn(task); PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.setBeanService(beanService); instance.setTasks(map); IPieEventTask res = instance.getTask(event); Mockito.verify(task, Mockito.times(1)).setEvent(event); Assert.assertEquals(res, task); } /** * Test of handlePieEvent method, of class PieExecutorService. 
*/ @Test(expectedExceptions = PieExecutorTaskFactoryException.class) public void testHandlePieEventTaskNotCreated() throws Exception { IPieEvent event = Mockito.mock(IPieEvent.class); Map<Class, Class> map = Mockito.mock(Map.class); IBeanService beanService = Mockito.mock(IBeanService.class); Mockito.when(map.get(event.getClass())).thenReturn(IPieEventTask.class); Mockito.when(beanService.getBean(IPieEventTask.class)).thenThrow(BeanServiceError.class); PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.setBeanService(beanService); instance.setTasks(map); instance.getTask(event); } /** * Test of handlePieEvent method, of class PieExecutorService. */ @Test(expectedExceptions = PieExecutorTaskFactoryException.class) public void testHandlePieEventNoTaskRegistered() throws Exception { IPieEvent event = Mockito.mock(IPieEvent.class); Map<Class, Class> map = Mockito.mock(Map.class); Mockito.when(map.get(event.getClass())).thenReturn(null); PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.setTasks(map); instance.getTask(event); } /** * Test of handlePieEvent method, of class PieExecutorService. */ @Test(expectedExceptions = NullPointerException.class) public void testHandlePieEventNullValue() throws Exception { PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.getTask(null); } /** * Test of registerTask method, of class PieExecutorService. */ @Test public void testRegisterTask() { Map<Class, Class> map = Mockito.mock(Map.class); PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.setTasks(map); instance.registerTask(IPieEvent.class, IPieEventTask.class); Mockito.verify(map, Mockito.times(1)).put(IPieEvent.class, IPieEventTask.class); } /** * Test of registerTask method, of class PieExecutorService. 
*/ @Test(expectedExceptions = NullPointerException.class) public void testRegisterTaskEventNullValue() { PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.registerTask(null, IPieEventTask.class); } /** * Test of registerTask method, of class PieExecutorService. */ @Test(expectedExceptions = NullPointerException.class) public void testRegisterTaskTaskNullValue() { PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.registerTask(IPieEvent.class, null); } @Test public void testRegisterExtendedTask() { Map<Class, Class> map = Mockito.mock(Map.class); PieExecutorTaskFactory instance = new PieExecutorTaskFactory(); instance.setTasks(map); class SubEvent implements IPieEvent { } class SubSubEvent extends SubEvent { } class SubTask implements IPieEventTask<SubEvent> { @Override public void setEvent(SubEvent msg) { throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates. } @Override public void run() { throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates. } } instance.registerTask(SubSubEvent.class, SubTask.class); Mockito.verify(map, Mockito.times(1)).put(SubSubEvent.class, SubTask.class); } }
vauvenal5/pieShare
pieUtilities/src/test/java/org/pieShare/pieTools/pieUtilities/service/pieExecutorService/PieExecutorTaskFactoryTest.java
Java
apache-2.0
5,226
package org.docksidestage.oracle.dbflute.cbean.cq; import org.dbflute.cbean.ConditionQuery; import org.dbflute.cbean.sqlclause.SqlClause; import org.docksidestage.oracle.dbflute.cbean.cq.bs.BsWhiteRefTargetCQ; /** * The condition-query of WHITE_REF_TARGET. * <p> * You can implement your original methods here. * This class remains when re-generating. * </p> * @author oracleman */ public class WhiteRefTargetCQ extends BsWhiteRefTargetCQ { // =================================================================================== // Constructor // =========== /** * Constructor. * @param referrerQuery The instance of referrer query. (Nullable: If null, this is base query) * @param sqlClause The instance of SQL clause. (NotNull) * @param aliasName The alias name for this query. (NotNull) * @param nestLevel The nest level of this query. (If zero, this is base query) */ public WhiteRefTargetCQ(ConditionQuery referrerQuery, SqlClause sqlClause, String aliasName, int nestLevel) { super(referrerQuery, sqlClause, aliasName, nestLevel); } // =================================================================================== // Arrange Method // ============== // You can make original arrange query methods here. // public void arrangeXxx() { // ... // } }
dbflute-test/dbflute-test-dbms-oracle
src/main/java/org/docksidestage/oracle/dbflute/cbean/cq/WhiteRefTargetCQ.java
Java
apache-2.0
1,672
// Copyright 2016, Google, Inc. // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 'use strict'; var gcloud = require('gcloud'); // Create a datastore client. var datastore = gcloud.datastore(); /** * Gets a Datastore key from the kind/key pair in the request. * * @param {Object} requestData Cloud Function request data. * @param {string} requestData.key Datastore key string. * @param {string} requestData.kind Datastore kind. * @returns {Object} Datastore key object. */ function getKeyFromRequestData (requestData) { if (!requestData.key) { throw new Error('Key not provided. Make sure you have a "key" property ' + 'in your request'); } if (!requestData.kind) { throw new Error('Kind not provided. Make sure you have a "kind" property ' + 'in your request'); } return datastore.key([requestData.kind, requestData.key]); } /** * Creates and/or updates a record. * * @example * gcloud alpha functions call ds-set --data '{"kind":"gcf-test","key":"foobar","value":{"message": "Hello World!"}}' * * @param {Object} context Cloud Function context. * @param {Function} context.success Success callback. * @param {Function} context.failure Failure callback. * @param {Object} data Request data, in this case an object provided by the user. * @param {string} data.kind The Datastore kind of the data to save, e.g. "user". * @param {string} data.key Key at which to save the data, e.g. 5075192766267392. * @param {Object} data.value Value to save to Cloud Datastore, e.g. 
{"name":"John"} */ function set (context, data) { try { // The value contains a JSON document representing the entity we want to save if (!data.value) { throw new Error('Value not provided. Make sure you have a "value" ' + 'property in your request'); } var key = getKeyFromRequestData(data); return datastore.save({ key: key, data: data.value }, function (err) { if (err) { console.error(err); return context.failure(err); } return context.success('Entity saved'); }); } catch (err) { console.error(err); return context.failure(err.message); } } /** * Retrieves a record. * * @example * gcloud alpha functions call ds-get --data '{"kind":"gcf-test","key":"foobar"}' * * @param {Object} context Cloud Function context. * @param {Function} context.success Success callback. * @param {Function} context.failure Failure callback. * @param {Object} data Request data, in this case an object provided by the user. * @param {string} data.kind The Datastore kind of the data to retrieve, e.g. "user". * @param {string} data.key Key at which to retrieve the data, e.g. 5075192766267392. */ function get (context, data) { try { var key = getKeyFromRequestData(data); return datastore.get(key, function (err, entity) { if (err) { console.error(err); return context.failure(err); } // The get operation will not fail for a non-existent entity, it just // returns null. if (!entity) { return context.failure('No entity found for key ' + key.path); } return context.success(entity); }); } catch (err) { console.error(err); return context.failure(err.message); } } /** * Deletes a record. * * @example * gcloud alpha functions call ds-del --data '{"kind":"gcf-test","key":"foobar"}' * * @param {Object} context Cloud Function context. * @param {Function} context.success Success callback. * @param {Function} context.failure Failure callback. * @param {Object} data Request data, in this case an object provided by the user. * @param {string} data.kind The Datastore kind of the data to delete, e.g. "user". 
* @param {string} data.key Key at which to delete data, e.g. 5075192766267392. */ function del (context, data) { try { var key = getKeyFromRequestData(data); return datastore.delete(key, function (err) { if (err) { console.error(err); return context.failure(err); } return context.success('Entity deleted'); }); } catch (err) { console.error(err); return context.failure(err.message); } } exports.set = set; exports.get = get; exports.del = del;
pcostell/nodejs-docs-samples
functions/datastore/index.js
JavaScript
apache-2.0
4,757
package main import ( "errors" "fmt" "strings" "sync" . "github.com/tendermint/go-common" "github.com/codegangsta/cli" ) //-------------------------------------------------------------------------------- func cmdSsh(c *cli.Context) { args := c.Args() machines := ParseMachines(c.String("machines")) cmdBase(args, machines, sshCmd) } func cmdScp(c *cli.Context) { args := c.Args() machines := ParseMachines(c.String("machines")) fmt.Println(args, machines) cmdBase(args, machines, scpCmd) } func cmdBase(args, machines []string, cmd func(string, []string) error) { var wg sync.WaitGroup for _, mach := range machines { wg.Add(1) go func(mach string) { maybeSleep(len(machines), 2000) defer wg.Done() cmd(mach, args) }(mach) } wg.Wait() } func sshCmd(mach string, args []string) error { args = []string{"ssh", mach, strings.Join(args, " ")} if !runProcess("ssh-cmd-"+mach, "docker-machine", args, true) { return errors.New("Failed to exec ssh command on machine " + mach) } return nil } func scpCmd(mach string, args []string) error { if len(args) != 2 { return errors.New("scp expects exactly two args") } args = []string{"scp", args[0], mach + ":" + args[1]} if !runProcess("ssh-cmd-"+mach, "docker-machine", args, true) { return errors.New("Failed to exec ssh command on machine " + mach) } return nil } //-------------------------------------------------------------------------------- func cmdCreate(c *cli.Context) { args := c.Args() machines := ParseMachines(c.String("machines")) errs := createMachines(machines, args) if len(errs) > 0 { Exit(Fmt("There were %v errors", len(errs))) } else { fmt.Println(Fmt("Successfully created %v machines", len(machines))) } } func createMachines(machines []string, args []string) (errs []error) { var wg sync.WaitGroup for _, mach := range machines { wg.Add(1) go func(mach string) { maybeSleep(len(machines), 2000) defer wg.Done() err := createMachine(args, mach) if err != nil { errs = append(errs, err) // TODO: make thread safe } }(mach) } wg.Wait() 
return errs } func createMachine(args []string, mach string) error { args = append([]string{"create"}, args...) args = append(args, mach) if !runProcess("create-"+mach, "docker-machine", args, true) { return errors.New("Failed to create machine " + mach) } return nil } //-------------------------------------------------------------------------------- func cmdDestroy(c *cli.Context) { machines := ParseMachines(c.String("machines")) // Destroy each machine. var wg sync.WaitGroup for _, mach := range machines { wg.Add(1) go func(mach string) { defer wg.Done() err := removeMachine(mach) if err != nil { fmt.Println(Red(err.Error())) } }(mach) } wg.Wait() fmt.Println("Success!") } //-------------------------------------------------------------------------------- func cmdProvision(c *cli.Context) { args := c.Args() machines := ParseMachines(c.String("machines")) errs := provisionMachines(machines, args) if len(errs) > 0 { Exit(Fmt("There were %v errors", len(errs))) } else { fmt.Println(Fmt("Successfully created %v machines", len(machines))) } } func provisionMachines(machines []string, args []string) (errs []error) { var wg sync.WaitGroup for _, mach := range machines { wg.Add(1) go func(mach string) { maybeSleep(len(machines), 2000) defer wg.Done() err := provisionMachine(args, mach) if err != nil { errs = append(errs, err) } }(mach) } wg.Wait() return errs } func provisionMachine(args []string, mach string) error { args = append([]string{"provision"}, args...) 
args = append(args, mach) if !runProcess("provision-"+mach, "docker-machine", args, true) { return errors.New("Failed to provision machine " + mach) } return nil } //-------------------------------------------------------------------------------- // Stop a machine // mach: name of machine func stopMachine(mach string) error { args := []string{"stop", mach} if !runProcess("stop-"+mach, "docker-machine", args, true) { return errors.New("Failed to stop machine " + mach) } return nil } // Remove a machine // mach: name of machine func removeMachine(mach string) error { args := []string{"rm", "-f", mach} if !runProcess("remove-"+mach, "docker-machine", args, true) { return errors.New("Failed to remove machine " + mach) } return nil } // List machine names that match prefix func listMachines(prefix string) ([]string, error) { args := []string{"ls", "--quiet"} output, ok := runProcessGetResult("list-machines", "docker-machine", args, true) if !ok { return nil, errors.New("Failed to list machines") } output = strings.TrimSpace(output) if len(output) == 0 { return nil, nil } machines := strings.Split(output, "\n") matched := []string{} for _, mach := range machines { if strings.HasPrefix(mach, prefix+"-") { matched = append(matched, mach) } } return matched, nil } // Get ip of a machine // mach: name of machine func getMachineIP(mach string) (string, error) { args := []string{"ip", mach} output, ok := runProcessGetResult("get-ip-"+mach, "docker-machine", args, true) if !ok { return "", errors.New("Failed to get ip of machine" + mach) } return strings.TrimSpace(output), nil }
x2cn/mintnet
machine.go
GO
apache-2.0
5,313
/* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.facebook.presto.hive; import com.facebook.presto.hive.metastore.Storage; import com.facebook.presto.spi.ConnectorPageSource; import com.facebook.presto.spi.ConnectorSession; import com.facebook.presto.spi.predicate.TupleDomain; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.joda.time.DateTimeZone; import java.util.List; import java.util.Map; import java.util.Optional; public interface HiveBatchPageSourceFactory { Optional<? extends ConnectorPageSource> createPageSource( Configuration configuration, ConnectorSession session, Path path, long start, long length, long fileSize, Storage storage, Map<String, String> tableParameters, List<HiveColumnHandle> columns, TupleDomain<HiveColumnHandle> effectivePredicate, DateTimeZone hiveStorageTimeZone, HiveFileContext hiveFileContext); }
ptkool/presto
presto-hive/src/main/java/com/facebook/presto/hive/HiveBatchPageSourceFactory.java
Java
apache-2.0
1,555
# OOP-MANIFESTO

10 basic rules to increase your Object Oriented Programming skills. Keep practicing and follow ALL the rules. Your skills should increase.

Based on [The ThoughtWorks Anthology: Essays on Software](http://www.amazon.com/The-ThoughtWorks-Anthology-Technology-Programmers/dp/193435614X)

## What are we looking for?

Every interview asks the same question. *When you are coding, what are the key concepts of good programming?*

These are some of the important ones:

- Loose coupling
- High cohesion
- Easily testable classes
- Composition over inheritance
- Trying to stick to the open/closed principle
- KISS: Keep It Simple, Stupid!

## The Rules

**1- One level of indentation per method**

```java
public void method() {
    int a = 0;
    if (a > 0) {
        if (a > 10) {
            a++; // This is WRONG
        }
    }
}
```

+ Increases readability
+ Decreases complexity

**2- Do not use the 'else' keyword**

*When a given method provides one behavior for the if branch and another behavior for the else branch, that method is not cohesive. It has more than one responsibility, dealing with different behaviors.*

```java
public boolean method() {
    int years = 35;
    if (years > 21) {
        return true;
    } else { // This is WRONG
        return false;
    }
}
```

+ Reinforces the Single Responsibility Principle

**3- Wrap primitive types, strings and lists**

*If a variable of a primitive type has behavior, consider creating a class for it.*

```java
public class Account {
    int balance;     // Wrong
    Balance balance; // Correct

    List<Customer> customers; // This is WRONG
    Customers customers;      // This is RIGHT
}

public class Balance {}

public class Customers {
    private List<Customer> customersList;
    ...
}
```

+ Increases cohesion
+ Reinforces the Single Responsibility Principle

**4- Use only one dot per line**

*Based on the [Law of Demeter](http://www.ccs.neu.edu/research/demeter/demeter-method/LawOfDemeter/paper-boy/demeter.pdf): "Only talk to your friends."*

```java
public class Bank {
    Customer customer;

    // This is WRONG
    public void withdraw() {
        int value = customer.getBalance().value(); // WRONG
        if (value > 0) {
            customer.getBalance().withdraw(10);
        }
    }

    // This is RIGHT
    public void withdraw() {
        customer.withdraw(10);
    }
}
```

+ Better real-world modeling
+ Bank is decoupled from the Customer.getBalance() class/methods
+ Readability

**5- Do not abbreviate**

+ Increases readability

**6- Keep your classes small**

A class should have at most 50 statements, with lines at most 150 characters wide. This excludes blank lines, comments and closing braces. The whole class should fit inside the text editor of your IDE. In some languages, this also excludes the import statements.

**7- Do not use classes with several instance variables**

:fire: This is polemical, and very hard to achieve :fire:

A class should have only **two instance** variables.

```java
// This is WRONG
public class Customer {
    Long id;
    String email;
    String username;
    String password;
}

// This is RIGHT
public class Customer {
    Long id;
    CustomerDetails details;
}

public class CustomerDetails {
    Email email;
    Credentials credentials;
}

public class Credentials {
    String username;
    String password;
}
```

+ Increases cohesion
+ Helps satisfy the 3rd and 6th rules, and vice versa

**8- Do not use static methods**

[Can you **simply** mock a static method?](https://dzone.com/articles/why-static-bad-and-how-avoid) Static methods make you lose polymorphism and are extremely hard to test. [They aren't associated with any object. They really aren't methods at all, according to the usual definition. They are really just procedures.](http://stackoverflow.com/questions/4002201/why-arent-static-methods-considered-good-oo-practice)

+ Increases readability
+ Increases polymorphism
+ Increases testability

**9- Methods must have a maximum of 7 statements**

Pay attention: a `for` statement counts as 3 statements.

*for (int i = 0; i < array.length; i++) {}*

This leads to:

**1**: `int i = 0`
**2**: `i < array.length`
**3**: `i++`

**10- Use composition instead of inheritance**

Remember the 7th rule to achieve this one: use only two instance variables per class.

## Conclusion

The 7th rule is the core of the manifesto. All the other rules are here to help you with the 7th rule, and together they make your code extremely cohesive. The loose coupling comes from the 3rd, 4th and 6th rules.

# EPIC TOPICS

In the words of [Fabio Sales Souza](https://github.com/fabiodoaraujo):

> Don't study to pass the test, study to learn!
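Rule 10 above (composition instead of inheritance) is the only rule in the manifesto stated without its own code example. The sketch below is a minimal illustration, not part of the original manifesto; the class and method names (`Credentials.matches`, `Customer.canLogIn`) are invented for this example. It also respects rule 7 (two instance variables per class) and rule 4 (one dot per line, achieved by delegation):

```java
import java.util.Objects;

// Hypothetical sketch: Customer is COMPOSED of Credentials rather than
// inheriting from anything. Each class keeps at most two instance variables.
final class Credentials {
    private final String username;
    private final String password;

    Credentials(String username, String password) {
        this.username = username;
        this.password = password;
    }

    boolean matches(String username, String password) {
        return Objects.equals(this.username, username)
                && Objects.equals(this.password, password);
    }
}

final class Customer {
    private final Long id;
    private final Credentials credentials; // composition, not inheritance

    Customer(Long id, Credentials credentials) {
        this.id = id;
        this.credentials = credentials;
    }

    Long id() {
        return id;
    }

    // Rule 4: one dot per line -- callers delegate to Customer instead of
    // reaching into Credentials via customer.getCredentials().matches(...).
    boolean canLogIn(String username, String password) {
        return credentials.matches(username, password);
    }
}
```

Because `Customer` exposes behavior (`canLogIn`) instead of its internals, swapping `Credentials` for, say, a token-based implementation would not change any caller.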
matheusmessora/OOP-MANIFESTO
README.md
Markdown
apache-2.0
4,702
package ru.shestakov.services;

import java.util.*;

public class IteratorFlatten implements Iterator<Integer> {

    private Iterator<Iterator<Integer>> it;
    private Iterator<Integer> cursor;

    public Iterator<Integer> convert(Iterator<Iterator<Integer>> it) {
        this.it = it;
        return this;
    }

    // Advance past empty inner iterators so that hasNext() and next()
    // stay consistent even when some inner iterators contain no elements.
    private void advance() {
        while ((cursor == null || !cursor.hasNext()) && it.hasNext()) {
            cursor = it.next();
        }
    }

    public boolean hasNext() {
        advance();
        return cursor != null && cursor.hasNext();
    }

    public Integer next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return cursor.next();
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}
savspit/java-a-to-z
Part V. Chapter 1. Iterator/Tracker/src/main/java/ru/shestakov/services/IteratorFlatten.java
Java
apache-2.0
645
/*
 * Copyright (C) 2015 IZITEQ B.V.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package travel.izi.api.service.constant;

/**
 * The "cost" parameter values.
 */
@SuppressWarnings("unused")
public final class Cost {

    /**
     * In case of 'free' only free content will be returned.
     */
    public static final String FREE = "free";

    /**
     * In case of 'paid' only paid content will be returned.
     */
    public static final String PAID = "paid";
}
iziteq/izi-travel-android-api
api/src/main/java/travel/izi/api/service/constant/Cost.java
Java
apache-2.0
989
<template name="notFound">
  <div class="ui warning message">
    <i class="close icon"></i>
    <div class="header">
      {{i18n 'common.notFound'}}
    </div>
    <a href="{{pathFor 'home'}}">{{i18n 'common.goBack'}}</a>
  </div>
</template>
feidens/rankz
client/views/notFound/notFound.html
HTML
apache-2.0
261
// Copyright (c) 2016 Tigera, Inc. All rights reserved. // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package common import ( "context" "fmt" "os" log "github.com/sirupsen/logrus" "k8s.io/apimachinery/pkg/api/meta" v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "github.com/projectcalico/go-yaml-wrapper" "github.com/projectcalico/calico/calicoctl/calicoctl/commands/argutils" "github.com/projectcalico/calico/calicoctl/calicoctl/commands/clientmgr" "github.com/projectcalico/calico/calicoctl/calicoctl/commands/file" "github.com/projectcalico/calico/calicoctl/calicoctl/resourcemgr" client "github.com/projectcalico/calico/libcalico-go/lib/clientv3" calicoErrors "github.com/projectcalico/calico/libcalico-go/lib/errors" ) type action int const ( ActionApply action = iota ActionCreate ActionUpdate ActionDelete ActionGetOrList ActionPatch ) // Convert loaded resources to a slice of resources for easier processing. // The loaded resources may be a slice containing resources and resource lists, or // may be a single resource or a single resource list. This function handles the // different possible options to convert to a single slice of resources. 
func convertToSliceOfResources(loaded interface{}) ([]resourcemgr.ResourceObject, error) { res := []resourcemgr.ResourceObject{} log.Infof("Converting resource to slice: %v", loaded) switch r := loaded.(type) { case []runtime.Object: for i := 0; i < len(r); i++ { r, err := convertToSliceOfResources(r[i]) if err != nil { return nil, err } res = append(res, r...) } case resourcemgr.ResourceObject: res = append(res, r) case resourcemgr.ResourceListObject: ret, err := meta.ExtractList(r) if err != nil { return nil, err } for _, v := range ret { res = append(res, v.(resourcemgr.ResourceObject)) } } log.Infof("Returning slice: %v", res) return res, nil } // CommandResults contains the results from executing a CLI command type CommandResults struct { // Whether the input file was invalid. FileInvalid bool // The number of resources that are being configured. NumResources int // The number of resources that were actually configured. This will // never be 0 without an associated error. NumHandled int // The associated error. Err error // The single type of resource that is being configured, or blank // if multiple resource types are being configured in a single shot. SingleKind string // The results returned from each invocation Resources []runtime.Object // Errors associated with individual resources ResErrs []error // The Calico API client used for the requests (useful if required // again). Client client.Interface } type fileError struct { error } // ExecuteConfigCommand is main function called by all of the resource management commands // in calicoctl (apply, create, replace, get, delete and patch). This provides common function // for all these commands: // - Load resources from file (or if not specified determine the resource from // the command line options). 
// - Convert the loaded resources into a list of resources (easier to handle) // - Process each resource individually, fanning out to the appropriate methods on // the client interface, collate results and exit on the first error. func ExecuteConfigCommand(args map[string]interface{}, action action) CommandResults { var resources []resourcemgr.ResourceObject singleKind := false log.Info("Executing config command") err := CheckVersionMismatch(args["--config"], args["--allow-version-mismatch"]) if err != nil { return CommandResults{Err: err} } errorOnEmpty := !argutils.ArgBoolOrFalse(args, "--skip-empty") if filename := args["--filename"]; filename != nil { // Filename is specified. Use the file iterator to handle the fact that this may be a directory rather than a // single file. For each file load the resources from the file and convert to a single slice of resources for // easier handling. err := file.Iter(args, func(modifiedArgs map[string]interface{}) error { modifiedFilename := modifiedArgs["--filename"].(string) r, err := resourcemgr.CreateResourcesFromFile(modifiedFilename) if err != nil { return fileError{err} } converted, err := convertToSliceOfResources(r) if err != nil { return fileError{err} } if len(converted) == 0 && errorOnEmpty { // We should fail on empty files. return fmt.Errorf("No resources specified in file %s", modifiedFilename) } resources = append(resources, converted...) return nil }) if err != nil { _, ok := err.(fileError) return CommandResults{Err: err, FileInvalid: ok} } if len(resources) == 0 { if errorOnEmpty { // Empty files are handled above, so the only way to get here is if --filename pointed to a directory. // We can therefore tweak the error message slightly to be more specific. return CommandResults{ Err: fmt.Errorf("No resources specified in directory %s", filename), } } else { // No data, but not an error case. Return an empty set of results. 
return CommandResults{} } } } else { // Filename is not specific so extract the resource from the arguments. This // is only useful for delete, get and patch functions - but we don't need to check that // here since the command syntax requires a filename for the other resource // management commands. var err error singleKind = true resources, err = resourcemgr.GetResourcesFromArgs(args) if err != nil { return CommandResults{Err: err} } if len(resources) == 0 { // No resources specified on non-file input is always an error. return CommandResults{ Err: fmt.Errorf("No resources specified"), } } } if log.GetLevel() >= log.DebugLevel { for _, v := range resources { log.Debugf("Resource: %s", v.GetObjectKind().GroupVersionKind().String()) } d, err := yaml.Marshal(resources) if err != nil { return CommandResults{Err: err} } log.Debugf("Data: %s", string(d)) } // Load the client config and connect. cf := args["--config"].(string) cclient, err := clientmgr.NewClient(cf) if err != nil { fmt.Printf("Failed to create Calico API client: %s\n", err) os.Exit(1) } log.Infof("Client: %v", cclient) // Initialise the command results with the number of resources and the name of the // kind of resource (if only dealing with a single resource). results := CommandResults{Client: cclient} var kind string count := make(map[string]int) for _, r := range resources { kind = r.GetObjectKind().GroupVersionKind().Kind count[kind] = count[kind] + 1 results.NumResources = results.NumResources + 1 } if len(count) == 1 || singleKind { results.SingleKind = kind } // Now execute the command on each resource in order, exiting as soon as we hit an // error. 
export := argutils.ArgBoolOrFalse(args, "--export") nameSpecified := false emptyName := false switch a := args["<NAME>"].(type) { case string: nameSpecified = len(a) > 0 _, ok := args["<NAME>"] emptyName = !ok || !nameSpecified case []string: nameSpecified = len(a) > 0 for _, v := range a { if v == "" { emptyName = true } } } if emptyName { return CommandResults{Err: fmt.Errorf("resource name may not be empty")} } for _, r := range resources { res, err := ExecuteResourceAction(args, cclient, r, action) if err != nil { switch action { case ActionApply, ActionCreate, ActionDelete, ActionGetOrList: results.ResErrs = append(results.ResErrs, err) continue default: results.Err = err } } // Remove the cluster specific metadata if the "--export" flag is specified // Skip removing cluster specific metadata if this is is called as a "list" // operation (no specific name is specified). if export && nameSpecified { for i := range res { rom := res[i].(v1.ObjectMetaAccessor).GetObjectMeta() rom.SetNamespace("") rom.SetUID("") rom.SetResourceVersion("") rom.SetCreationTimestamp(v1.Time{}) rom.SetDeletionTimestamp(nil) rom.SetDeletionGracePeriodSeconds(nil) rom.SetClusterName("") } } results.Resources = append(results.Resources, res...) results.NumHandled = results.NumHandled + len(res) } return results } // ExecuteResourceAction fans out the specific resource action to the appropriate method // on the ResourceManager for the specific resource. 
func ExecuteResourceAction(args map[string]interface{}, client client.Interface, resource resourcemgr.ResourceObject, action action) ([]runtime.Object, error) { rm := resourcemgr.GetResourceManager(resource) err := handleNamespace(resource, rm, args) if err != nil { return nil, err } var resOut runtime.Object ctx := context.Background() switch action { case ActionApply: resOut, err = rm.Apply(ctx, client, resource) case ActionCreate: resOut, err = rm.Create(ctx, client, resource) case ActionUpdate: resOut, err = rm.Update(ctx, client, resource) case ActionDelete: resOut, err = rm.Delete(ctx, client, resource) case ActionGetOrList: resOut, err = rm.GetOrList(ctx, client, resource) case ActionPatch: patch := args["--patch"].(string) resOut, err = rm.Patch(ctx, client, resource, patch) } // Skip over some errors depending on command line options. if err != nil { skip := false switch err.(type) { case calicoErrors.ErrorResourceAlreadyExists: skip = argutils.ArgBoolOrFalse(args, "--skip-exists") case calicoErrors.ErrorResourceDoesNotExist: skip = argutils.ArgBoolOrFalse(args, "--skip-not-exists") } if skip { resOut = resource err = nil } } return []runtime.Object{resOut}, err } // handleNamespace fills in the namespace information in the resource (if required), // and validates the namespace depending on whether or not a namespace should be // provided based on the resource kind. func handleNamespace(resource resourcemgr.ResourceObject, rm resourcemgr.ResourceManager, args map[string]interface{}) error { allNs := argutils.ArgBoolOrFalse(args, "--all-namespaces") cliNs := argutils.ArgStringOrBlank(args, "--namespace") resNs := resource.GetObjectMeta().GetNamespace() if rm.IsNamespaced() { switch { case allNs && cliNs != "": // Check if --namespace and --all-namespaces flags are used together. 
return fmt.Errorf("cannot use both --namespace and --all-namespaces flags at the same time") case resNs == "" && cliNs != "": // If resource doesn't have a namespace specified // but it's passed in through the -n flag then use that one. resource.GetObjectMeta().SetNamespace(cliNs) case resNs != "" && allNs: // If --all-namespaces is used then we must set namespace to "" so // list operation can list resources from all the namespaces. resource.GetObjectMeta().SetNamespace("") case resNs == "" && allNs: // no-op case resNs == "" && cliNs == "" && !allNs: // Set the namespace to "default" if not specified. resource.GetObjectMeta().SetNamespace("default") case resNs != "" && cliNs == "": // Use the namespace specified in the resource, which is already set. case resNs != cliNs: // If both resource and the CLI pass in the namespace but they don't match then return an error. return fmt.Errorf("resource namespace does not match client namespace. %s != %s", resNs, cliNs) } } else if resNs != "" { return fmt.Errorf("namespace should not be specified for a non-namespaced resource. %s is not a namespaced resource", resource.GetObjectKind().GroupVersionKind().Kind) } else if allNs || cliNs != "" { return fmt.Errorf("%s is not namespaced", resource.GetObjectKind().GroupVersionKind().Kind) } return nil }
projectcalico/calico
calicoctl/calicoctl/commands/common/resources.go
GO
apache-2.0
12,245
/* * * o o o o o * | o | |\ /| | / * | o-o o--o o-o oo | | O | oo o-o OO o-o o o * | | | | | | | | | | | | | | | | \ | | \ / * O---oo-o o--O | o-o o-o-o o o o-o-o o o o-o o * | * o--o * o--o o o--o o o * | | | | o | | * O-Oo oo o-o o-O o-o o-O-o O-o o-o | o-O o-o * | \ | | | | | | | | | | | | | |-' | | | \ * o o o-o-o o o-o o-o o o o o | o-o o o-o o-o * * Logical Markov Random Fields (LoMRF). * * */ package lomrf.mln.learning.supervision.metric import lomrf.logic.{ Constant, EvidenceAtom } import lomrf.mln.model.{ EvidenceDB, MLN, PredicateSchema } /** * A structural metric space is a measure of distance for herbrand interpretations. Such a measure * enables the calculation of distances based on the structural similarity of atoms. Moreover, it can * be extended given a numerical function for calculating numerical distances over specific domains. * * === Example === * {{{ * d(A, A) = 0 * * d(P(A, B), P(A, B)) = 0 * * d(P(A, B), Q(A, B)) = 1 * * d(P(B, A), P(A, C)) = * ( 1 / (2 * arity) ) * [ distance(B, A) + distance(A, C) ] * * d(Z(foo(A),B), Z(foo(B),B)) = * (1 / 2) * [ distance(foo(A), foo(B)) + distance(B, B) ] = * (1 / 4) * [ (1 / 2) * distance(A, B) + distance(B, B) ] * }}} * * @see Distance Between Herbrand Interpretations: A Measure for Approximations * to a Target Concept (1997) * * @param predicateSchema predicate schema * @param auxConstructs a map from return constants to auxiliary constructs * @param numericDistance a numerical distance (optional) * @param numericDomains a set of numerical domains (optional) */ final class StructureMetric private ( predicateSchema: PredicateSchema, auxConstructs: Map[Constant, AuxConstruct], numericDistance: Option[(Double, Double) => Double] = None, numericDomains: Option[Set[String]] = None) extends Metric { /** * Distance for ground evidence atoms. The function must obey to the following properties: * * {{{ * 1. d(x, y) >= 0 for all x, y and d(x, y) = 0 if and only if x = y * 2. d(x, y) = d(y, x) for all x, y * 3. 
d(x, y) + d(y, z) >= d(x, z) for all x, y, z (triangle inequality) * }}} * * @see [[lomrf.logic.EvidenceAtom]] * @param xAtom an evidence atom * @param yAtom another evidence atom * @return a distance for the given evidence atoms */ override def distance(xAtom: EvidenceAtom, yAtom: EvidenceAtom): Double = if (xAtom.state != yAtom.state || xAtom.symbol != yAtom.symbol) 1D else distance(xAtom.terms, yAtom.terms, predicateSchema.get(xAtom.signature)) /** * Distance for auxiliary predicates. * * @see [[lomrf.mln.learning.supervision.metric.AuxConstruct]] * @param xConstruct an auxiliary predicate * @param yConstruct another auxiliary predicate * @return a distance in the interval [0, 1] for the given auxiliary predicates. */ @inline private def distance(xConstruct: AuxConstruct, yConstruct: AuxConstruct): Double = if (xConstruct.signature != yConstruct.signature) 1D else predicateSchema.get(xConstruct.signature) match { case Some(domains) => distance(xConstruct.constants, yConstruct.constants, Some(domains.tail)) case None => distance(xConstruct.constants, yConstruct.constants, None) } /** * Distance for constant sequences. * * @param constantSeqA a constant sequence * @param constantSeqB another constant sequence * @return a distance in the interval [0, 1] for the given constant sequences */ @inline private def distance( constantSeqA: IndexedSeq[Constant], constantSeqB: IndexedSeq[Constant], domains: Option[Seq[String]]): Double = domains match { case None => (constantSeqA zip constantSeqB) .map { case (a, b) => distance(a, b) }.sum / (2d * constantSeqA.length) case Some(domainSeq) => (constantSeqA zip constantSeqB zip domainSeq.map(numericDomains.getOrElse(Set.empty).contains)) .map { case ((a, b), isNumeric) => distance(a, b, isNumeric) }.sum / (2d * constantSeqA.length) } /** * Distance for individual constants. * * @note If the given constants belong to some function return type, then the * distance for their corresponding auxiliary constructs is measured. 
* @see [[lomrf.logic.Constant]] * @param xConstant a constant * @param yConstant another constant * @return a distance in the interval [0, 1] for the given constants. If constants are identical * the distance is 0, else is 1. */ @inline private def distance(xConstant: Constant, yConstant: Constant, isNumeric: Boolean = false): Double = (auxConstructs.get(xConstant), auxConstructs.get(yConstant)) match { case (Some(functionA), Some(functionB)) => distance(functionA, functionB) case _ if numericDistance.isDefined && isNumeric && xConstant.symbol.matches("-?\\d+") => numericDistance.get(xConstant.symbol.toDouble, yConstant.symbol.toDouble) case _ => if (xConstant == yConstant) 0.0 else 1.0 } /** * Returns a structure metric space stemming from the concatenation of this one and a given one. * * @param mln an MLN * @return an extended metric space */ def ++(mln: MLN): StructureMetric = ++(mln.evidence.db) /** * that contains the auxiliary predicates for both metric spaces * * @param evidenceDB an evidence database * @return an extended structure metric space */ def ++(evidenceDB: EvidenceDB): StructureMetric = new StructureMetric( predicateSchema, auxConstructs ++ collectAuxConstructs(evidenceDB), numericDistance, numericDomains) /** * * @param distance a numerical distance function * @param domains a set of numerical domains * @return an extended structure metric that */ def makeNumeric(distance: (Double, Double) => Double, domains: Set[String]): StructureMetric = new StructureMetric(predicateSchema, auxConstructs, Some(distance), Some(domains)) } /** * Structure metric space object that enables the construction of metric spaces * either agnostic, based on a given an MLN or a predicate schema and an evidence database. 
*/ object StructureMetric { /** * @return a structure metric agnostic of any domain */ def apply(): StructureMetric = new StructureMetric(Map.empty, Map.empty) /** * @param mln a given MLN * @return a structure metric based on the given MLN */ def apply(mln: MLN): StructureMetric = apply(mln.schema.predicates, mln.evidence.db) /** * * @param evidenceDB an evidence database * @return a structure metric based on the given evidence database */ def apply(evidenceDB: EvidenceDB): StructureMetric = apply(Map.empty, evidenceDB) /** * @param predicateSchema a predicate schema * @param evidenceDB an evidence database * @return a structure metric based on the given predicate schema and evidence database */ def apply(predicateSchema: PredicateSchema, evidenceDB: EvidenceDB): StructureMetric = new StructureMetric(predicateSchema, collectAuxConstructs(evidenceDB)) }
anskarl/LoMRF
src/main/scala/lomrf/mln/learning/supervision/metric/StructureMetric.scala
Scala
apache-2.0
7,484
<?php header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past $page=$_GET["p"]; switch ($page) { case ("sermons"); $title="Sermons"; $h1=$title; $righttext="<img style=\"float:right; padding:1em; width:30%;\" src=\"images\\sermonsubsub.jpg\"><h3 class=\"subsubmenutitle\">About Pastor Yeargin's Teachings</h3><p>The members of City Temple count ourselves blessed to have in our church, not only one of the finest preachers, but one of the most outstanding teachers. Rev. Dr. Grady A. Yeargin, Jr. takes scriptures you may have read countless times and finds new insights to explore. He is a very learned pastor and is constantly reading and educating himself in order to provide us with a very holistic and all-encompassing way to understand the bible and the God we serve. Pastor Yeargin also encourages us to read the Word for ourselves so we may be able to &ldquo;rightly divide the word of truth.&rdquo;</p> <h3 class=\"subsubmenutitle\">Purchasing CDs</h3> <p>Purchase CDs of Pastor Yeargin and the pulpit staff&rsquo;s sermons.</p> <h3 class=\"subsubmenutitle\">Have any questions?</h3> <p>If you have questions about the sermons, please call Pastor Yeargin at (410)&nbsp;462-4802 or <a href=\"mailto:pastor@thecitytemple.org\">e-mail</a> him.</p> "; break; case ("dance"); $title="Liturgical Dance"; $h1=$title; $righttext="<img style=\"float:right; padding:1em; width:30%;\" src=\"images\dancesubsub.jpg\"><p>The Dance Ministry is comprised of both men and women of all ages and backgrounds who have come together for the purposes of praising and worshipping our Lord and Saviour, Jesus Christ, through dance and movement! 
Our intention is to use our bodies as living sacrifices, holy and acceptable unto God, to release movements that will touch the lives of those who don't have a deep relationship with Christ; heal and deliver the afflicted and suffering; war against principalities; give testimony and thanksgiving unto God for His grace and mercy; and offer an alternative to praising and worshipping the Lord.</p> <p>In keeping in alignment with the word of God, our movements may be released in dance with music or in interpretation of scripture or prayer.</p> <h3 class=\"subsubmenutitle\">How to Join</h3> <p>1. Be a member of City Temple in good standing.</p> <p>2. Contact one of the group leaders to schedule a brief orientation.</p> <p>3. Attend rehearsals.</p> <h3 class=\"subsubmenutitle\">Group Leaders</h3> <p>The dance ministry is currently under the direction of Ministers Lori Ford and Marshell Jenkins. The group leaders are Beverly Clinton and Kristin Ford.</p> </p>"; break; case ("choir"); $title="Music &amp; Choirs"; $h1=$title; $righttext="<img style=\"float:right; padding:1em; width:30%;\" src=\"images\music.jpg\"><p>Praise is our response to the unconditional love of God. At City Temple, we believe that as praises go up, blessings will come down. If singing is how you or your child choose to worship God, please consider joining one of our choirs.</p> <h3 class=\"subsubmenutitle\">Choirs</h3> <h3 class=\"greentitle\">Temple Choir</h3><p>The Temple Choir is for seniors age 50 and above.</p> <h3 class=\"greentitle\">Gospel Ensemble</h3><p>The Gospel Ensemble enjoys singing gospel music at our church home and in the community. 
This choir&rsquo;s age range is 30 and above.</p> <h3 class=\"greentitle\">Youth Choir</h3><p>The youth choir welcomes children currently in elementary through high school.</p> <h3 class=\"greentitle\">Mass Choir</h3><p>The mass choir makes up members from each choir and also members of the congregation whom do not normally sing on any choir.</p> "; break; } ?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "https://www.w3.org/TR/html4/strict.dtd"> <html> <head> <link rel="stylesheet" type="text/css" href="default.css"> <meta name="keywords" content="baltimore, baptist, city, temple, christian, church, grady, yeargin" > <meta name="description" content="City Temple Baptist Church Webpage" > <meta name="revised" content="Earl Jones, <?php echo date ("F d Y H:i:s.", filemtime(__FILE__))?>" > <meta http-equiv="content-type" content="text/html; charset=UTF-8" > <meta name="generator" content="Notepad++" > <title>City Temple of Baltimore - <? echo $title ?></title> </head> <body> <? include "bannerandmenu.php" ?> <div id="container"> <div class="bothsides"> <div class="leftside"> <?php include "servicetimesleft.php"?> </div> <div class="rightside"> <div class="navchain"><a href="index.php">Home</a> &gt;&gt; <a href="submenu.php?p=worship">Worship</a> &gt;&gt; <?php echo $h1 ?></div> <h1><?php echo $h1 ?></h1> <hr> <?php echo $righttext ?> </div> </div> </div> <?include "footer.php"?> </body> </html>
EJLearner/church-site
public_html/worshippage.php
PHP
apache-2.0
4,823
<?php /** * ApiResponse36 * * PHP version 5 * * @category Class * @package BumbalClient * @author Swaagger Codegen team * @link https://github.com/swagger-api/swagger-codegen */ /** * Bumbal Client Api * * Bumbal API documentation * * OpenAPI spec version: 2.0 * Contact: gerb@bumbal.eu * Generated by: https://github.com/swagger-api/swagger-codegen.git * */ /** * NOTE: This class is auto generated by the swagger code generator program. * https://github.com/swagger-api/swagger-codegen * Do not edit the class manually. */ namespace BumbalClient\Model; use \ArrayAccess; /** * ApiResponse36 Class Doc Comment * * @category Class * @package BumbalClient * @author Swagger Codegen team * @link https://github.com/swagger-api/swagger-codegen */ class ApiResponse36 implements ArrayAccess { const DISCRIMINATOR = null; /** * The original name of the model. * @var string */ protected static $swaggerModelName = 'ApiResponse_36'; /** * Array of property to type mappings. Used for (de)serialization * @var string[] */ protected static $swaggerTypes = [ 'message' => 'string', 'type' => 'string', 'code' => 'float', 'additional_data' => 'object' ]; /** * Array of property to format mappings. 
Used for (de)serialization * @var string[] */ protected static $swaggerFormats = [ 'message' => null, 'type' => null, 'code' => null, 'additional_data' => null ]; public static function swaggerTypes() { return self::$swaggerTypes; } public static function swaggerFormats() { return self::$swaggerFormats; } /** * Array of attributes where the key is the local name, and the value is the original name * @var string[] */ protected static $attributeMap = [ 'message' => 'message', 'type' => 'type', 'code' => 'code', 'additional_data' => 'additional_data' ]; /** * Array of attributes to setter functions (for deserialization of responses) * @var string[] */ protected static $setters = [ 'message' => 'setMessage', 'type' => 'setType', 'code' => 'setCode', 'additional_data' => 'setAdditionalData' ]; /** * Array of attributes to getter functions (for serialization of requests) * @var string[] */ protected static $getters = [ 'message' => 'getMessage', 'type' => 'getType', 'code' => 'getCode', 'additional_data' => 'getAdditionalData' ]; public static function attributeMap() { return self::$attributeMap; } public static function setters() { return self::$setters; } public static function getters() { return self::$getters; } /** * Associative array for storing property values * @var mixed[] */ protected $container = []; /** * Constructor * @param mixed[] $data Associated array of property values initializing the model */ public function __construct(array $data = null) { $this->container['message'] = isset($data['message']) ? $data['message'] : null; $this->container['type'] = isset($data['type']) ? $data['type'] : null; $this->container['code'] = isset($data['code']) ? $data['code'] : null; $this->container['additional_data'] = isset($data['additional_data']) ? $data['additional_data'] : null; } /** * show all the invalid properties with reasons. 
* * @return array invalid properties with reasons */ public function listInvalidProperties() { $invalid_properties = []; return $invalid_properties; } /** * validate all the properties in the model * return true if all passed * * @return bool True if all properties are valid */ public function valid() { return true; } /** * Gets message * @return string */ public function getMessage() { return $this->container['message']; } /** * Sets message * @param string $message Message describing the code * @return $this */ public function setMessage($message) { $this->container['message'] = $message; return $this; } /** * Gets type * @return string */ public function getType() { return $this->container['type']; } /** * Sets type * @param string $type Ready * @return $this */ public function setType($type) { $this->container['type'] = $type; return $this; } /** * Gets code * @return float */ public function getCode() { return $this->container['code']; } /** * Sets code * @param float $code * @return $this */ public function setCode($code) { $this->container['code'] = $code; return $this; } /** * Gets additional_data * @return object */ public function getAdditionalData() { return $this->container['additional_data']; } /** * Sets additional_data * @param object $additional_data * @return $this */ public function setAdditionalData($additional_data) { $this->container['additional_data'] = $additional_data; return $this; } /** * Returns true if offset exists. False otherwise. * @param integer $offset Offset * @return boolean */ public function offsetExists($offset) { return isset($this->container[$offset]); } /** * Gets offset. * @param integer $offset Offset * @return mixed */ public function offsetGet($offset) { return isset($this->container[$offset]) ? $this->container[$offset] : null; } /** * Sets value based on offset. 
* @param integer $offset Offset * @param mixed $value Value to be set * @return void */ public function offsetSet($offset, $value) { if (is_null($offset)) { $this->container[] = $value; } else { $this->container[$offset] = $value; } } /** * Unsets offset. * @param integer $offset Offset * @return void */ public function offsetUnset($offset) { unset($this->container[$offset]); } /** * Gets the string presentation of the object * @return string */ public function __toString() { if (defined('JSON_PRETTY_PRINT')) { // use JSON pretty print return json_encode(\BumbalClient\ObjectSerializer::sanitizeForSerialization($this), JSON_PRETTY_PRINT); } return json_encode(\BumbalClient\ObjectSerializer::sanitizeForSerialization($this)); } }
freightlive/bumbal-client-api-php
src/Model/ApiResponse36.php
PHP
apache-2.0
7,021
/* * Copyright © 2016 Cask Data, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); you may not * use this file except in compliance with the License. You may obtain a copy of * the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations under * the License. */ package co.cask.hydrator.plugin; import co.cask.cdap.api.data.format.StructuredRecord; import co.cask.cdap.api.data.schema.Schema; import co.cask.cdap.api.dataset.table.Table; import co.cask.cdap.etl.api.Transform; import co.cask.cdap.etl.batch.mapreduce.ETLMapReduce; import co.cask.cdap.etl.mock.batch.MockSink; import co.cask.cdap.etl.mock.batch.MockSource; import co.cask.cdap.etl.mock.common.MockPipelineConfigurer; import co.cask.cdap.etl.proto.v2.ETLBatchConfig; import co.cask.cdap.etl.proto.v2.ETLPlugin; import co.cask.cdap.etl.proto.v2.ETLStage; import co.cask.cdap.proto.artifact.AppRequest; import co.cask.cdap.proto.id.ApplicationId; import co.cask.cdap.proto.id.NamespaceId; import co.cask.cdap.test.ApplicationManager; import co.cask.cdap.test.DataSetManager; import co.cask.cdap.test.MapReduceManager; import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; import org.junit.Assert; import org.junit.BeforeClass; import org.junit.Test; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.concurrent.TimeUnit; /** * Test case for {@link Normalize}. 
*/ public class NormalizeTest extends TransformPluginsTestBase { private static final String CUSTOMER_ID = "CustomerId"; private static final String ITEM_ID = "ItemId"; private static final String ITEM_COST = "ItemCost"; private static final String PURCHASE_DATE = "PurchaseDate"; private static final String ID = "Id"; private static final String DATE = "Date"; private static final String ATTRIBUTE_TYPE = "AttributeType"; private static final String ATTRIBUTE_VALUE = "AttributeValue"; private static final String CUSTOMER_ID_FIRST = "S23424242"; private static final String CUSTOMER_ID_SECOND = "R45764646"; private static final String ITEM_ID_ROW1 = "UR-AR-243123-ST"; private static final String ITEM_ID_ROW2 = "SKU-234294242942"; private static final String ITEM_ID_ROW3 = "SKU-567757543532"; private static final String PURCHASE_DATE_ROW1 = "08/09/2015"; private static final String PURCHASE_DATE_ROW2 = "10/12/2015"; private static final String PURCHASE_DATE_ROW3 = "06/09/2014"; private static final double ITEM_COST_ROW1 = 245.67; private static final double ITEM_COST_ROW2 = 67.90; private static final double ITEM_COST_ROW3 = 14.15; private static final Map<String, Object> dataMap = new HashMap<String, Object>(); private static final Schema INPUT_SCHEMA = Schema.recordOf("inputSchema", Schema.Field.of(CUSTOMER_ID, Schema.of(Schema.Type.STRING)), Schema.Field.of(ITEM_ID, Schema.nullableOf(Schema.of(Schema.Type.STRING))), Schema.Field.of(ITEM_COST, Schema.nullableOf(Schema.of(Schema.Type.DOUBLE))), Schema.Field.of(PURCHASE_DATE, Schema.of(Schema.Type.STRING))); private static final Schema OUTPUT_SCHEMA = Schema.recordOf("outputSchema", Schema.Field.of(ID, Schema.of(Schema.Type.STRING)), Schema.Field.of(DATE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_TYPE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_VALUE, Schema.of(Schema.Type.STRING))); private static String validFieldMapping; private static String validFieldNormalizing; @BeforeClass public 
static void initialiseData() { dataMap.put(CUSTOMER_ID_FIRST + PURCHASE_DATE_ROW1 + ITEM_ID, ITEM_ID_ROW1); dataMap.put(CUSTOMER_ID_FIRST + PURCHASE_DATE_ROW2 + ITEM_ID, ITEM_ID_ROW2); dataMap.put(CUSTOMER_ID_SECOND + PURCHASE_DATE_ROW3 + ITEM_ID, ITEM_ID_ROW3); dataMap.put(CUSTOMER_ID_FIRST + PURCHASE_DATE_ROW1 + ITEM_COST, String.valueOf(ITEM_COST_ROW1)); dataMap.put(CUSTOMER_ID_FIRST + PURCHASE_DATE_ROW2 + ITEM_COST, String.valueOf(ITEM_COST_ROW2)); dataMap.put(CUSTOMER_ID_SECOND + PURCHASE_DATE_ROW3 + ITEM_COST, String.valueOf(ITEM_COST_ROW3)); validFieldMapping = CUSTOMER_ID + ":" + ID + "," + PURCHASE_DATE + ":" + DATE; validFieldNormalizing = ITEM_ID + ":" + ATTRIBUTE_TYPE + ":" + ATTRIBUTE_VALUE + "," + ITEM_COST + ":" + ATTRIBUTE_TYPE + ":" + ATTRIBUTE_VALUE; } private String getKeyFromRecord(StructuredRecord record) { return record.get(ID).toString() + record.get(DATE) + record.get(ATTRIBUTE_TYPE); } private ApplicationManager deployApplication(Map<String, String> sourceProperties, String inputDatasetName, String outputDatasetName, String applicationName) throws Exception { ETLStage source = new ETLStage("source", MockSource.getPlugin(inputDatasetName)); ETLStage transform = new ETLStage("normalize", new ETLPlugin("Normalize", Transform.PLUGIN_TYPE, sourceProperties, null)); ETLStage sink = new ETLStage("sink", MockSink.getPlugin(outputDatasetName)); ETLBatchConfig etlConfig = ETLBatchConfig.builder("* * * * *") .addStage(source) .addStage(transform) .addStage(sink) .addConnection(source.getName(), transform.getName()) .addConnection(transform.getName(), sink.getName()) .build(); AppRequest<ETLBatchConfig> appRequest = new AppRequest<>(ETLBATCH_ARTIFACT, etlConfig); ApplicationId appId = NamespaceId.DEFAULT.app(applicationName); return deployApplication(appId.toId(), appRequest); } private void startMapReduceJob(ApplicationManager appManager) throws Exception { MapReduceManager mrManager = appManager.getMapReduceManager(ETLMapReduce.NAME); 
mrManager.start(); mrManager.waitForFinish(5, TimeUnit.MINUTES); } @Test public void testOutputSchema() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, validFieldNormalizing, OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); Assert.assertEquals(OUTPUT_SCHEMA, configurer.getOutputSchema()); } @Test(expected = IllegalArgumentException.class) public void testEmptyFieldMapping() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(null, validFieldNormalizing, OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testEmptyFieldNormalizing() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, null, OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testEmptyOutputSchema() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, validFieldNormalizing, null); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidMappingValues() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig("CustomerId,PurchaseDate:Date", validFieldNormalizing, OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void 
testInvalidNormalizingValues() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, "ItemId:AttributeType," + "ItemCost:AttributeType:AttributeValue", OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidOutputSchema() throws Exception { //schema with no ID field Schema outputSchema = Schema.recordOf("outputSchema", Schema.Field.of(DATE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_TYPE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_VALUE, Schema.of(Schema.Type.STRING))); Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, validFieldNormalizing, outputSchema.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidOutputSchemaFieldType() throws Exception { //schema with ID field as long Schema outputSchema = Schema.recordOf("outputSchema", Schema.Field.of(ID, Schema.of(Schema.Type.LONG)), Schema.Field.of(DATE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_TYPE, Schema.of(Schema.Type.STRING)), Schema.Field.of(ATTRIBUTE_VALUE, Schema.of(Schema.Type.STRING))); Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, validFieldNormalizing, outputSchema.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidMappingsFromInputSchema() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig("Purchaser:Id,PurchaseDate:Date", validFieldNormalizing, OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = 
new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidNormalizingFromInputSchema() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, "ObjectId:AttributeType:AttributeValue," + "ItemCost:AttributeType:AttributeValue", OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test(expected = IllegalArgumentException.class) public void testInvalidNormalizeTypeAndValue() throws Exception { Normalize.NormalizeConfig config = new Normalize.NormalizeConfig(validFieldMapping, "ItemId:AttributeType:AttributeValue," + "ItemCost:ExpenseType:ExpenseValue", OUTPUT_SCHEMA.toString()); MockPipelineConfigurer configurer = new MockPipelineConfigurer(INPUT_SCHEMA); new Normalize(config).configurePipeline(configurer); } @Test public void testNormalize() throws Exception { String inputTable = "inputNormalizeTable"; Map<String, String> sourceproperties = new ImmutableMap.Builder<String, String>() .put("fieldMapping", validFieldMapping) .put("fieldNormalizing", validFieldNormalizing) .put("outputSchema", OUTPUT_SCHEMA.toString()) .build(); String outputTable = "outputNormalizeTable"; ApplicationManager applicationManager = deployApplication(sourceproperties, inputTable, outputTable, "normalizeTest"); DataSetManager<Table> inputManager = getDataset(inputTable); List<StructuredRecord> input = ImmutableList.of( StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, ITEM_ID_ROW1).set(CUSTOMER_ID, CUSTOMER_ID_FIRST) .set(ITEM_COST, ITEM_COST_ROW1).set(PURCHASE_DATE, PURCHASE_DATE_ROW1).build(), StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, ITEM_ID_ROW2).set(CUSTOMER_ID, CUSTOMER_ID_FIRST) .set(ITEM_COST, ITEM_COST_ROW2).set(PURCHASE_DATE, PURCHASE_DATE_ROW2).build(), StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, 
ITEM_ID_ROW3).set(CUSTOMER_ID, CUSTOMER_ID_SECOND) .set(ITEM_COST, ITEM_COST_ROW3).set(PURCHASE_DATE, PURCHASE_DATE_ROW3).build() ); MockSource.writeInput(inputManager, input); startMapReduceJob(applicationManager); DataSetManager<Table> outputManager = getDataset(outputTable); List<StructuredRecord> outputRecords = MockSink.readOutput(outputManager); Assert.assertEquals(6, outputRecords.size()); Assert.assertEquals(outputRecords.get(0).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(0)))); Assert.assertEquals(outputRecords.get(1).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(1)))); Assert.assertEquals(outputRecords.get(2).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(2)))); Assert.assertEquals(outputRecords.get(3).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(3)))); Assert.assertEquals(outputRecords.get(4).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(4)))); Assert.assertEquals(outputRecords.get(5).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(5)))); } @Test public void testNormalizeWithEmptyAttributeValue() throws Exception { String inputTable = "inputNormalizeWithEmptyValueTable"; Map<String, String> sourceproperties = new ImmutableMap.Builder<String, String>() .put("fieldMapping", validFieldMapping) .put("fieldNormalizing", validFieldNormalizing) .put("outputSchema", OUTPUT_SCHEMA.toString()) .build(); String outputTable = "outputNormalizeWithEmptyValueTable"; ApplicationManager applicationManager = deployApplication(sourceproperties, inputTable, outputTable, "normalizeWithEmptyValueTest"); DataSetManager<Table> inputManager = getDataset(inputTable); //ItemId for first row and ItemCost for second row is null. 
List<StructuredRecord> input = ImmutableList.of( StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, null).set(CUSTOMER_ID, CUSTOMER_ID_FIRST) .set(ITEM_COST, ITEM_COST_ROW1).set(PURCHASE_DATE, PURCHASE_DATE_ROW1).build(), StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, ITEM_ID_ROW2).set(CUSTOMER_ID, CUSTOMER_ID_FIRST) .set(ITEM_COST, null).set(PURCHASE_DATE, PURCHASE_DATE_ROW2).build(), StructuredRecord.builder(INPUT_SCHEMA).set(ITEM_ID, ITEM_ID_ROW3).set(CUSTOMER_ID, CUSTOMER_ID_SECOND) .set(ITEM_COST, ITEM_COST_ROW3).set(PURCHASE_DATE, PURCHASE_DATE_ROW3).build() ); MockSource.writeInput(inputManager, input); startMapReduceJob(applicationManager); DataSetManager<Table> outputManager = getDataset(outputTable); List<StructuredRecord> outputRecords = MockSink.readOutput(outputManager); //there should be 4 records only, null value record must not emit. Assert.assertEquals(4, outputRecords.size()); Assert.assertEquals(outputRecords.get(0).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(0)))); Assert.assertEquals(outputRecords.get(1).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(1)))); Assert.assertEquals(outputRecords.get(2).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(2)))); Assert.assertEquals(outputRecords.get(3).get(ATTRIBUTE_VALUE), dataMap.get(getKeyFromRecord(outputRecords.get(3)))); } }
romy-khetan/hydrator-plugins
transform-plugins/src/test/java/co/cask/hydrator/plugin/NormalizeTest.java
Java
apache-2.0
17,233
using System.Reflection;
using System.Resources;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Windows;

// General information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("Timer")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("Timer")]
[assembly: AssemblyCopyright("Copyright © 2017")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// In order to begin building localizable applications, set
// <UICulture>CultureYouAreCodingWith</UICulture> in your .csproj file
// inside a <PropertyGroup>. For example, if you are using US English
// in your source files, set the <UICulture> to en-US. Then uncomment
// the NeutralResourcesLanguage attribute below. Update the "en-US" in
// the line below to match the UICulture setting in the project file.

//[assembly: NeutralResourcesLanguage("en-US", UltimateResourceFallbackLocation.Satellite)]

[assembly: ThemeInfo(
    ResourceDictionaryLocation.None,           // where theme-specific resource dictionaries are located
                                               // (used if a resource is not found in the page
                                               // or application resource dictionaries)
    ResourceDictionaryLocation.SourceAssembly  // where the generic resource dictionary is located
                                               // (used if a resource is not found in the page,
                                               // app, or any theme-specific resource dictionaries)
)]

// Version information for an assembly consists of the following four values:
//
//      Major Version
//      Minor Version
//      Build Number
//      Revision
//
// You can specify all the values or you can default the Build and Revision
// numbers by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
kingwangboss/WPFDemo
WPFDemo/Timer/Properties/AssemblyInfo.cs
C#
apache-2.0
2,190
using System;

namespace Jal.Router.Model
{
    public class ContentContext
    {
        public string Data { get; private set; }

        public string ClaimCheckId { get; private set; }

        public MessageContext Context { get; private set; }

        public bool UseClaimCheck { get; private set; }

        public object ReplyData { get; set; }

        public ContentContext(MessageContext context, string claimcheckid, bool useclaimcheck, string data)
        {
            Data = data;
            ClaimCheckId = claimcheckid;
            Context = context;
            UseClaimCheck = useclaimcheck;
        }

        public ContentContextEntity ToEntity()
        {
            return new ContentContextEntity(Data, ClaimCheckId);
        }
    }
}
raulnq/Jal.Router
Jal.Router/Model/Contexts/ContentContext.cs
C#
apache-2.0
766
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>PCC Pizza</title>
</head>
<body>
    <h1>WELCOME TO PCC PIZZA</h1>
</body>
</html>
gemfire/cf-gemfire-connector-examples
pcc-pizza-store/src/main/resources/static/index.html
HTML
apache-2.0
160
<!-- plain text link -->
<a class="formatting-link plain-text-link"
   href="/talks/id/{{ event.slug }}?format=txt"
   title="View a plain text version of this talk">
    <span class="fa fa-lg fa-file-text-o" aria-hidden="true"></span>
</a>
ox-it/talks.ox
talks/templates/events/event_plain_text_link.html
HTML
apache-2.0
235
/**
 * @license
 * Copyright 2014 Google Inc. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
goog.provide('lf.query.InsertBuilder');

goog.require('lf.Binder');
goog.require('lf.Exception');
goog.require('lf.query.BaseBuilder');
goog.require('lf.query.Insert');
goog.require('lf.query.InsertContext');



/**
 * @extends {lf.query.BaseBuilder.<!lf.query.InsertContext>}
 * @implements {lf.query.Insert}
 * @struct
 * @constructor
 *
 * @param {!lf.Global} global
 * @param {boolean=} opt_allowReplace Whether the generated query should allow
 *     replacing an existing record.
 */
lf.query.InsertBuilder = function(global, opt_allowReplace) {
  lf.query.InsertBuilder.base(
      this, 'constructor', global, new lf.query.InsertContext());

  this.query.allowReplace = opt_allowReplace || false;
};
goog.inherits(lf.query.InsertBuilder, lf.query.BaseBuilder);


/** @override */
lf.query.InsertBuilder.prototype.assertExecPreconditions = function() {
  lf.query.InsertBuilder.base(this, 'assertExecPreconditions');

  var context = this.query;
  if (!goog.isDefAndNotNull(context.into) ||
      !goog.isDefAndNotNull(context.values)) {
    throw new lf.Exception(
        lf.Exception.Type.SYNTAX, 'Invalid usage of insert()');
  }

  // "Insert or replace" makes no sense for tables that do not have a primary
  // key.
  if (context.allowReplace &&
      goog.isNull(context.into.getConstraint().getPrimaryKey())) {
    throw new lf.Exception(
        lf.Exception.Type.SYNTAX,
        'Attempted to insert or replace in a table with no primary key.');
  }
};


/** @override */
lf.query.InsertBuilder.prototype.into = function(table) {
  this.assertIntoPreconditions_();
  this.query.into = table;
  return this;
};


/** @override */
lf.query.InsertBuilder.prototype.values = function(rows) {
  this.assertValuesPreconditions_();
  if (rows instanceof lf.Binder ||
      rows.some(function(r) { return r instanceof lf.Binder; })) {
    this.query.binder = rows;
  } else {
    this.query.values = rows;
  }
  return this;
};


/**
 * Asserts whether the preconditions for calling the into() method are met.
 * @private
 */
lf.query.InsertBuilder.prototype.assertIntoPreconditions_ = function() {
  if (goog.isDefAndNotNull(this.query.into)) {
    throw new lf.Exception(
        lf.Exception.Type.SYNTAX, 'into() has already been called.');
  }
};


/**
 * Asserts whether the preconditions for calling the values() method are met.
 * @private
 */
lf.query.InsertBuilder.prototype.assertValuesPreconditions_ = function() {
  if (goog.isDefAndNotNull(this.query.values)) {
    throw new lf.Exception(
        lf.Exception.Type.SYNTAX, 'values() has already been called.');
  }
};
nishant8BITS/lovefield
lib/query/insert_builder.js
JavaScript
apache-2.0
3,243
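The call-once precondition pattern enforced by `into()` and `values()` in `insert_builder.js` above can be illustrated outside Closure. This is a sketch of the pattern only, not the Lovefield API; the class and method names here are hypothetical:

```python
class InsertBuilder:
    """Minimal sketch of the call-once precondition pattern used by
    lf.query.InsertBuilder (illustrative only, not the real Lovefield API)."""

    def __init__(self, allow_replace=False):
        self.allow_replace = allow_replace
        self.table = None
        self.rows = None

    def into(self, table):
        # Each clause may be supplied at most once.
        if self.table is not None:
            raise ValueError("into() has already been called.")
        self.table = table
        return self

    def values(self, rows):
        if self.rows is not None:
            raise ValueError("values() has already been called.")
        self.rows = rows
        return self

    def assert_exec_preconditions(self):
        # Both clauses must be present before execution.
        if self.table is None or self.rows is None:
            raise ValueError("Invalid usage of insert()")


b = InsertBuilder().into("Order").values([{"id": 1}])
b.assert_exec_preconditions()   # passes
try:
    b.into("Order")             # a second into() must fail
except ValueError as e:
    print(e)                    # → into() has already been called.
```

Deferring the completeness check to an explicit precondition method, as the original does, lets the builder accumulate clauses in any order while still failing fast on duplicates.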
package io.virtdata.libbasics.shared.from_double.to_unset;

import io.virtdata.annotations.Categories;
import io.virtdata.annotations.Category;
import io.virtdata.annotations.ThreadSafeMapper;
import io.virtdata.api.VALUE;

import java.util.function.DoubleFunction;

@ThreadSafeMapper
@Categories(Category.nulls)
public class UnsetIfLt implements DoubleFunction<Object> {

    private final double compareto;

    public UnsetIfLt(double compareto) {
        this.compareto = compareto;
    }

    @Override
    public Object apply(double value) {
        if (value < compareto) return VALUE.unset;
        return value;
    }
}
virtualdataset/metagen-java
virtdata-lib-basics/src/main/java/io/virtdata/libbasics/shared/from_double/to_unset/UnsetIfLt.java
Java
apache-2.0
628
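The threshold-to-sentinel mapping implemented by `UnsetIfLt` above can be sketched as follows; `UNSET` is a stand-in for the library's `VALUE.unset` sentinel, and the function name is illustrative rather than part of virtdata:

```python
UNSET = object()   # stand-in for the library's VALUE.unset sentinel

def unset_if_lt(threshold):
    """Sketch of the UnsetIfLt semantics: values below the threshold are
    mapped to a sentinel, everything else passes through unchanged."""
    def apply(value):
        return UNSET if value < threshold else value
    return apply

f = unset_if_lt(10.0)
assert f(3.0) is UNSET     # below threshold → sentinel
assert f(12.5) == 12.5     # at-or-above threshold → unchanged
```

Using a dedicated sentinel object rather than `None` keeps "explicitly unset" distinguishable from a genuinely null value downstream.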
/*
 * Licensed to the Ted Dunning under one or more contributor license
 * agreements.  See the NOTICE file that may be
 * distributed with this work for additional information
 * regarding copyright ownership.  Ted Dunning licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package com.mapr.synth.drive;

public class Constants {
    static final double EARTH_RADIUS_KM = 6371.39;

    // acceleration of gravity in m/s/s
    static final double G = 9.80665;

    // convert MPH to m/s
    static final double MPH = (5280 * 12 * 2.54) / (100 * 3600);

    // how far away must points be to be considered distinct?
    static final double GEO_FUZZ = 0.005;
}
smarthi/log-synth
src/main/java/com/mapr/synth/drive/Constants.java
Java
apache-2.0
1,185
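The conversion factor defined for `MPH` in `Constants.java` above follows from 1 mile = 5280 ft, 1 ft = 12 in, and 1 in = 2.54 cm; dividing by 100 converts centimeters to meters and by 3600 converts hours to seconds. A quick numeric sanity check (Python is used here only for the arithmetic):

```python
# Sanity check of the conversion factor defined in Constants.java:
# miles/hour → meters/second via ft, in, and cm.
MPH = (5280 * 12 * 2.54) / (100 * 3600)   # m/s per mile-per-hour

# The well-known exact value is 0.44704 m/s per mph.
assert abs(MPH - 0.44704) < 1e-12

# Example: a 60 mph highway speed expressed in SI units.
print(f"{60 * MPH:.4f} m/s")   # → 26.8224 m/s
```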
<!DOCTYPE html> <html lang="de"> <head> <title>2 m Funkpeilen in Preding</title> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta name="description" content=""> <meta name="keywords" content=""> <meta name="robots" content="index, follow"> <meta name="revisit-after" content="7 days"> <link rel="apple-touch-icon" href="/export/sites/oe6/adl622/favicon.png" /> <link rel="icon" href="/export/sites/oe6/adl622/favicon.png" type="image/png" /> <link rel="stylesheet" href="/export/system/modules/at.oevsv.apollo.extensions/resources/css/style-oevsv.min.css" /> <link rel="alternate" type="application/rss+xml" href="/adl622/rss-events.xml" title="Veranstaltungen & Termine"> <link rel="alternate" type="application/rss+xml" href="/adl622/rss-news.xml" title="Aktuelles"> </head> <body> <div class="wrapper"> <div id="page-complete" > <div class=""><div ><div class="mb-20"> <div class="header topheader-oevsv"> <div class="container oevsv"> <!--=== Top ===--> <div class="topbar"> <ul class="loginbar pull-right"> <li class="hoverSelector"> <i class="fa fa-globe"></i> <a>ÖVSV - LANDESVERBÄNDE</a> <ul class="languages hoverSelectorBlock"> <li><a href="http://oe1.oevsv.at">OE1 Wien</a></li> <li><a href="http://oe2.oevsv.at">OE2 Salzburg</a></li> <li><a href="http://oe3.oevsv.at">OE3 Niederösterreich</a></li> <li><a href="http://oe4.oevsv.at">OE4 Burgenland</a></li> <li><a href="http://oe5.oevsv.at">OE5 Oberösterreich</a></li> <li class="active"><a href="http://oe6.oevsv.at">OE6 Steiermark&nbsp;<i class="fa fa-check"></i></a></li> <li><a href="http://oe7.oevsv.at">OE7 Tirol</a></li> <li><a href="http://oe8.oevsv.at">OE8 Kärnten</a></li> <li><a href="http://oe9.oevsv.at">OE9 Vorarlberg</a></li> <li><a href="http://amrs.oevsv.at">AMRS</a></li> <li><a href="http://www.oevsv.at">Dachverband</a></li> </ul> </li> <li class="topbar-devider"></li> <li class="hoverSelector"> <i class="fa 
fa-key"></i> <a href="http://workplace.oevsv.at/system/login/">Login</a> </li> </ul> </div> <!--=== End Top ===--> </div> </div> <div class="container oevsv"> <div class="row"> <div class="col-xs-12"> <a href="/"> <img src="/export/shared/.content/.galleries/logos/Kopfleisten-OE6/Kopfleiste-Steiermark-ADL622.png" alt="" class="img-responsive"> </a> </div> </div> </div> <div class="header"> <div class="container oevsv"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-responsive-collapse"> <span class="sr-only">Toggle navigation</span> <span class="fa fa-bars"></span> </button> </div> <!--/end container--> <!-- Menu --> <div class="collapse navbar-collapse mega-menu navbar-responsive-collapse"> <div class="container mt-10"> <ul class="nav navbar-nav"> <li > <a href="/adl622/home/" >Home</a> </li><li > <a href="/adl622/ehrungen/" >Ehrungen</a> </li><li > <a href="/adl622/geschichte/" >Geschichte</a> </li><li > <a href="/adl622/trx-nostalgie/" >TRX-Nostalgie</a> </li><li class="active"> <a href="/adl622/veranstaltungen/" >Veranstaltungen</a> </li><li > <a href="/adl622/videos/" >Videos</a> </li><li id="searchButtonHeader"> <i class="search fa fa-search search-btn"></i> <div class="search-open"> <form class="form-inline" name="searchFormHeader" action="/adl622/suche/" method="post"> <div class="input-group animated fadeInDown" id="searchContentHeader"> <input type="text" name="q" class="form-control" placeholder="Search" id="searchWidgetAutoCompleteHeader" /> <span class="input-group-btn"> <button class="btn-u" type="button" onclick="this.form.submit(); return false;">Go</button> </span> </div> </form> </div> </li> </ul> </div><!--/end container--> </div><!-- /collapse --> </div> <!--/header --> </div> </div></div> <div > <div class="container "><div > <div class="row "><div class="col-xs-12" > <div class="row "><div class="col-xs-12" ><div class="row ap-sec "> <div class="col-sm-12"> <div class="headline"> <h2 > 
Rennfeldrunde</h2> </div> </div> <div class="col-xs-2 col-sm-3"> <div class="ap-img ap-img-v0"> <div class="ap-img-pic "> <a data-gallery="true" class="zoomer" data-size="w:800,h:600" href="/export/sites/oe6/adl622/.galleries/Bilder/Rennfeld-Schutzhaus_10.jpg" title=" " data-rel="fancybox-button-98cb02dc-4ead-11e8-8f2c-ab9d06357455" id="fancyboxzoom98cb02dc-4ead-11e8-8f2c-ab9d06357455"> <span class="overlay-zoom"> <span > <img src="/export/sites/oe6/adl622/.galleries/Bilder/Rennfeld-Schutzhaus_10.jpg" class="img-responsive " alt=" " title=" " /> </span> <span class="zoom-icon"></span> </span> </a> </div> </div> </div> <div class="col-xs-10 col-sm-9"> <div class="ap-plain" > <div > <p>Datum: jeden Freitag</p> <p>Zeit: ab 18:00 LT</p> <p>Mode: FM anschließend C4FM</p> <p>QRG: R2 OE6XBG 145.650 Mhz</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p></div> </div> </div> </div><div class="row ap-sec "> <div class="col-sm-12"> <div class="headline"> <h2 > Ferienpass 2020</h2> </div> </div> <div class="col-xs-2 col-sm-3"> <div class="ap-img ap-img-v0"> <div class="ap-img-pic "> <a data-gallery="true" class="zoomer" data-size="w:1024,h:580" href="/export/sites/oe6/adl622/.galleries/Bilder/ADL622_Ferienpass_BM-Logo_04.jpg" title=" " data-rel="fancybox-button-f074e8fb-3d58-11e8-96d2-ab9d06357455" id="fancyboxzoomf074e8fb-3d58-11e8-96d2-ab9d06357455"> <span class="overlay-zoom"> <span > <img src="/export/sites/oe6/adl622/.galleries/Bilder/ADL622_Ferienpass_BM-Logo_04.jpg" class="img-responsive " alt=" " title=" " /> </span> <span class="zoom-icon"></span> </span> </a> </div> </div> </div> <div class="col-xs-10 col-sm-9"> <div class="ap-plain" > <div > <p>Datum wird bekannt gegeben.</p> <p>Zeit: ab 14:00 LT</p> <p>Ort: Brucker Uhrturm</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p></div> </div> </div> </div><div class="row ap-sec "> <div class="col-sm-12"> <div class="headline"> <h2 > 30 Jahre Feier ADL622</h2> </div> </div> <div class="col-xs-2 col-sm-3"> 
<div class="ap-img ap-img-v0"> <div class="ap-img-pic "> <a data-gallery="true" class="zoomer" data-size="w:1024,h:737" href="/export/sites/oe6/adl622/.galleries/Bilder/30-Jahre-ADL622_01.jpg" title=" " data-rel="fancybox-button-1051f420-3d59-11e8-96d2-ab9d06357455" id="fancyboxzoom1051f420-3d59-11e8-96d2-ab9d06357455"> <span class="overlay-zoom"> <span > <img src="/export/sites/oe6/adl622/.galleries/Bilder/30-Jahre-ADL622_01.jpg" class="img-responsive " alt=" " title=" " /> </span> <span class="zoom-icon"></span> </span> </a> </div> </div> </div> <div class="col-xs-10 col-sm-9"> <div class="ap-plain" > <div > <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p></div> </div> </div> </div></div></div> <div class="row "><div class="col-sm-6" ></div><div class="col-sm-6" ></div></div> </div></div> </div></div> </div> <div id="footer-default" class="footer-default mt-10"> <div class="footer footer-oevsv"> <div class="container"><div > <div class="row mb-20"><div class="hidden-xs hidden-sm col-md-3" ><div class="ap-sec "> <div class="ap-img ap-img-v0"> <div class="ap-img-pic "> <a data-gallery="true" class="zoomer" data-size="w:200,h:145" href="/export/shared/.content/.galleries/logos/OeVSV-Logo-weiss.png" title=" " data-rel="fancybox-button-87644cfd-a1ec-11e6-9755-ab9d06357455" id="fancyboxzoom87644cfd-a1ec-11e6-9755-ab9d06357455"> <span class="overlay-zoom"> <span > <img src="/export/shared/.content/.galleries/logos/OeVSV-Logo-weiss.png" class="img-responsive " alt=" " title=" " /> </span> <span class="zoom-icon"></span> </span> </a> </div> </div> </div> </div><div class="col-xs-12 col-sm-4 col-md-3" ></div><div class="col-xs-12 col-sm-4 col-md-3" ></div><div class="col-xs-12 col-sm-4 col-md-3" ><div class="ap-sec "> <div class="headline"> <h2 > Kontakt</h2> </div> <div class="ap-plain" > <div > <p>Landesleiter:<br />Ing. 
Thomas Zurk, OE6TZE<br /><em class="fa fa-envelope">&nbsp;</em> <a href="mailto:oe6tzg@oevsv.at">oe6tzg@oevsv.at</a></p> <p>Web ADL622:<br /><em class="fa fa-envelope">&nbsp; <a href="mailto:oe6swd@oevsv.at">oe6swd@oevsv.at</a></em></p></div> </div> </div></div></div> <div> <div class="row"> <div class="col-sm-4 col-md-3 mb-20"> <a href="#top"><i class="fa fa-caret-square-o-up"></i> zum Seitenanfang</a> </div> <div class="col-sm-4 col-md-3 mb-20"> <a href="http://www.oevsv.at/map/"><i class="fa fa-folder-open"></i> Sitemap</a> </div> <div class="col-sm-4 col-md-3 mb-20"> <ul class="social-oevsv"> <li><a href=""><img src="/export/shared/.content/.galleries/icons/Facebook.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/Twitter.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/Instagram.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/YouTube.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/Google.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/RSS.png" alt="" /></a></li> <li><a href=""><img src="/export/shared/.content/.galleries/icons/Pinterest.png" alt="" /></a></li> </ul> <div class="clearfix"></div> </div> <div class="col-sm-12 col-md-3 mb-20"> <a href="https://oevsv.at/impressum/">Impressum</a> </div> </div></div> <div> <!-- Piwik --> <script type="text/javascript"> var _paq = _paq || []; _paq.push(["setDomains", ["*.www.oe6.oevsv.at"]]); _paq.push(['trackPageView']); _paq.push(['enableLinkTracking']); (function() { var u="//piwik.oevsv.at/"; _paq.push(['setTrackerUrl', u+'piwik.php']); _paq.push(['setSiteId', '7']); var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0]; g.type='text/javascript'; g.async=true; g.defer=true; g.src=u+'piwik.js'; s.parentNode.insertBefore(g,s); })(); </script> <noscript><p><img
src="/piwik.php?idsite=7" style="border:0;" alt="" /></p></noscript> <!-- End Piwik Code --> </div> </div></div> </div> </div> </div></div> <!--/wrapper--> <script type="text/javascript" src="/export/system/modules/org.opencms.apollo.template.basics/resources/js/scripts-all.min.js"></script> <script type="text/javascript" src="/export/system/modules/org.opencms.apollo.template.basics/resources/js/ics.min.js"></script> <script type="text/javascript" src="/export/system/modules/org.opencms.apollo.template.basics/resources/js/ics.deps.min.js"></script> <script type="text/javascript"> jQuery(document).ready(function() { initIcalDownload(); }); </script> <script type="text/javascript"> jQuery(document).ready(function() { App.init(); try { createBanner(); } catch (e) {} try { $("#list_pagination").bootstrapPaginator(options); } catch (e) {} }); </script> <!--[if lt IE 9]> <script src="/export/system/modules/org.opencms.apollo.template.basics/resources/compatibility/respond.js"></script> <script src="/export/system/modules/org.opencms.apollo.template.basics/resources/compatibility/html5shiv.js"></script> <script src="/export/system/modules/org.opencms.apollo.template.basics/resources/compatibility/placeholder-IE-fixes.js"></script> <![endif]--> </body> </html>
oevsv-lv6/homepage-backup
oe6.oevsv.at/adl622/veranstaltungen/2-m-Funkpeilen-in-Preding/index.html
HTML
apache-2.0
14,559
________________________________________________________________________

This file is part of Logtalk <https://logtalk.org/>
Copyright 1998-2022 Paulo Moura <pmoura@logtalk.org>
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
________________________________________________________________________


The examples in this folder are adapted from the SICStus Prolog manual.

To load this example and for sample queries, please see the `SCRIPT.txt` file.
LogtalkDotOrg/logtalk3
examples/sicstus/NOTES.md
Markdown
apache-2.0
971
// This snippet file was generated by processing the source file:
// ./storage-next/list-files.js
//
// To update the snippets in this file, edit the source and then run
// 'npm run snippets'.

// [START storage_list_paginate_modular]
import { getStorage, ref, list } from "firebase/storage";

async function pageTokenExample(){
  // Create a reference under which you want to list
  const storage = getStorage();
  const listRef = ref(storage, 'files/uid');

  // Fetch the first page of 100.
  const firstPage = await list(listRef, { maxResults: 100 });

  // Use the result.
  // processItems(firstPage.items)
  // processPrefixes(firstPage.prefixes)

  // Fetch the second page if there are more elements.
  if (firstPage.nextPageToken) {
    const secondPage = await list(listRef, {
      maxResults: 100,
      pageToken: firstPage.nextPageToken,
    });
    // processItems(secondPage.items)
    // processPrefixes(secondPage.prefixes)
  }
}
// [END storage_list_paginate_modular]
firebase/snippets-web
snippets/storage-next/list-files/storage_list_paginate.js
JavaScript
apache-2.0
987
/* Copyright 2021 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "pybind11/pybind11.h"

#include "tensorflow/compiler/mlir/lite/flatbuffer_to_mlir.h"
#include "tensorflow/lite/python/analyzer_wrapper/model_analyzer.h"

PYBIND11_MODULE(_pywrap_analyzer_wrapper, m) {
  m.def(
      "ModelAnalyzer",
      [](const std::string& model_path, bool input_is_filepath) {
        return ::tflite::model_analyzer(model_path, input_is_filepath);
      },
      R"pbdoc(
      Returns txt dump of the given TFLite file.
      )pbdoc");
  m.def(
      "FlatBufferToMlir",
      [](const std::string& model_path, bool input_is_filepath) {
        return ::mlir::TFL::FlatBufferFileToMlir(model_path,
                                                 input_is_filepath);
      },
      R"pbdoc(
      Returns MLIR dump of the given TFLite file.
      )pbdoc");
}
sarvex/tensorflow
tensorflow/lite/python/analyzer_wrapper/analyzer_wrapper.cc
C++
apache-2.0
1,404
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.cloudera.sqoop; import com.cloudera.sqoop.testutil.ExportJobTestCase; import com.google.common.collect.Lists; import org.apache.avro.Schema; import org.apache.avro.Schema.Field; import org.apache.avro.generic.GenericData; import org.apache.avro.generic.GenericRecord; import org.junit.Rule; import org.junit.Test; import org.junit.rules.ExpectedException; import org.kitesdk.data.*; import java.io.IOException; import java.nio.ByteBuffer; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.util.ArrayList; import java.util.List; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; /** * Test that we can export Parquet Data Files from HDFS into databases. */ public class TestParquetExport extends ExportJobTestCase { @Rule public ExpectedException thrown = ExpectedException.none(); /** * @return an argv for the CodeGenTool to use when creating tables to export. */ protected String [] getCodeGenArgv(String... 
extraArgs) { List<String> codeGenArgv = new ArrayList<String>(); if (null != extraArgs) { for (String arg : extraArgs) { codeGenArgv.add(arg); } } codeGenArgv.add("--table"); codeGenArgv.add(getTableName()); codeGenArgv.add("--connect"); codeGenArgv.add(getConnectString()); return codeGenArgv.toArray(new String[0]); } /** When generating data for export tests, each column is generated according to a ColumnGenerator. Methods exist for determining what to put into Parquet objects in the files to export, as well as what the object representation of the column as returned by the database should look like. */ public interface ColumnGenerator { /** For a row with id rowNum, what should we write into that Parquet record to export? */ Object getExportValue(int rowNum); /** Return the Parquet schema for the field. */ Schema getColumnParquetSchema(); /** For a row with id rowNum, what should the database return for the given column's value? */ Object getVerifyValue(int rowNum); /** Return the column type to put in the CREATE TABLE statement. */ String getColumnType(); } private ColumnGenerator colGenerator(final Object exportValue, final Schema schema, final Object verifyValue, final String columnType) { return new ColumnGenerator() { @Override public Object getVerifyValue(int rowNum) { return verifyValue; } @Override public Object getExportValue(int rowNum) { return exportValue; } @Override public String getColumnType() { return columnType; } @Override public Schema getColumnParquetSchema() { return schema; } }; } /** * Create a data file that gets exported to the db. * @param fileNum the number of the file (for multi-file export) * @param numRecords how many records to write to the file. */ protected void createParquetFile(int fileNum, int numRecords, ColumnGenerator... 
extraCols) throws IOException { String uri = "dataset:file:" + getTablePath(); Schema schema = buildSchema(extraCols); DatasetDescriptor descriptor = new DatasetDescriptor.Builder() .schema(schema) .format(Formats.PARQUET) .build(); Dataset dataset = Datasets.create(uri, descriptor); DatasetWriter writer = dataset.newWriter(); try { for (int i = 0; i < numRecords; i++) { GenericRecord record = new GenericData.Record(schema); record.put("id", i); record.put("msg", getMsgPrefix() + i); addExtraColumns(record, i, extraCols); writer.write(record); } } finally { writer.close(); } } private Schema buildSchema(ColumnGenerator... extraCols) { List<Field> fields = new ArrayList<Field>(); fields.add(buildField("id", Schema.Type.INT)); fields.add(buildField("msg", Schema.Type.STRING)); int colNum = 0; if (null != extraCols) { for (ColumnGenerator gen : extraCols) { if (gen.getColumnParquetSchema() != null) { fields.add(buildParquetField(forIdx(colNum++), gen.getColumnParquetSchema())); } } } Schema schema = Schema.createRecord("myschema", null, null, false); schema.setFields(fields); return schema; } private void addExtraColumns(GenericRecord record, int rowNum, ColumnGenerator[] extraCols) { int colNum = 0; if (null != extraCols) { for (ColumnGenerator gen : extraCols) { if (gen.getColumnParquetSchema() != null) { record.put(forIdx(colNum++), gen.getExportValue(rowNum)); } } } } private Field buildField(String name, Schema.Type type) { return new Field(name, Schema.create(type), null, null); } private Field buildParquetField(String name, Schema schema) { return new Field(name, schema, null, null); } /** Return the column name for a column index. * Each table contains two columns named 'id' and 'msg', and then an * arbitrary number of additional columns defined by ColumnGenerators. * These columns are referenced by idx 0, 1, 2... * @param idx the index of the ColumnGenerator in the array passed to * createTable(). 
* @return the name of the column */ protected String forIdx(int idx) { return "col" + idx; } /** * Return a SQL statement that drops a table, if it exists. * @param tableName the table to drop. * @return the SQL statement to drop that table. */ protected String getDropTableStatement(String tableName) { return "DROP TABLE " + tableName + " IF EXISTS"; } /** Create the table definition to export to, removing any prior table. By specifying ColumnGenerator arguments, you can add extra columns to the table of arbitrary type. */ private void createTable(ColumnGenerator... extraColumns) throws SQLException { Connection conn = getConnection(); PreparedStatement statement = conn.prepareStatement( getDropTableStatement(getTableName()), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); try { statement.executeUpdate(); conn.commit(); } finally { statement.close(); } StringBuilder sb = new StringBuilder(); sb.append("CREATE TABLE "); sb.append(getTableName()); sb.append(" (\"ID\" INT NOT NULL PRIMARY KEY, \"MSG\" VARCHAR(64)"); int colNum = 0; for (ColumnGenerator gen : extraColumns) { if (gen.getColumnType() != null) { sb.append(", \"" + forIdx(colNum++) + "\" " + gen.getColumnType()); } } sb.append(")"); statement = conn.prepareStatement(sb.toString(), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); try { statement.executeUpdate(); conn.commit(); } finally { statement.close(); } } /** * Create the table definition to export and also inserting one records for * identifying the updates. 
Issue [SQOOP-2846] */ private void createTableWithInsert() throws SQLException { Connection conn = getConnection(); PreparedStatement statement = conn.prepareStatement(getDropTableStatement(getTableName()), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); try { statement.executeUpdate(); conn.commit(); } finally { statement.close(); } StringBuilder sb = new StringBuilder(); sb.append("CREATE TABLE "); sb.append(getTableName()); sb.append(" (id INT NOT NULL PRIMARY KEY, msg VARCHAR(64)"); sb.append(")"); statement = conn.prepareStatement(sb.toString(), ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); try { statement.executeUpdate(); Statement statement2 = conn.createStatement(); String insertCmd = "INSERT INTO " + getTableName() + " (ID,MSG) VALUES(" + 0 + ",'testMsg');"; statement2.execute(insertCmd); conn.commit(); } finally { statement.close(); } } /** Verify that on a given row, a column has a given value. * @param id the id column specifying the row to test. */ private void assertColValForRowId(int id, String colName, Object expectedVal) throws SQLException { Connection conn = getConnection(); LOG.info("Verifying column " + colName + " has value " + expectedVal); PreparedStatement statement = conn.prepareStatement( "SELECT \"" + colName + "\" FROM " + getTableName() + " WHERE \"ID\" = " + id, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); Object actualVal = null; try { ResultSet rs = statement.executeQuery(); try { rs.next(); actualVal = rs.getObject(1); } finally { rs.close(); } } finally { statement.close(); } if (expectedVal != null && expectedVal instanceof byte[]) { assertArrayEquals((byte[]) expectedVal, (byte[]) actualVal); } else { assertEquals("Got unexpected column value", expectedVal, actualVal); } } /** Verify that for the max and min values of the 'id' column, the values for a given column meet the expected values. 
*/ protected void assertColMinAndMax(String colName, ColumnGenerator generator) throws SQLException { Connection conn = getConnection(); int minId = getMinRowId(conn); int maxId = getMaxRowId(conn); LOG.info("Checking min/max for column " + colName + " with type " + generator.getColumnType()); Object expectedMin = generator.getVerifyValue(minId); Object expectedMax = generator.getVerifyValue(maxId); assertColValForRowId(minId, colName, expectedMin); assertColValForRowId(maxId, colName, expectedMax); } @Test public void testSupportedParquetTypes() throws IOException, SQLException { String[] argv = {}; final int TOTAL_RECORDS = 1 * 10; byte[] b = new byte[] { (byte) 1, (byte) 2 }; Schema fixed = Schema.createFixed("myfixed", null, null, 2); Schema enumeration = Schema.createEnum("myenum", null, null, Lists.newArrayList("a", "b")); ColumnGenerator[] gens = new ColumnGenerator[] { colGenerator(true, Schema.create(Schema.Type.BOOLEAN), true, "BIT"), colGenerator(100, Schema.create(Schema.Type.INT), 100, "INTEGER"), colGenerator(200L, Schema.create(Schema.Type.LONG), 200L, "BIGINT"), // HSQLDB maps REAL to double, not float: colGenerator(1.0f, Schema.create(Schema.Type.FLOAT), 1.0d, "REAL"), colGenerator(2.0d, Schema.create(Schema.Type.DOUBLE), 2.0d, "DOUBLE"), colGenerator("s", Schema.create(Schema.Type.STRING), "s", "VARCHAR(8)"), colGenerator(ByteBuffer.wrap(b), Schema.create(Schema.Type.BYTES), b, "VARBINARY(8)"), colGenerator(new GenericData.Fixed(fixed, b), fixed, b, "BINARY(2)"), colGenerator(new GenericData.EnumSymbol(enumeration, "a"), enumeration, "a", "VARCHAR(8)"), }; createParquetFile(0, TOTAL_RECORDS, gens); createTable(gens); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); verifyExport(TOTAL_RECORDS); for (int i = 0; i < gens.length; i++) { assertColMinAndMax(forIdx(i), gens[i]); } } @Test public void testNullableField() throws IOException, SQLException { String[] argv = {}; final int TOTAL_RECORDS = 1 * 10; List<Schema> childSchemas = 
new ArrayList<Schema>(); childSchemas.add(Schema.create(Schema.Type.NULL)); childSchemas.add(Schema.create(Schema.Type.STRING)); Schema schema = Schema.createUnion(childSchemas); ColumnGenerator gen0 = colGenerator(null, schema, null, "VARCHAR(64)"); ColumnGenerator gen1 = colGenerator("s", schema, "s", "VARCHAR(64)"); createParquetFile(0, TOTAL_RECORDS, gen0, gen1); createTable(gen0, gen1); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); verifyExport(TOTAL_RECORDS); assertColMinAndMax(forIdx(0), gen0); assertColMinAndMax(forIdx(1), gen1); } @Test public void testParquetRecordsNotSupported() throws IOException, SQLException { String[] argv = {}; final int TOTAL_RECORDS = 1; Schema schema = Schema.createRecord("nestedrecord", null, null, false); schema.setFields(Lists.newArrayList(buildField("myint", Schema.Type.INT))); GenericRecord record = new GenericData.Record(schema); record.put("myint", 100); // DB type is not used so can be anything: ColumnGenerator gen = colGenerator(record, schema, null, "VARCHAR(64)"); createParquetFile(0, TOTAL_RECORDS, gen); createTable(gen); thrown.expect(Exception.class); thrown.reportMissingExceptionWithMessage("Expected Exception as Parquet records are not supported"); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); } @Test public void testMissingDatabaseFields() throws IOException, SQLException { String[] argv = {}; final int TOTAL_RECORDS = 1; // null column type means don't create a database column // the Parquet value will not be exported ColumnGenerator gen = colGenerator(100, Schema.create(Schema.Type.INT), null, null); createParquetFile(0, TOTAL_RECORDS, gen); createTable(gen); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); verifyExport(TOTAL_RECORDS); } @Test public void testParquetWithUpdateKey() throws IOException, SQLException { String[] argv = { "--update-key", "ID" }; final int TOTAL_RECORDS = 1; createParquetFile(0, TOTAL_RECORDS, null); createTableWithInsert(); 
runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); verifyExport(getMsgPrefix() + "0"); } // Test Case for Issue [SQOOP-2846] @Test public void testParquetWithUpsert() throws IOException, SQLException { String[] argv = { "--update-key", "ID", "--update-mode", "allowinsert" }; final int TOTAL_RECORDS = 2; // ColumnGenerator gen = colGenerator("100", // Schema.create(Schema.Type.STRING), null, "VARCHAR(64)"); createParquetFile(0, TOTAL_RECORDS, null); createTableWithInsert(); thrown.expect(Exception.class); thrown.reportMissingExceptionWithMessage("Expected Exception during Parquet export with --update-mode"); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); } @Test public void testMissingParquetFields() throws IOException, SQLException { String[] argv = {}; final int TOTAL_RECORDS = 1; // null Parquet schema means don't create an Parquet field ColumnGenerator gen = colGenerator(null, null, null, "VARCHAR(64)"); createParquetFile(0, TOTAL_RECORDS, gen); createTable(gen); thrown.expect(Exception.class); thrown.reportMissingExceptionWithMessage("Expected Exception on missing Parquet fields"); runExport(getArgv(true, 10, 10, newStrArray(argv, "-m", "" + 1))); } }
bonnetb/sqoop
src/test/com/cloudera/sqoop/TestParquetExport.java
Java
apache-2.0
15,893
# laravel-ajax-destroy

Allows requesting the REST `destroy()` action without sending a form (by clicking on a link).

- This is an extension of the **https://www.github.com/WhipsterCZ/laravel-ajax** library.
- The `destroy()` action will be called with **AJAX**.
- The user has to confirm the deletion. A custom message can be provided.

Installation
------------

1) Install **https://github.com/whipsterCZ/laravel-ajax**
2) Copy the source code to your public directory
3) Add the scripts to your template

~~~~~ html
<script src="/js/laravel.ajax.js"></script>
<script src="/js/laravel.ajax.destroy.js"></script>
~~~~~

or use browserify

~~~~~ js
require('laravel-ajax');
require('laravel-ajax.destroy');
~~~~~

## Usage

HTML

~~~~~ html
<a href="{{ route('model.destroy', $model) }}" class="destroy" data-confirm="Are you sure?">Delete</a>
~~~~~

Laravel action

~~~~~ php
public function destroy($id)
{
    Model::find($id)->delete();

    return \Ajax::redirectBack();
}
~~~~~
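The README above describes the mechanism — click a `.destroy` link, confirm, then issue the delete over AJAX — without showing the script side. Below is a minimal, hypothetical sketch of that pattern in plain JavaScript; the function names (`buildDestroyRequest`, `bindDestroyLinks`), the CSRF-token parameter, and the use of Laravel's `_method=DELETE` spoofing are illustrative assumptions, not the actual source of `laravel.ajax.destroy.js`:

```javascript
// Sketch of the "click a link → confirm → send DELETE" pattern.
// Names and details here are assumptions, not the library's real API.

// Build request options for a destroy call. Laravel treats a POST body
// carrying _method=DELETE as a DELETE request (HTTP method spoofing),
// and expects the CSRF token in the _token field.
function buildDestroyRequest(csrfToken) {
  const body = new URLSearchParams();
  body.append('_method', 'DELETE');
  body.append('_token', csrfToken);
  return {
    method: 'POST', // Laravel resolves the real verb from _method
    headers: { 'X-Requested-With': 'XMLHttpRequest' },
    body: body,
  };
}

// Delegated click handler: intercept clicks on a.destroy links,
// ask for confirmation (data-confirm or a default), then send.
function bindDestroyLinks(root, csrfToken) {
  root.addEventListener('click', (event) => {
    const link = event.target.closest('a.destroy');
    if (!link) return;
    event.preventDefault();
    const message = link.dataset.confirm || 'Are you sure?';
    if (!window.confirm(message)) return;
    fetch(link.href, buildDestroyRequest(csrfToken));
  });
}
```

Method spoofing is used here because plain links and forms cannot issue a native DELETE; an equally valid variant would send `method: 'DELETE'` directly with an `X-CSRF-TOKEN` header, which Laravel also accepts for AJAX requests.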
whipsterCZ/laravel-ajax-destroy
README.md
Markdown
apache-2.0
955
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("Data.Web.JobMine")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("")]
[assembly: AssemblyProduct("Data.Web.JobMine")]
[assembly: AssemblyCopyright("Copyright © 2014")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components.  If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("c471204b-13e3-466d-8c72-4010481f1d8e")]

// Version information for an assembly consists of the following four values:
//
//      Major Version
//      Minor Version
//      Build Number
//      Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
BillWenChaoJiang/JobSearchEnhancer
Data.Web.JobMine/Properties/AssemblyInfo.cs
C#
apache-2.0
1,408
<== [Introduction to the Veritrans Sandbox System](../02-sandbox/README.md)

3. Veritrans Payment API
=========================================

## 3.1 RESTful API

## 3.2 Payment Request

## 3.3 Payment Response

## 3.4 Transaction Status

## 3.5 Fraud Status

## 3.6 Payment Type

## 3.7 Status Code

==> [Credit Card Transactions](../04-kartu-kredit/README.md)
dannypranoto93/papi-docs
03-payment-api/README.md
Markdown
apache-2.0
352
package com.notronix.lw.api.model; import java.time.Instant; import java.util.List; import java.util.UUID; public class OrderRefundHeader { private Integer RefundHeaderId; private UUID OrderId; private Integer NumOrderId; private String ExternalReference; private Instant CreatedDate; private String Currency; private Double Amount; private PostSaleStatus Status; private Boolean Actioned; private Instant LastActionDate; private String OrderSource; private String OrderSubSource; private Boolean ChannelInitiated; private List<VerifiedRefund> RefundLines; private String RefundLink; public Integer getRefundHeaderId() { return RefundHeaderId; } public void setRefundHeaderId(Integer refundHeaderId) { RefundHeaderId = refundHeaderId; } public UUID getOrderId() { return OrderId; } public void setOrderId(UUID orderId) { OrderId = orderId; } public Integer getNumOrderId() { return NumOrderId; } public void setNumOrderId(Integer numOrderId) { NumOrderId = numOrderId; } public String getExternalReference() { return ExternalReference; } public void setExternalReference(String externalReference) { ExternalReference = externalReference; } public Instant getCreatedDate() { return CreatedDate; } public void setCreatedDate(Instant createdDate) { CreatedDate = createdDate; } public String getCurrency() { return Currency; } public void setCurrency(String currency) { Currency = currency; } public Double getAmount() { return Amount; } public void setAmount(Double amount) { Amount = amount; } public PostSaleStatus getStatus() { return Status; } public void setStatus(PostSaleStatus status) { Status = status; } public Boolean getActioned() { return Actioned; } public void setActioned(Boolean actioned) { Actioned = actioned; } public Instant getLastActionDate() { return LastActionDate; } public void setLastActionDate(Instant lastActionDate) { LastActionDate = lastActionDate; } public String getOrderSource() { return OrderSource; } public void setOrderSource(String orderSource) { OrderSource = 
orderSource; } public String getOrderSubSource() { return OrderSubSource; } public void setOrderSubSource(String orderSubSource) { OrderSubSource = orderSubSource; } public Boolean getChannelInitiated() { return ChannelInitiated; } public void setChannelInitiated(Boolean channelInitiated) { ChannelInitiated = channelInitiated; } public List<VerifiedRefund> getRefundLines() { return RefundLines; } public void setRefundLines(List<VerifiedRefund> refundLines) { RefundLines = refundLines; } public String getRefundLink() { return RefundLink; } public void setRefundLink(String refundLink) { RefundLink = refundLink; } }
Notronix/JaLAPI
src/main/java/com/notronix/lw/api/model/OrderRefundHeader.java
Java
apache-2.0
3,337
/*
 * Copyright (c) 2016, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.wso2.carbon.security.caas.user.core.store.connector;

/**
 * Factory to create @see AuthorizationStoreConnector instances.
 */
public interface AuthorizationStoreConnectorFactory {

    /**
     * Get @see AuthorizationStoreConnector instance.
     * @return AuthorizationStoreConnector.
     */
    AuthorizationStoreConnector getInstance();
}
thanujalk/carbon-security
components/org.wso2.carbon.security.caas/src/main/java/org/wso2/carbon/security/caas/user/core/store/connector/AuthorizationStoreConnectorFactory.java
Java
apache-2.0
995
/*
 * Copyright (C) 2014 Alejandro Rodriguez Salamanca.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.amazing_mvp.core.view.recyclerviewrenderers.interfaces;

/**
 * @author Alejandro Rodriguez <https://github.com/Alexrs95>
 * <p/>
 * Class containing the data must implement this interface
 */
public interface Renderable {

  /**
   * @return the ID of the Layout to inflate
   */
  int getRenderableId();
}
Pierry/Amazing-MVP
amazingMVP/src/main/java/com/amazing_mvp/core/view/recyclerviewrenderers/interfaces/Renderable.java
Java
apache-2.0
950
using System;
using System.Collections.Generic;
using System.Text;
using System.Runtime.Serialization;

namespace PalmeralGenNHibernate.Exceptions
{
    public class DataLayerException : SystemException, ISerializable
    {
        public DataLayerException() : base ()
        {
            // Add implementation (if required)
        }

        public DataLayerException(string message) : base (message)
        {
            // Add Implementation (if required)
        }

        public DataLayerException(string message, System.Exception inner) : base (message, inner)
        {
            // Add implementation (if required)
        }

        protected DataLayerException(SerializationInfo info, StreamingContext context) : base (info, context)
        {
            // Add implementation (if required)
        }
    }
}
pablovargan/winforms-ooh4ria
PalmeralGenNHibernate/Exceptions/DataLayerException.cs
C#
apache-2.0
736
/* * Copyright 2014-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with * the License. A copy of the License is located at * * http://aws.amazon.com/apache2.0 * * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions * and limitations under the License. */ package com.amazonaws.services.apigatewayv2.model; import java.io.Serializable; import javax.annotation.Generated; import com.amazonaws.AmazonWebServiceRequest; @Generated("com.amazonaws:aws-java-sdk-code-generator") public class GetModelTemplateRequest extends com.amazonaws.AmazonWebServiceRequest implements Serializable, Cloneable { /** * <p> * The API identifier. * </p> */ private String apiId; /** * <p> * The model ID. * </p> */ private String modelId; /** * <p> * The API identifier. * </p> * * @param apiId * The API identifier. */ public void setApiId(String apiId) { this.apiId = apiId; } /** * <p> * The API identifier. * </p> * * @return The API identifier. */ public String getApiId() { return this.apiId; } /** * <p> * The API identifier. * </p> * * @param apiId * The API identifier. * @return Returns a reference to this object so that method calls can be chained together. */ public GetModelTemplateRequest withApiId(String apiId) { setApiId(apiId); return this; } /** * <p> * The model ID. * </p> * * @param modelId * The model ID. */ public void setModelId(String modelId) { this.modelId = modelId; } /** * <p> * The model ID. * </p> * * @return The model ID. */ public String getModelId() { return this.modelId; } /** * <p> * The model ID. * </p> * * @param modelId * The model ID. * @return Returns a reference to this object so that method calls can be chained together. 
*/ public GetModelTemplateRequest withModelId(String modelId) { setModelId(modelId); return this; } /** * Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be * redacted from this string using a placeholder value. * * @return A string representation of this object. * * @see java.lang.Object#toString() */ @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("{"); if (getApiId() != null) sb.append("ApiId: ").append(getApiId()).append(","); if (getModelId() != null) sb.append("ModelId: ").append(getModelId()); sb.append("}"); return sb.toString(); } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (obj instanceof GetModelTemplateRequest == false) return false; GetModelTemplateRequest other = (GetModelTemplateRequest) obj; if (other.getApiId() == null ^ this.getApiId() == null) return false; if (other.getApiId() != null && other.getApiId().equals(this.getApiId()) == false) return false; if (other.getModelId() == null ^ this.getModelId() == null) return false; if (other.getModelId() != null && other.getModelId().equals(this.getModelId()) == false) return false; return true; } @Override public int hashCode() { final int prime = 31; int hashCode = 1; hashCode = prime * hashCode + ((getApiId() == null) ? 0 : getApiId().hashCode()); hashCode = prime * hashCode + ((getModelId() == null) ? 0 : getModelId().hashCode()); return hashCode; } @Override public GetModelTemplateRequest clone() { return (GetModelTemplateRequest) super.clone(); } }
jentfoo/aws-sdk-java
aws-java-sdk-apigatewayv2/src/main/java/com/amazonaws/services/apigatewayv2/model/GetModelTemplateRequest.java
Java
apache-2.0
4,405
/*
 * Copyright (c) 2017. ThanksMister LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed
 * under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.thanksmister.iot.mqtt.alarmpanel.network.fetchers;

import android.support.annotation.NonNull;

import com.thanksmister.iot.mqtt.alarmpanel.network.ImageApi;
import com.thanksmister.iot.mqtt.alarmpanel.network.model.ImageResponse;

import retrofit2.Call;

public class ImageFetcher {

    private final ImageApi networkApi;

    public ImageFetcher(@NonNull ImageApi networkApi) {
        this.networkApi = networkApi;
    }

    public Call<ImageResponse> getImagesByTag(final String clientId, final String tag) {
        return networkApi.getImagesByTag(clientId, tag);
    }
}
thanksmister/androidthings-mqtt-alarm-panel
app/src/main/java/com/thanksmister/iot/mqtt/alarmpanel/network/fetchers/ImageFetcher.java
Java
apache-2.0
1,193
/** * Copyright 2015 CANAL+ Group * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** * /!\ This file is feature-switchable. * It always should be imported through the `features` object. */ import parseTTMLStringToVTT from "./parse_ttml_to_vtt"; export default parseTTMLStringToVTT;
canalplus/rx-player
src/parsers/texttracks/ttml/native/index.ts
TypeScript
apache-2.0
803
<?xml version="1.0" encoding="{0}"?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="content-type" content="text/html; charset={0}" /> <title>{1}</title> <style type="text/css"> a:link, a:visited '{' text-decoration: none; '}' a:hover '{' background-color: white; '}' .Satz '{' background-color: #f2cca3; font-family: monospace; margin-bottom: 1px; white-space: pre; '}' .AlphaNumFeld '{' background-color: #f1cca3; '}' .Betrag '{' background-color: #e0f1b5; '}' .BetragMitVorzeichen '{' background-color: #f8f1b5; '}' .Bezeichner '{' background-color: #f2d991; '}' .Datum '{' background-color: #ccedf1; '}' .NumFeld '{' background-color: #f1f1cd; '}' .Undefiniert '{' background-color: #ffa0a0; '}' .Version '{' background-color: #f1ccd8; '}' .VUNummer '{' background-color: #ffffc0; '}' .Zeichen '{' background-color: #f9eeda; '}' </style> </head> <body> <h2>{1}</h2> {2} <h3>Legende</h3> <p> <span class="AlphaNumFeld" style="white-space: pre;"> Alphanumerisches Feld </span> <span class="Betrag" style="white-space: pre;"> Betrag </span> <span class="BetragMitVorzeichen" style="white-space: pre;"> Betrag mit Vorzeichen </span> <span class="NumFeld" style="white-space: pre;"> numerisches Feld </span> <span class="Datum" style="white-space: pre;"> Datum </span> <span class="Bezeichner" style="white-space: pre;"> Bezeichner </span> <span class="Version" style="white-space: pre;"> Version </span> <span class="VUNummer" style="white-space: pre;"> VU-Nummer </span> <span class="Zeichen" style="white-space: pre;"> Einzelnes Zeichen </span> <span class="Undefiniert" style="white-space: pre;"> Undefiniert </span> </p> <hr/> <h2>Details</h2> {3} <h2>Legende</h2> <p> <span class="AlphaNumFeld" style="white-space: pre;"> Alphanumerisches Feld </span> <span class="Betrag" style="white-space: pre;"> Betrag </span> <span class="BetragMitVorzeichen" style="white-space: pre;"> Betrag mit Vorzeichen </span> <span class="NumFeld" style="white-space: pre;"> numerisches Feld 
</span> <span class="Datum" style="white-space: pre;"> Datum </span> <span class="Bezeichner" style="white-space: pre;"> Bezeichner </span> <span class="Version" style="white-space: pre;"> Version </span> <span class="VUNummer" style="white-space: pre;"> VU-Nummer </span> <span class="Zeichen" style="white-space: pre;"> Einzelnes Zeichen </span> <span class="Undefiniert" style="white-space: pre;"> Undefiniert </span> </p> </body> </html>
oboehm/gdv.xport
lib/src/main/resources/gdv/xport/util/template.html
HTML
apache-2.0
2,672
function toggleMenuItem(item, selector) { if ($(item).parent().next(selector).hasClass('in')) { $(selector).collapse('hide'); } else { $(selector).collapse('show'); } } function refreshView() { RaiseXafCallback(globalCallbackControl, "", "XafParentWindowRefresh", "", false); }
Terricks/XAFBootstrap
XAF Bootstrap/Content/bootstrap_js/bootstrap-dx.js
JavaScript
apache-2.0
298
#!/usr/bin/env bash # Copyright 2018 The Knative Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This is a collection of useful bash functions and constants, intended # to be used in test scripts and the like. It doesn't do anything when # called from command line. # GCP project where all tests related resources live readonly KNATIVE_TESTS_PROJECT=knative-tests # Conveniently set GOPATH if unset if [[ ! -v GOPATH ]]; then export GOPATH="$(go env GOPATH)" if [[ -z "${GOPATH}" ]]; then echo "WARNING: GOPATH not set and go binary unable to provide it" fi fi # Useful environment variables [[ -v PROW_JOB_ID ]] && IS_PROW=1 || IS_PROW=0 readonly IS_PROW [[ ! -v REPO_ROOT_DIR ]] && REPO_ROOT_DIR="$(git rev-parse --show-toplevel)" readonly REPO_ROOT_DIR readonly REPO_NAME="$(basename ${REPO_ROOT_DIR})" # Useful flags about the current OS IS_LINUX=0 IS_OSX=0 IS_WINDOWS=0 case "${OSTYPE}" in darwin*) IS_OSX=1 ;; linux*) IS_LINUX=1 ;; msys*) IS_WINDOWS=1 ;; *) echo "** Internal error in library.sh, unknown OS '${OSTYPE}'" ; exit 1 ;; esac readonly IS_LINUX readonly IS_OSX readonly IS_WINDOWS # Set ARTIFACTS to an empty temp dir if unset if [[ -z "${ARTIFACTS:-}" ]]; then export ARTIFACTS="$(mktemp -d)" fi # On a Prow job, redirect stderr to stdout so it's synchronously added to log (( IS_PROW )) && exec 2>&1 # Print error message and exit 1 # Parameters: $1..$n - error message to be displayed function abort() { echo "error: $*" exit 1 } # Display a box banner. 
# Parameters: $1 - character to use for the box. # $2 - banner message. function make_banner() { local msg="$1$1$1$1 $2 $1$1$1$1" local border="${msg//[-0-9A-Za-z _.,\/()\']/$1}" echo -e "${border}\n${msg}\n${border}" # TODO(adrcunha): Remove once logs have timestamps on Prow # For details, see https://github.com/kubernetes/test-infra/issues/10100 echo -e "$1$1$1$1 $(TZ='America/Los_Angeles' date)\n${border}" } # Simple header for logging purposes. function header() { local upper="$(echo $1 | tr a-z A-Z)" make_banner "=" "${upper}" } # Simple subheader for logging purposes. function subheader() { make_banner "-" "$1" } # Simple warning banner for logging purposes. function warning() { make_banner "!" "$1" } # Checks whether the given function exists. function function_exists() { [[ "$(type -t $1)" == "function" ]] } # Waits until the given object doesn't exist. # Parameters: $1 - the kind of the object. # $2 - object's name. # $3 - namespace (optional). function wait_until_object_does_not_exist() { local KUBECTL_ARGS="get $1 $2" local DESCRIPTION="$1 $2" if [[ -n $3 ]]; then KUBECTL_ARGS="get -n $3 $1 $2" DESCRIPTION="$1 $3/$2" fi echo -n "Waiting until ${DESCRIPTION} does not exist" for i in {1..150}; do # timeout after 5 minutes if ! kubectl ${KUBECTL_ARGS} > /dev/null 2>&1; then echo -e "\n${DESCRIPTION} does not exist" return 0 fi echo -n "." sleep 2 done echo -e "\n\nERROR: timeout waiting for ${DESCRIPTION} not to exist" kubectl ${KUBECTL_ARGS} return 1 } # Waits until all pods are running in the given namespace. # Parameters: $1 - namespace. function wait_until_pods_running() { echo -n "Waiting until all pods in namespace $1 are up" local failed_pod="" for i in {1..150}; do # timeout after 5 minutes # List all pods. Ignore Terminating pods as those have either been replaced through # a deployment or terminated on purpose (through chaosduck for example). 
local pods="$(kubectl get pods --no-headers -n $1 2>/dev/null | grep -v Terminating)" # All pods must be running (ignore ImagePull error to allow the pod to retry) local not_running_pods=$(echo "${pods}" | grep -v Running | grep -v Completed | grep -v ErrImagePull | grep -v ImagePullBackOff) if [[ -n "${pods}" ]] && [[ -z "${not_running_pods}" ]]; then # All Pods are running or completed. Verify the containers on each Pod. local all_ready=1 while read pod ; do local status=(`echo -n ${pod} | cut -f2 -d' ' | tr '/' ' '`) # Set this Pod as the failed_pod. If nothing is wrong with it, then after the checks, set # failed_pod to the empty string. failed_pod=$(echo -n "${pod}" | cut -f1 -d' ') # All containers must be ready [[ -z ${status[0]} ]] && all_ready=0 && break [[ -z ${status[1]} ]] && all_ready=0 && break [[ ${status[0]} -lt 1 ]] && all_ready=0 && break [[ ${status[1]} -lt 1 ]] && all_ready=0 && break [[ ${status[0]} -ne ${status[1]} ]] && all_ready=0 && break # All the tests passed, this is not a failed pod. failed_pod="" done <<< "$(echo "${pods}" | grep -v Completed)" if (( all_ready )); then echo -e "\nAll pods are up:\n${pods}" return 0 fi elif [[ -n "${not_running_pods}" ]]; then # At least one Pod is not running, just save the first one's name as the failed_pod. failed_pod="$(echo "${not_running_pods}" | head -n 1 | cut -f1 -d' ')" fi echo -n "." sleep 2 done echo -e "\n\nERROR: timeout waiting for pods to come up\n${pods}" if [[ -n "${failed_pod}" ]]; then echo -e "\n\nFailed Pod (data in YAML format) - ${failed_pod}\n" kubectl -n $1 get pods "${failed_pod}" -oyaml echo -e "\n\nPod Logs\n" kubectl -n $1 logs "${failed_pod}" --all-containers fi return 1 } # Waits until all batch jobs complete in the given namespace. # Parameters: $1 - namespace. function wait_until_batch_job_complete() { echo -n "Waiting until all batch jobs in namespace $1 run to completion." 
for i in {1..150}; do # timeout after 5 minutes local jobs=$(kubectl get jobs -n $1 --no-headers \ -ocustom-columns='n:{.metadata.name},c:{.spec.completions},s:{.status.succeeded}') # All jobs must be complete local not_complete=$(echo "${jobs}" | awk '{if ($2!=$3) print $0}' | wc -l) if [[ ${not_complete} -eq 0 ]]; then echo -e "\nAll jobs are complete:\n${jobs}" return 0 fi echo -n "." sleep 2 done echo -e "\n\nERROR: timeout waiting for jobs to complete\n${jobs}" return 1 } # Waits until the given service has an external address (IP/hostname). # Parameters: $1 - namespace. # $2 - service name. function wait_until_service_has_external_ip() { echo -n "Waiting until service $2 in namespace $1 has an external address (IP/hostname)" for i in {1..150}; do # timeout after 15 minutes local ip=$(kubectl get svc -n $1 $2 -o jsonpath="{.status.loadBalancer.ingress[0].ip}") if [[ -n "${ip}" ]]; then echo -e "\nService $2.$1 has IP $ip" return 0 fi local hostname=$(kubectl get svc -n $1 $2 -o jsonpath="{.status.loadBalancer.ingress[0].hostname}") if [[ -n "${hostname}" ]]; then echo -e "\nService $2.$1 has hostname $hostname" return 0 fi echo -n "." sleep 6 done echo -e "\n\nERROR: timeout waiting for service $2.$1 to have an external address" kubectl get pods -n $1 return 1 } # Waits until the given service has an external address (IP/hostname) that allow HTTP connections. # Parameters: $1 - namespace. # $2 - service name. 
function wait_until_service_has_external_http_address() { local ns=$1 local svc=$2 local sleep_seconds=6 local attempts=150 echo -n "Waiting until service $ns/$svc has an external address (IP/hostname)" for attempt in $(seq 1 $attempts); do # timeout after 15 minutes local address=$(kubectl get svc $svc -n $ns -o jsonpath="{.status.loadBalancer.ingress[0].ip}") if [[ -n "${address}" ]]; then echo -e "Service $ns/$svc has IP $address" else address=$(kubectl get svc $svc -n $ns -o jsonpath="{.status.loadBalancer.ingress[0].hostname}") if [[ -n "${address}" ]]; then echo -e "Service $ns/$svc has hostname $address" fi fi if [[ -n "${address}" ]]; then local status=$(curl -s -o /dev/null -w "%{http_code}" http://"${address}") if [[ $status != "" && $status != "000" ]]; then echo -e "$address is ready: prober observed HTTP $status" return 0 else echo -e "$address is not ready: prober observed HTTP $status" fi fi echo -n "." sleep $sleep_seconds done echo -e "\n\nERROR: timeout waiting for service $ns/$svc to have an external HTTP address" return 1 } # Waits for the endpoint to be routable. # Parameters: $1 - External ingress IP address. # $2 - cluster hostname. function wait_until_routable() { echo -n "Waiting until cluster $2 at $1 has a routable endpoint" for i in {1..150}; do # timeout after 5 minutes local val=$(curl -H "Host: $2" "http://$1" 2>/dev/null) if [[ -n "$val" ]]; then echo -e "\nEndpoint is now routable" return 0 fi echo -n "." sleep 2 done echo -e "\n\nERROR: Timed out waiting for endpoint to be routable" return 1 } # Returns the name of the first pod of the given app. # Parameters: $1 - app name. # $2 - namespace (optional). function get_app_pod() { local pods=($(get_app_pods $1 $2)) echo "${pods[0]}" } # Returns the name of all pods of the given app. # Parameters: $1 - app name. # $2 - namespace (optional). 
function get_app_pods() { local namespace="" [[ -n $2 ]] && namespace="-n $2" kubectl get pods ${namespace} --selector=app=$1 --output=jsonpath="{.items[*].metadata.name}" } # Capitalize the first letter of each word. # Parameters: $1..$n - words to capitalize. function capitalize() { local capitalized=() for word in $@; do local initial="$(echo ${word:0:1}| tr 'a-z' 'A-Z')" capitalized+=("${initial}${word:1}") done echo "${capitalized[@]}" } # Dumps pod logs for the given app. # Parameters: $1 - app name. # $2 - namespace. function dump_app_logs() { echo ">>> ${REPO_NAME_FORMATTED} $1 logs:" for pod in $(get_app_pods "$1" "$2") do echo ">>> Pod: $pod" kubectl -n "$2" logs "$pod" --all-containers done } # Sets the given user as cluster admin. # Parameters: $1 - user # $2 - cluster name # $3 - cluster region # $4 - cluster zone, optional function acquire_cluster_admin_role() { echo "Acquiring cluster-admin role for user '$1'" local geoflag="--region=$3" [[ -n $4 ]] && geoflag="--zone=$3-$4" # Get the password of the admin and use it, as the service account (or the user) # might not have the necessary permission. 
local password=$(gcloud --format="value(masterAuth.password)" \ container clusters describe $2 ${geoflag}) if [[ -n "${password}" ]]; then # Cluster created with basic authentication kubectl config set-credentials cluster-admin \ --username=admin --password=${password} else local cert=$(mktemp) local key=$(mktemp) echo "Certificate in ${cert}, key in ${key}" gcloud --format="value(masterAuth.clientCertificate)" \ container clusters describe $2 ${geoflag} | base64 --decode > ${cert} gcloud --format="value(masterAuth.clientKey)" \ container clusters describe $2 ${geoflag} | base64 --decode > ${key} kubectl config set-credentials cluster-admin \ --client-certificate=${cert} --client-key=${key} fi kubectl config set-context $(kubectl config current-context) \ --user=cluster-admin kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole=cluster-admin \ --user=$1 # Reset back to the default account gcloud container clusters get-credentials \ $2 ${geoflag} --project $(gcloud config get-value project) } # Run a command through tee and capture its output. # Parameters: $1 - file where the output will be stored. # $2... - command to run. function capture_output() { local report="$1" shift "$@" 2>&1 | tee "${report}" local failed=( ${PIPESTATUS[@]} ) [[ ${failed[0]} -eq 0 ]] && failed=${failed[1]} || failed=${failed[0]} return ${failed} } # Print failed step, which could be highlighted by spyglass. # Parameters: $1...n - description of step that failed function step_failed() { local spyglass_token="Step failed:" echo "${spyglass_token} $@" } # Create a temporary file with the given extension in a way that works on both Linux and macOS. # Parameters: $1 - file name without extension (e.g. 'myfile_XXXX') # $2 - file extension (e.g. 'xml') function mktemp_with_extension() { local nameprefix local fullname nameprefix="$(mktemp $1)" fullname="${nameprefix}.$2" mv ${nameprefix} ${fullname} echo ${fullname} } # Create a JUnit XML for a test. 
# Parameters: $1 - check class name as an identifier (e.g. BuildTests) # $2 - check name as an identifier (e.g., GoBuild) # $3 - failure message (can contain newlines), optional (means success) function create_junit_xml() { local xml xml="$(mktemp_with_extension "${ARTIFACTS}"/junit_XXXXXXXX xml)" echo "JUnit file ${xml} is created for reporting the test result" run_kntest junit --suite="$1" --name="$2" --err-msg="$3" --dest="${xml}" || return 1 } # Runs a go test and generate a junit summary. # Parameters: $1... - parameters to go test function report_go_test() { local go_test_args=( "$@" ) # Install gotestsum if necessary. run_go_tool gotest.tools/gotestsum gotestsum --help > /dev/null 2>&1 # Capture the test output to the report file. local report report="$(mktemp)" local xml xml="$(mktemp_with_extension "${ARTIFACTS}"/junit_XXXXXXXX xml)" local json json="$(mktemp_with_extension "${ARTIFACTS}"/json_XXXXXXXX json)" echo "Running go test with args: ${go_test_args[*]}" # TODO(chizhg): change to `--format testname`? capture_output "${report}" gotestsum --format standard-verbose \ --junitfile "${xml}" --junitfile-testsuite-name relative --junitfile-testcase-classname relative \ --jsonfile "${json}" \ -- "${go_test_args[@]}" local failed=$? echo "Finished run, return code is ${failed}" echo "XML report written to ${xml}" if [[ -n "$(grep '<testsuites></testsuites>' "${xml}")" ]]; then # XML report is empty, something's wrong; use the output as failure reason create_junit_xml _go_tests "GoTests" "$(cat "${report}")" fi # Capture and report any race condition errors local race_errors race_errors="$(sed -n '/^WARNING: DATA RACE$/,/^==================$/p' "${report}")" create_junit_xml _go_tests "DataRaceAnalysis" "${race_errors}" if (( ! IS_PROW )); then # Keep the suffix, so files are related. 
local logfile=${xml/junit_/go_test_} logfile=${logfile/.xml/.log} cp "${report}" "${logfile}" echo "Test log written to ${logfile}" fi return ${failed} } # Install Knative Serving in the current cluster. # Parameters: $1 - Knative Serving crds manifest. # $2 - Knative Serving core manifest. # $3 - Knative net-istio manifest. function start_knative_serving() { header "Starting Knative Serving" subheader "Installing Knative Serving" echo "Installing Serving CRDs from $1" kubectl apply -f "$1" echo "Installing Serving core components from $2" kubectl apply -f "$2" echo "Installing net-istio components from $3" kubectl apply -f "$3" wait_until_pods_running knative-serving || return 1 } # Install Knative Monitoring in the current cluster. # Parameters: $1 - Knative Monitoring manifest. function start_knative_monitoring() { header "Starting Knative Monitoring" subheader "Installing Knative Monitoring" # namespace istio-system needs to be created first, due to the comment # mentioned in # https://github.com/knative/serving/blob/4202efc0dc12052edc0630515b101cbf8068a609/config/monitoring/tracing/zipkin/100-zipkin.yaml#L21 kubectl create namespace istio-system 2>/dev/null echo "Installing Monitoring from $1" kubectl apply -f "$1" || return 1 wait_until_pods_running knative-monitoring || return 1 wait_until_pods_running istio-system || return 1 } # Install the stable release Knative/serving in the current cluster. # Parameters: $1 - Knative Serving version number, e.g. 0.6.0. function start_release_knative_serving() { start_knative_serving "https://storage.googleapis.com/knative-releases/serving/previous/v$1/serving-crds.yaml" \ "https://storage.googleapis.com/knative-releases/serving/previous/v$1/serving-core.yaml" \ "https://storage.googleapis.com/knative-releases/net-istio/previous/v$1/net-istio.yaml" } # Install the latest stable Knative Serving in the current cluster. 
function start_latest_knative_serving() { start_knative_serving "${KNATIVE_SERVING_RELEASE_CRDS}" "${KNATIVE_SERVING_RELEASE_CORE}" "${KNATIVE_NET_ISTIO_RELEASE}" } # Install Knative Eventing in the current cluster. # Parameters: $1 - Knative Eventing manifest. function start_knative_eventing() { header "Starting Knative Eventing" subheader "Installing Knative Eventing" echo "Installing Eventing CRDs from $1" kubectl apply --selector knative.dev/crd-install=true -f "$1" echo "Installing the rest of eventing components from $1" kubectl apply -f "$1" wait_until_pods_running knative-eventing || return 1 } # Install the stable release Knative/eventing in the current cluster. # Parameters: $1 - Knative Eventing version number, e.g. 0.6.0. function start_release_knative_eventing() { start_knative_eventing "https://storage.googleapis.com/knative-releases/eventing/previous/v$1/eventing.yaml" } # Install the latest stable Knative Eventing in the current cluster. function start_latest_knative_eventing() { start_knative_eventing "${KNATIVE_EVENTING_RELEASE}" } # Install Knative Eventing extension in the current cluster. # Parameters: $1 - Knative Eventing extension manifest. # $2 - Namespace to look for ready pods into function start_knative_eventing_extension() { header "Starting Knative Eventing Extension" echo "Installing Extension CRDs from $1" kubectl apply -f "$1" wait_until_pods_running "$2" || return 1 } # Install the stable release of eventing extension sugar controller in the current cluster. # Parameters: $1 - Knative Eventing release version, e.g. 
0.16.0 function start_release_eventing_sugar_controller() { start_knative_eventing_extension "https://storage.googleapis.com/knative-releases/eventing/previous/v$1/eventing-sugar-controller.yaml" "knative-eventing" } # Install the sugar controller eventing extension function start_latest_eventing_sugar_controller() { start_knative_eventing_extension "${KNATIVE_EVENTING_SUGAR_CONTROLLER_RELEASE}" "knative-eventing" } # Run a go tool, installing it first if necessary. # Parameters: $1 - tool package/dir for go get/install. # $2 - tool to run. # $3..$n - parameters passed to the tool. function run_go_tool() { local tool=$2 local install_failed=0 if [[ -z "$(which ${tool})" ]]; then local action=get [[ $1 =~ ^[\./].* ]] && action=install # Avoid running `go get` from root dir of the repository, as it can change go.sum and go.mod files. # See discussions in https://github.com/golang/go/issues/27643. if [[ ${action} == "get" && $(pwd) == "${REPO_ROOT_DIR}" ]]; then local temp_dir="$(mktemp -d)" # Swallow the output as we are returning the stdout in the end. pushd "${temp_dir}" > /dev/null 2>&1 GOFLAGS="" go ${action} "$1" || install_failed=1 popd > /dev/null 2>&1 else GOFLAGS="" go ${action} "$1" || install_failed=1 fi fi (( install_failed )) && return ${install_failed} shift 2 ${tool} "$@" } # Add function call to trap # Parameters: $1 - Function to call # $2...$n - Signals for trap function add_trap { local cmd=$1 shift for trap_signal in "$@"; do local current_trap current_trap="$(trap -p "$trap_signal" | cut -d\' -f2)" local new_cmd="($cmd)" [[ -n "${current_trap}" ]] && new_cmd="${current_trap};${new_cmd}" trap -- "${new_cmd}" "$trap_signal" done } # Run kntest tool, error out and ask users to install it if it's not currently installed. # Parameters: $1..$n - parameters passed to the tool. function run_kntest() { # If the current repo is test-infra, run kntest from source.
if [[ "${REPO_NAME}" == "test-infra" ]]; then go run "${REPO_ROOT_DIR}"/kntest/cmd/kntest "$@" # Otherwise kntest must be installed. else if [[ ! -x "$(command -v kntest)" ]]; then echo "--- FAIL: kntest not installed, please clone test-infra repo and run \`go install ./kntest/cmd/kntest\` to install it"; return 1; fi kntest "$@" fi } # Run go-licenses to update licenses. # Parameters: $1 - output file, relative to repo root dir. # $2 - directory to inspect. function update_licenses() { cd "${REPO_ROOT_DIR}" || return 1 local dst=$1 local dir=$2 shift run_go_tool github.com/google/go-licenses go-licenses save "${dir}" --save_path="${dst}" --force || \ { echo "--- FAIL: go-licenses failed to update licenses"; return 1; } # Hack to make sure directories retain write permissions after save. This # can happen if the directory being copied is a Go module. # See https://github.com/google/go-licenses/issues/11 chmod -R +w "${dst}" } # Run go-licenses to check for forbidden licenses. function check_licenses() { # Check that we don't have any forbidden licenses. run_go_tool github.com/google/go-licenses go-licenses check "${REPO_ROOT_DIR}/..." || \ { echo "--- FAIL: go-licenses failed the license check"; return 1; } } # Run the given linter on the given files, checking it exists first. # Parameters: $1 - tool # $2 - tool purpose (for error message if tool not installed) # $3 - tool parameters (quote if multiple parameters used) # $4..$n - files to run linter on function run_lint_tool() { local checker=$1 local params=$3 if ! hash ${checker} 2>/dev/null; then warning "${checker} not installed, not $2" return 127 fi shift 3 local failed=0 for file in $@; do ${checker} ${params} ${file} || failed=1 done return ${failed} } # Check links in the given markdown files. # Parameters: $1...$n - files to inspect function check_links_in_markdown() { # https://github.com/raviqqe/liche local config="${REPO_ROOT_DIR}/test/markdown-link-check-config.rc" [[ ! 
-e ${config} ]] && config="${_TEST_INFRA_SCRIPTS_DIR}/markdown-link-check-config.rc" local options="$(grep '^-' ${config} | tr \"\n\" ' ')" run_lint_tool liche "checking links in markdown files" "-d ${REPO_ROOT_DIR} ${options}" $@ } # Check format of the given markdown files. # Parameters: $1..$n - files to inspect function lint_markdown() { # https://github.com/markdownlint/markdownlint local config="${REPO_ROOT_DIR}/test/markdown-lint-config.rc" [[ ! -e ${config} ]] && config="${_TEST_INFRA_SCRIPTS_DIR}/markdown-lint-config.rc" run_lint_tool mdl "linting markdown files" "-c ${config}" $@ } # Return whether the given parameter is an integer. # Parameters: $1 - integer to check function is_int() { [[ -n $1 && $1 =~ ^[0-9]+$ ]] } # Return whether the given parameter is the knative release/nightly GCF. # Parameters: $1 - full GCR name, e.g. gcr.io/knative-foo-bar function is_protected_gcr() { [[ -n $1 && $1 =~ ^gcr.io/knative-(releases|nightly)/?$ ]] } # Return whether the given parameter is any cluster under ${KNATIVE_TESTS_PROJECT}. # Parameters: $1 - Kubernetes cluster context (output of kubectl config current-context) function is_protected_cluster() { # Example: gke_knative-tests_us-central1-f_prow [[ -n $1 && $1 =~ ^gke_${KNATIVE_TESTS_PROJECT}_us\-[a-zA-Z0-9]+\-[a-z]+_[a-z0-9\-]+$ ]] } # Return whether the given parameter is ${KNATIVE_TESTS_PROJECT}. # Parameters: $1 - project name function is_protected_project() { [[ -n $1 && "$1" == "${KNATIVE_TESTS_PROJECT}" ]] } # Remove symlinks in a path that are broken or lead outside the repo. # Parameters: $1 - path name, e.g. vendor function remove_broken_symlinks() { for link in $(find $1 -type l); do # Remove broken symlinks if [[ ! 
-e ${link} ]]; then unlink ${link} continue fi # Get canonical path to target, remove if outside the repo local target="$(ls -l ${link})" target="${target##* -> }" [[ ${target} == /* ]] || target="./${target}" target="$(cd `dirname "${link}"` && cd "${target%/*}" && echo "$PWD"/"${target##*/}")" if [[ ${target} != *github.com/knative/* && ${target} != *knative.dev/* ]]; then unlink "${link}" continue fi done } # Returns the canonical path of a filesystem object. # Parameters: $1 - path to return in canonical form # $2 - base dir for relative links; optional, defaults to current function get_canonical_path() { # We don't use readlink because it's not available on every platform. local path=$1 local pwd=${2:-.} [[ ${path} == /* ]] || path="${pwd}/${path}" echo "$(cd "${path%/*}" && echo "$PWD"/"${path##*/}")" } # List changed files in the current PR. # This is implemented as a function so it can be mocked in unit tests. # It will fail if a file name ever contained a newline character (which is bad practice anyway) function list_changed_files() { if [[ -v PULL_BASE_SHA ]] && [[ -v PULL_PULL_SHA ]]; then # Avoid warning when there are more than 1085 files renamed: # https://stackoverflow.com/questions/7830728/warning-on-diff-renamelimit-variable-when-doing-git-push git config diff.renames 0 git --no-pager diff --name-only "${PULL_BASE_SHA}".."${PULL_PULL_SHA}" else # Do our best if not running in Prow git diff --name-only HEAD^ fi } # Returns the current branch. function current_branch() { local branch_name="" # Get the branch name from Prow's env var, see https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md. # Otherwise, try getting the current branch from git. (( IS_PROW )) && branch_name="${PULL_BASE_REF:-}" [[ -z "${branch_name}" ]] && branch_name="$(git rev-parse --abbrev-ref HEAD)" echo "${branch_name}" } # Returns whether the current branch is a release branch. 
function is_release_branch() { [[ $(current_branch) =~ ^release-[0-9\.]+$ ]] } # Returns the URL to the latest manifest for the given Knative project. # Parameters: $1 - repository name of the given project # $2 - name of the yaml file, without extension function get_latest_knative_yaml_source() { local repo_name="$1" local yaml_name="$2" # If it's a release branch, the yaml source URL should point to a specific version. if is_release_branch; then # Extract the release major&minor version from the branch name. local branch_name="$(current_branch)" local major_minor="${branch_name##release-}" # Find the latest release manifest with the same major&minor version. local yaml_source_path="$( gsutil ls "gs://knative-releases/${repo_name}/previous/v${major_minor}.*/${yaml_name}.yaml" 2> /dev/null \ | sort \ | tail -n 1 \ | cut -b6-)" # The version does exist, return it. if [[ -n "${yaml_source_path}" ]]; then echo "https://storage.googleapis.com/${yaml_source_path}" return fi # Otherwise, fall back to nightly. fi echo "https://storage.googleapis.com/knative-nightly/${repo_name}/latest/${yaml_name}.yaml" } function shellcheck_new_files() { declare -a array_of_files local failed=0 readarray -t -d '\n' array_of_files < <(list_changed_files) for filename in "${array_of_files[@]}"; do if echo "${filename}" | grep -q "^vendor/"; then continue fi if file "${filename}" | grep -q "shell script"; then # SC1090 is "Can't follow non-constant source"; we will scan files individually if shellcheck -e SC1090 "${filename}"; then echo "--- PASS: shellcheck on ${filename}" else echo "--- FAIL: shellcheck on ${filename}" failed=1 fi fi done if [[ ${failed} -eq 1 ]]; then fail_script "shellcheck failures" fi } # Initializations that depend on previous functions. # These MUST come last. readonly _TEST_INFRA_SCRIPTS_DIR="$(dirname $(get_canonical_path "${BASH_SOURCE[0]}"))" readonly REPO_NAME_FORMATTED="Knative $(capitalize "${REPO_NAME//-/ }")" # Public latest nightly or release yaml files.
readonly KNATIVE_SERVING_RELEASE_CRDS="$(get_latest_knative_yaml_source "serving" "serving-crds")" readonly KNATIVE_SERVING_RELEASE_CORE="$(get_latest_knative_yaml_source "serving" "serving-core")" readonly KNATIVE_NET_ISTIO_RELEASE="$(get_latest_knative_yaml_source "net-istio" "net-istio")" readonly KNATIVE_EVENTING_RELEASE="$(get_latest_knative_yaml_source "eventing" "eventing")" readonly KNATIVE_MONITORING_RELEASE="$(get_latest_knative_yaml_source "serving" "monitoring")" readonly KNATIVE_EVENTING_SUGAR_CONTROLLER_RELEASE="$(get_latest_knative_yaml_source "eventing" "eventing-sugar-controller")"
googleinterns/knative-source-mongodb
vendor/knative.dev/test-infra/scripts/library.sh
Shell
apache-2.0
29,502
package uk.co.ourfriendirony.medianotifier.clients.rawg.game.search;

import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonPropertyOrder({
        "count",
        "next",
        "previous",
        "results",
        "user_platforms"
})
public class GameSearch {

    @JsonProperty("count")
    private Integer count;

    @JsonProperty("next")
    private String next;

    @JsonProperty("previous")
    private Object previous;

    @JsonProperty("results")
    private List<GameSearchResult> results = null;

    @JsonProperty("user_platforms")
    private Boolean userPlatforms;

    @JsonIgnore
    private Map<String, Object> additionalProperties = new HashMap<String, Object>();

    @JsonProperty("count")
    public Integer getCount() {
        return count;
    }

    @JsonProperty("count")
    public void setCount(Integer count) {
        this.count = count;
    }

    @JsonProperty("next")
    public String getNext() {
        return next;
    }

    @JsonProperty("next")
    public void setNext(String next) {
        this.next = next;
    }

    @JsonProperty("previous")
    public Object getPrevious() {
        return previous;
    }

    @JsonProperty("previous")
    public void setPrevious(Object previous) {
        this.previous = previous;
    }

    @JsonProperty("results")
    public List<GameSearchResult> getResults() {
        return results;
    }

    @JsonProperty("results")
    public void setResults(List<GameSearchResult> results) {
        this.results = results;
    }

    @JsonProperty("user_platforms")
    public Boolean getUserPlatforms() {
        return userPlatforms;
    }

    @JsonProperty("user_platforms")
    public void setUserPlatforms(Boolean userPlatforms) {
        this.userPlatforms = userPlatforms;
    }

    @JsonAnyGetter
    public Map<String, Object> getAdditionalProperties() {
        return this.additionalProperties;
    }

    @JsonAnySetter
    public void setAdditionalProperty(String name, Object value) {
        this.additionalProperties.put(name, value);
    }

}
OurFriendIrony/MediaNotifier
app/src/main/java/uk/co/ourfriendirony/medianotifier/clients/rawg/game/search/GameSearch.java
Java
apache-2.0
2,438
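The `GameSearch` POJO above binds a RAWG-style search response via Jackson. As a rough sketch of the JSON shape it expects — field names taken from its `@JsonProperty` annotations, sample values invented for illustration:

```python
import json

# Payload shaped like the response the GameSearch class above maps.
# Keys come from the @JsonProperty annotations; values are made up.
payload = """
{
  "count": 2,
  "next": "https://api.rawg.io/api/games?page=2&search=zelda",
  "previous": null,
  "results": [{"name": "Game A"}, {"name": "Game B"}],
  "user_platforms": false
}
"""

search = json.loads(payload)

# Mirrors what Jackson binds into the typed fields:
# count -> Integer, next -> String, results -> List<GameSearchResult>.
print(search["count"])
print(search["next"])
print(len(search["results"]))
```

Any key not listed above would land in `additionalProperties` through the `@JsonAnySetter` hook, which is why the class survives API fields it doesn't model.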
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Payment System</title>

    <!-- Bootstrap -->
    <link rel="stylesheet" href="css/bootstrap.min.css">
    <link rel="stylesheet" href="css/bootstrap-theme.min.css">
    <link rel="stylesheet" href="css/datepicker.css">
    <link rel="stylesheet" href="css/datepicker3.css">
    <link rel="stylesheet" href="css/rentpayment.css">

    <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
    <!--[if lt IE 9]>
      <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
      <script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
    <![endif]-->
  </head>
  <body>
    <div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
      <div class="container">
        <div class="navbar-header">
          <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
            <span class="sr-only">Navigation</span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
            <span class="icon-bar"></span>
          </button>
          <a class="navbar-brand" href="accounthome.html">Payment System</a>
        </div>
        <div class="collapse navbar-collapse navbar-right">
          <ul class="nav navbar-nav">
            <li><a href="index.html">Logout</a></li>
            <li><a href="profile.html">Jermaine Davis</a></li>
          </ul>
        </div>
      </div>
    </div>

    <div class="container">
      <div class="col-md-2"></div>
      <div class="col-md-8">
        <form class="form-signin" role="form">
          <h2 class="form-signin-heading">Schedule Payment</h2>
          <div class="input-group form-input-group">
            <span class="input-group-addon">$</span>
            <input id="paymentamount" type="text" class="form-control" placeholder="Payment Amount" required autofocus>
          </div>
          <div class="input-group form-input-group">
            <div id="accountContainer" class="input-group-btn dropdown">
              <button type="button" class="btn btn-default dropdown-toggle form-control" id="accountDropdown" data-toggle="dropdown"><span class="caret"></span></button>
              <ul class="dropdown-menu" role="menu" aria-labelledby="accountDropdown">
                <li role="presentation"><a role="menuitem" tabindex="-1" href="#" data-account="1234">XXXX-XXXX-XXXX-1234</a></li>
                <li role="presentation"><a role="menuitem" tabindex="-1" href="#" data-account="1478">XXXX-XXXX-XXXX-1478</a></li>
                <li role="presentation"><a role="menuitem" tabindex="-1" href="#" data-account="9514">XXXX-XXXX-XXXX-9514</a></li>
              </ul>
            </div>
            <input type="text" id="paymentaccount" class="form-control" placeholder="Choose Account" required>
          </div>
          <div class="input-group date form-input-group">
            <input type="text" id="paymentdate" class="form-control" placeholder="Payment Date" required><span class="input-group-addon"><i class="glyphicon glyphicon-th"></i></span>
          </div>
          <button class="btn btn-lg btn-primary btn-block" type="submit">Schedule</button>
        </form>
      </div>
      <div class="col-md-2"></div>
    </div>

    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
    <script src="js/jquery.min.js"></script>
    <!-- Include all compiled plugins (below), or include individual files as needed -->
    <script src="js/bootstrap.min.js"></script>
    <script src="js/bootstrapdatepicker.js"></script>
    <script src="js/moment.min.js"></script>
    <script src="js/rentpayment.js"></script>
    <script type="text/javascript">
      $('.input-group.date').datepicker({
        daysOfWeekDisabled: "0,6",
        startDate: moment().format("MM/DD/YYYY"),
        autoclose: true,
        todayHighlight: true
      });

      $('.dropdown-menu li a').click(function(){
        $('#paymentaccount').val($(this).text());
      });

      $('#paymentamount').blur(function(){
        var e = $(this);
        if(e.val() !== "" && parseFloat(e.val()) > 0){
          //TODO: Fix handling of decimal input that contains .00
          var data = e.val().match(/[0-9]/g).join('');
          e.val(parseFloat(data).format());
        }
      });
    </script>
  </body>
</html>
mainephd/rentpaymentsystem
public/makepayment.html
HTML
apache-2.0
4,661
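The blur handler in the page above strips every non-digit (`val.match(/[0-9]/g).join('')`), so a decimal point disappears and "10.50" parses as 1050 — the bug its own TODO comment flags. A minimal sketch of the fix, written here in Python for illustration (`parse_amount` is a hypothetical helper, not part of the page's scripts):

```python
def parse_amount(raw: str) -> float:
    """Parse a user-typed payment amount, keeping the decimal point.

    Unlike the page's digit-only regex, '.' stays in the allowed set,
    so "10.50" stays 10.50 instead of collapsing to 1050.
    Assumes at most one decimal point in the input.
    """
    cleaned = "".join(ch for ch in raw if ch.isdigit() or ch == ".")
    return float(cleaned) if cleaned else 0.0
```

The same one-character change applies to the JavaScript regex: `/[0-9.]/g` instead of `/[0-9]/g`.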
# Freycinetia simulatrix B.C.Stone SPECIES

#### Status
ACCEPTED

#### According to
The Catalogue of Life, 3rd January 2011

#### Published in
null

#### Original name
null

### Remarks
null
mdoering/backbone
life/Plantae/Magnoliophyta/Liliopsida/Pandanales/Pandanaceae/Freycinetia/Freycinetia simulatrix/README.md
Markdown
apache-2.0
190
#--
# Copyright (C) 2008 10gen Inc.
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License, version 3, as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License
# for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#++

module XGen
  module Mongo

    # A Mongo database cursor. XGen::Mongo::Cursor is Enumerable.
    #
    # Example:
    #   Person.find(:all).sort({:created_on => 1}).each { |p| puts p.to_s }
    #   n = Thing.find(:all).count()
    #   # note that you can just call Thing.count() instead
    #
    # A Mongo cursor is like Schrödinger's cat: it is neither an array nor an
    # enumerable collection until you use it. It can not be both. Once you
    # reference it as an array (by retrieving a record via index or asking for
    # the length or count), you can't iterate over the contents using +each+.
    # Likewise, once you start iterating over the contents using +each+ you
    # can't ask for the count of the number of records.
    #
    # The sort, limit, and skip methods must be called before resolving the
    # quantum state of a cursor.
    #
    # See XGen::Mongo::Base#find for more information.
    class Cursor
      include Enumerable

      # Forward missing methods to the cursor itself.
      def method_missing(sym, *args, &block)
        return @cursor.send(sym, *args)
      end

      def initialize(db_cursor, model_class)
        @cursor, @model_class = db_cursor, model_class
      end

      # Iterate over the records returned by the query. Each row is turned
      # into the proper XGen::Mongo::Base subclass instance.
      def each
        @cursor.forEach { |row| yield @model_class.new(row) }
      end

      # Return the +index+'th row. The row is turned into the proper
      # XGen::Mongo::Base subclass instance.
      def [](index)
        @model_class.new(@cursor[index])
      end

      # Sort, limit, and skip methods that return self (the cursor) instead of
      # whatever those methods return.
      %w(sort limit skip).each { |name|
        eval "def #{name}(*args); @cursor.#{name}(*args); return self; end"
      }

      # This is for JavaScript code that needs to call toArray on the @cursor.
      def toArray
        @cursor.toArray
      end

    end
  end
end
babble/babble
src/main/ed/lang/ruby/xgen/mongo/cursor.rb
Ruby
apache-2.0
2,654
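The cursor docs above describe a one-way contract: a cursor may be resolved either as an array (index/length/count) or as an enumerable (`each`), never both. A minimal sketch of that contract in Python — `QuantumCursor` is hypothetical, not the XGen driver:

```python
class QuantumCursor:
    """Sketch of the cursor contract documented above: once resolved
    as an array you cannot iterate it, and vice versa."""

    def __init__(self, rows):
        self._rows = list(rows)
        self._mode = None  # None until resolved, then "array" or "each"

    def _resolve(self, mode):
        # First use fixes the cursor's mode; any conflicting use is an error.
        if self._mode not in (None, mode):
            raise RuntimeError("cursor already resolved as " + self._mode)
        self._mode = mode

    def count(self):
        self._resolve("array")
        return len(self._rows)

    def each(self, fn):
        self._resolve("each")
        for row in self._rows:
            fn(row)
```

Calling `count()` after `each()` (or the reverse) raises, mirroring the "quantum state" rule that sort/limit/skip must be applied before first use.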
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>FST - Adrian Zorriketa</title>
    <meta name="description" content="Keep track of the statistics from Adrian Zorriketa. Average heat score, heat wins, heat wins percentage, epic heats road to the final">
    <meta name="author" content="">

    <link rel="apple-touch-icon" sizes="57x57" href="/favicon/apple-icon-57x57.png">
    <link rel="apple-touch-icon" sizes="60x60" href="/favicon/apple-icon-60x60.png">
    <link rel="apple-touch-icon" sizes="72x72" href="/favicon/apple-icon-72x72.png">
    <link rel="apple-touch-icon" sizes="76x76" href="/favicon/apple-icon-76x76.png">
    <link rel="apple-touch-icon" sizes="114x114" href="/favicon/apple-icon-114x114.png">
    <link rel="apple-touch-icon" sizes="120x120" href="/favicon/apple-icon-120x120.png">
    <link rel="apple-touch-icon" sizes="144x144" href="/favicon/apple-icon-144x144.png">
    <link rel="apple-touch-icon" sizes="152x152" href="/favicon/apple-icon-152x152.png">
    <link rel="apple-touch-icon" sizes="180x180" href="/favicon/apple-icon-180x180.png">
    <link rel="icon" type="image/png" sizes="192x192" href="/favicon/android-icon-192x192.png">
    <link rel="icon" type="image/png" sizes="32x32" href="/favicon/favicon-32x32.png">
    <link rel="icon" type="image/png" sizes="96x96" href="/favicon/favicon-96x96.png">
    <link rel="icon" type="image/png" sizes="16x16" href="/favicon/favicon-16x16.png">
    <link rel="manifest" href="/manifest.json">
    <meta name="msapplication-TileColor" content="#ffffff">
    <meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
    <meta name="theme-color" content="#ffffff">

    <meta property="og:title" content="Fantasy Surfing tips"/>
    <meta property="og:image" content="https://fantasysurfingtips.com/img/just_waves.png"/>
    <meta property="og:description" content="See how great Adrian Zorriketa is surfing this year"/>

    <!-- Bootstrap Core CSS - Uses Bootswatch Flatly Theme: https://bootswatch.com/flatly/ -->
    <link href="https://fantasysurfingtips.com/css/bootstrap.css" rel="stylesheet">

    <!-- Custom CSS -->
    <link href="https://fantasysurfingtips.com/css/freelancer.css" rel="stylesheet">
    <link href="https://cdn.datatables.net/plug-ins/1.10.7/integration/bootstrap/3/dataTables.bootstrap.css" rel="stylesheet" />

    <!-- Custom Fonts -->
    <link href="https://fantasysurfingtips.com/font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
    <link href="https://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
    <link href="https://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic" rel="stylesheet" type="text/css">
    <link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/morris.js/0.5.1/morris.css">

    <script src="https://code.jquery.com/jquery-2.x-git.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-ujs/1.2.1/rails.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/raphael/2.1.0/raphael-min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/morris.js/0.5.1/morris.min.js"></script>
    <script src="https://www.w3schools.com/lib/w3data.js"></script>

    <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
    <script>
      (adsbygoogle = window.adsbygoogle || []).push({
        google_ad_client: "ca-pub-2675412311042802",
        enable_page_level_ads: true
      });
    </script>
</head>
<body>
    <div id="fb-root"></div>
    <script>(function(d, s, id) {
      var js, fjs = d.getElementsByTagName(s)[0];
      if (d.getElementById(id)) return;
      js = d.createElement(s); js.id = id;
      js.src = "//connect.facebook.net/en_GB/sdk.js#xfbml=1&version=v2.6";
      fjs.parentNode.insertBefore(js, fjs);
    }(document, 'script', 'facebook-jssdk'));</script>

    <!-- Navigation -->
    <div w3-include-html="https://fantasysurfingtips.com/layout/header.html"></div>

    <!-- Header -->
    <div w3-include-html="https://fantasysurfingtips.com/layout/sponsor.html"></div>

    <section>
      <div class="container">
        <div class="row">
          <div class="col-sm-3">
            <div class="col-sm-2">
            </div>
            <div class="col-sm-8">
              <!-- <img src="http://fantasysurfingtips.com/img/surfers/azor.png" class="img-responsive" alt=""> -->
              <h3 style="text-align:center;">Adrian Zorriketa</h3>
              <a href="https://twitter.com/share" class="" data-via="fansurfingtips"><i class="fa fa-twitter"></i> Share on Twitter</a>
              <br/>
              <a class="fb-xfbml-parse-ignore" target="_blank" href="https://www.facebook.com/sharer/sharer.php?u=http%3A%2F%2Ffantasysurfingtips.com%2Fsurfers%2Fazor&amp;src=sdkpreparse"><i class="fa fa-facebook"></i> Share on Facebook</a>
            </div>
            <div class="col-sm-2">
            </div>
          </div>
          <div class="col-sm-3 portfolio-item">
          </div>
          <div class="col-sm-3 portfolio-item">
            <h6 style="text-align:center;">Avg Heat Score (FST DATA)</h6>
            <h1 style="text-align:center;">8.38</h1>
          </div>
        </div>

        <hr/>
        <h4 style="text-align:center;">Competitions</h4>

        <hr/>
        <h4 style="text-align:center;">Heat Stats (FST data)</h4>
        <div class="row">
          <div class="col-sm-4 portfolio-item">
            <h6 style="text-align:center;">Heats</h6>
            <h2 style="text-align:center;">8</h2>
          </div>
          <div class="col-sm-4 portfolio-item">
            <h6 style="text-align:center;">Heat wins</h6>
            <h2 style="text-align:center;">0</h2>
          </div>
          <div class="col-sm-4 portfolio-item">
            <h6 style="text-align:center;">HEAT WINS PERCENTAGE</h6>
            <h2 style="text-align:center;">0.0%</h2>
          </div>
        </div>

        <hr/>
        <h4 style="text-align:center;">Avg Heat Score progression</h4>
        <div id="avg_chart" style="height: 250px;"></div>

        <hr/>
        <h4 style="text-align:center;">Heat stats progression</h4>
        <div id="heat_chart" style="height: 250px;"></div>

        <hr/>
        <style type="text/css">
          .heats-all{
            z-index: 3;
            margin-left: 5px;
            cursor: pointer;
          }
        </style>

        <div class="container">
          <div id="disqus_thread"></div>
          <script>
            /**
             * RECOMMENDED CONFIGURATION VARIABLES: EDIT AND UNCOMMENT THE SECTION BELOW TO INSERT DYNAMIC VALUES FROM YOUR PLATFORM OR CMS.
             * LEARN WHY DEFINING THESE VARIABLES IS IMPORTANT: https://disqus.com/admin/universalcode/#configuration-variables
             */
            var disqus_config = function () {
              this.page.url = "http://fantasysurfingtips.com/surfers/azor"; // Replace PAGE_URL with your page's canonical URL variable
              this.page.identifier = '2784'; // Replace PAGE_IDENTIFIER with your page's unique identifier variable
            };
            (function() { // DON'T EDIT BELOW THIS LINE
              var d = document, s = d.createElement('script');
              s.src = '//fantasysurfingtips.disqus.com/embed.js';
              s.setAttribute('data-timestamp', +new Date());
              (d.head || d.body).appendChild(s);
            })();
          </script>
          <noscript>Please enable JavaScript to view the <a href="https://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
        </div>
      </div>
    </section>

    <script type="text/javascript">
      $('.heats-all').click(function(){
        $('.heats-all-stat').css('display', 'none')
        $('#'+$(this).attr('id')+'-stat').css('display', 'block')
      });

      $('.heats-2016').click(function(){
        $('.heats-2016-stat').css('display', 'none')
        $('#'+$(this).attr('id')+'-stat').css('display', 'block')
      });

      $('document').ready(function(){
        new Morris.Line({
          // ID of the element in which to draw the chart.
          element: 'avg_chart',
          // Chart data records -- each entry in this array corresponds to a point on
          // the chart.
          data: [],
          // The name of the data record attribute that contains x-values.
          xkey: 'year',
          // A list of names of data record attributes that contain y-values.
          ykeys: ['avg', 'avg_all'],
          // Labels for the ykeys -- will be displayed when you hover over the
          // chart.
          labels: ['Avg score in year', 'Avg score FST DATA']
        });

        new Morris.Bar({
          // ID of the element in which to draw the chart.
          element: 'heat_chart',
          // Chart data records -- each entry in this array corresponds to a point on
          // the chart.
          data: [],
          // The name of the data record attribute that contains x-values.
          xkey: 'year',
          // A list of names of data record attributes that contain y-values.
          ykeys: ['heats', 'wins', 'percs'],
          // Labels for the ykeys -- will be displayed when you hover over the
          // chart.
          labels: ['Heats surfed', 'Heats won', 'Winning percentage']
        });
      });
    </script>

    <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>

    <!-- Footer -->
    <div w3-include-html="https://fantasysurfingtips.com/layout/footer.html"></div>

    <script type="text/javascript">
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

      ga('create', 'UA-74337819-1', 'auto'); // Replace with your property ID.
      ga('send', 'pageview');
    </script>

    <script>
      w3IncludeHTML();
    </script>

    <!-- jQuery -->
    <script src="https://fantasysurfingtips.com/js/jquery.js"></script>
    <script src="https://cdn.datatables.net/1.10.7/js/jquery.dataTables.min.js"></script>

    <!-- Bootstrap Core JavaScript -->
    <script src="https://fantasysurfingtips.com/js/bootstrap.min.js"></script>

    <!-- Plugin JavaScript -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js"></script>
    <script src="https://fantasysurfingtips.com/js/classie.js"></script>
    <script src="https://fantasysurfingtips.com/js/cbpAnimatedHeader.js"></script>

    <!-- Contact Form JavaScript -->
    <script src="https://fantasysurfingtips.com/js/jqBootstrapValidation.js"></script>
    <script src="https://fantasysurfingtips.com/js/contact_me.js"></script>

    <!-- Custom Theme JavaScript -->
    <script src="https://fantasysurfingtips.com/js/freelancer.js"></script>
    <script src="https://cdn.datatables.net/1.10.12/js/jquery.dataTables.min.js"></script>
    <script src="https://cdn.datatables.net/1.10.12/js/dataTables.bootstrap.min.js"></script>
</body>
</html>
chicofilho/fst
surfers/mqs/azor.html
HTML
apache-2.0
11,250
<!DOCTYPE html>
<html>
<head>
    <link href="js/bootstrap.min.css" rel="stylesheet" type="text/css">
    <link href="css/moncss.css" rel="stylesheet" type="text/css">
    <script type="text/javascript" src="js/jquery-3.1.1.min.js"></script>
    <script type="text/javascript" src="js/typeahead.min.js"></script>
    <script type="text/javascript" src="js/bootstrap.min.js"></script>
    <script type="text/javascript" src="js/formulaire.js"></script>
    <title>Comparison of United States presidential candidates, 2008 - Tax policy</title>
    <meta charset="utf-8">
</head>
<body>
<div class="container">
  <div class="row">
    <div class="col-md-2">&nbsp;</div>
    <div class="col-md-8">
      <h1>Comparison of United States presidential candidates, 2008 - Tax policy</h1>
    </div>
    <div class="col-md-2">&nbsp;</div>
  </div>
  <div class="row">
    <fieldset>
      <div class="col-md-2">&nbsp;</div>
      <div class="col-md-8">
        <form action="" method="post">
          <legend style="color:red;font-weight: bold;font-style:italic;text-align: center;">Formulaire</legend>
          </br>
          <label>Income : </label></br>
          <input id="0" type="text" name="Feature"></br>
          </br>
          <label>Projected Federal income tax changes in 2009 assuming all tax proposals were adopted by congress and the budget remains the same.Yellow is for the projected tax change most favorable to people in that income bracket. :</label></br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$227K-$603K :</label></br>
          <input type="checkbox" name="Feature" >-$7</br>
          <input type="checkbox" name="Feature" >871</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$112K-$161K :</label></br>
          <input type="checkbox" name="Feature" >-$2</br>
          <input type="checkbox" name="Feature" >204</br>
          <input type="checkbox" name="Feature" >614</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$66K-$112K :</label></br>
          <input type="checkbox" name="Feature" >-$1</br>
          <input type="checkbox" name="Feature" >290</br>
          <input type="checkbox" name="Feature" >009</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>CNN, Tax Policy Center, BarackObama.com, and JohnMcCain.com :</label></br>
          <input type="checkbox" name="Feature" >CNN</br>
          <input type="checkbox" name="Feature" >Tax Policy Center</br>
          <input type="checkbox" name="Feature" >BarackObama.com</br>
          <input type="checkbox" name="Feature" >and JohnMcCain.com</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$161K-$227K :</label></br>
          <input type="checkbox" name="Feature" >-$2</br>
          <input type="checkbox" name="Feature" >789</br>
          <input type="checkbox" name="Feature" >-$4</br>
          <input type="checkbox" name="Feature" >380</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$603K and up :</label></br>
          <input type="checkbox" name="Feature" >-$45</br>
          <input type="checkbox" name="Feature" >361</br>
          <input type="checkbox" name="Feature" >+$115</br>
          <input type="checkbox" name="Feature" >974</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>Under $19K : </label></br>
          <input id="1" type="text" name="Feature"></br>
          </br>
          <label>Over $2.9M :</label></br>
          <input type="checkbox" name="Feature" >+$701</br>
          <input type="checkbox" name="Feature" >885</br>
          <input type="checkbox" name="Feature" >-$269</br>
          <input type="checkbox" name="Feature" >364</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$38K-$66K :</label></br>
          <input type="checkbox" name="Feature" >-$1</br>
          <input type="checkbox" name="Feature" >042</br>
          <input type='checkbox'><label>autre:</label></br><input type="text"/>
          </br>
          </br>
          <label>$19K-$38K : </label></br>
          <input id="2" type="text" name="Feature"></br>
          <script>
            $("#0").typeahead({
              name:"list0",
              local : ['Average tax bill','']
            });
            $("#1").typeahead({
              name:"list1",
              local : ['-$567','-$19','']
            });
            $("#2").typeahead({
              name:"list2",
              local : ['-$892','-$113','']
            });
          </script>
          </br><input type="submit" class="btn btn-info" value="Ajouter un produit" />
        </form>
      </div>
    </fieldset>
    <div class="col-md-2">&nbsp;</div>
  </div>
</div>
</body>
</html>
Ophelle/PDL
pcms/Comparison_of_United_States_presidential_candidates,_2008_2.pcm.html
HTML
apache-2.0
5,104
package io.quarkus.narayana.lra.runtime;

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Produces;

import io.narayana.lra.client.internal.proxy.ParticipantProxyResource;
import io.narayana.lra.client.internal.proxy.nonjaxrs.LRAParticipantRegistry;

@Dependent
public class NarayanaLRAProducers {

    @Produces
    public LRAParticipantRegistry lraParticipantRegistry() {
        return NarayanaLRARecorder.registry;
    }

    @Produces
    public ParticipantProxyResource participantProxyResource() {
        return new ParticipantProxyResource();
    }
}
quarkusio/quarkus
extensions/narayana-lra/runtime/src/main/java/io/quarkus/narayana/lra/runtime/NarayanaLRAProducers.java
Java
apache-2.0
585
/* Copyright 2013-2014 IBM Corp.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *	http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/*
 * IBM System P FSP (Flexible Service Processor)
 */
#ifndef __FSP_H
#define __FSP_H

#include <skiboot.h>
#include <psi.h>

/* Current max number of FSPs
 * one primary and one secondary is all we support
 */
#define FSP_MAX			2

/* Command protocol.
 *
 * Commands have a byte class and a byte subcommand. With the exception
 * of some HMC related commands (class 0xe0) which we don't support,
 * only one outstanding command is allowed for a given class.
 *
 * Note: 0xCE and 0xCF fall into the same class, ie, only one of them can
 * be outstanding.
 *
 * A command is outstanding until it has been acknowledged. This doesn't
 * imply a response, the response can come later.
 */

/* Protocol status error codes used by the protocol */
#define FSP_STATUS_SUCCESS		0x00	/* Command successful */
#define FSP_STATUS_MORE_DATA		0x02	/* Success, EOF not reached */
#define FSP_STATUS_DATA_INLINE		0x11	/* Data inline in mbox */
#define FSP_STATUS_INVALID_SUBCMD	0x20
#define FSP_STATUS_INVALID_MOD		0x21
#define FSP_STATUS_INVALID_DATA		0x22
#define FSP_STATUS_INVALID_DPOSTATE	0x23
#define FSP_STATUS_DMA_ERROR		0x24
#define FSP_STATUS_INVALID_CMD		0x2c
#define FSP_STATUS_SEQ_ERROR		0x2d
#define FSP_STATUS_BAD_STATE		0x2e
#define FSP_STATUS_NOT_SUPPORTED	0x2f
#define FSP_STATUS_FILE_TOO_LARGE	0x43
#define FSP_STATUS_FLASH_INPROGRESS	0x61
#define FSP_STATUS_FLASH_NOPROGRESS	0x62
#define FSP_STATUS_FLASH_INVALID_SIDE	0x63
#define FSP_STATUS_GENERIC_ERROR	0xfe
#define FSP_STATUS_EOF_ERROR		0x02
#define FSP_STATUS_DMA_ERROR		0x24
#define FSP_STATUS_BUSY			0x3e
#define FSP_STATUS_FLASH_BUSY		0x3f
#define FSP_STATUS_INVALID_SUBID	0x41
#define FSP_STATUS_LENGTH_ERROR		0x42
#define FSP_STAUS_INVALID_HMC_ID	0x51
#define FSP_STATUS_SPCN_ERROR		0xA8	/* SPCN error */
#define FSP_STATUS_INVALID_LC		0xC0	/* Invalid location code */
#define FSP_STATUS_TOD_RESET		0xA9	/* TOD reset due to invalid state at POR */
#define FSP_STATUS_TOD_PERMANENT_ERROR	0xAF	/* Permanent error in TOD */

/*
 * FSP registers
 *
 * All of the below register definitions come from the FSP0 "Black Widow" spec
 * They are the same for FSP1 except they are presented big-endian vs
 * little-endian for FSP0 -- which used PCI
 * all regs are 4 bytes wide, and we read the larger data areas in 4 byte
 * granularity as well
 *
 * there are actually two defined sets of MBX registers
 * MBX2 can't generate interrupts to the host and only MBX1 is currently
 * used by firmware running on the FSP, so we're mostly ignoring MBX2
 */

/* Device Reset Control Register */
#define FSP_DRCR_REG			0x00
#define FSP_DRCR_CLR_REG		0x04

/* Bit masks for DRCR */
#define FSP_DRCR_CMD_VALID		PPC_BIT32(16)
#define FSP_DRCR_TERMINATE		PPC_BIT32(17)
#define FSP_DRCR_PREP_FOR_RESET		PPC_BIT32(23)
#define FSP_DRCR_CLEAR_DISR		PPC_BIT32(30)

/* DRCR commands need the CMD_VALID bit set */
#define FSP_PREP_FOR_RESET_CMD		(FSP_DRCR_CMD_VALID | \
					 FSP_DRCR_PREP_FOR_RESET)
#define FSP_DRCR_ACK_MASK		(0xff << 8)

/* Device Immediate Status Register */
#define FSP_DISR_REG			0x08
#define FSP_DISR_CLR_REG		0x0C

/* Bit masks for DISR */
#define FSP_DISR_FSP_UNIT_CHECK		PPC_BIT32(16)
#define FSP_DISR_FSP_RUNTIME_TERM	PPC_BIT32(21)
#define FSP_DISR_FSP_RR_COMPLETE	PPC_BIT32(22)
#define FSP_DISR_FSP_FLASH_TERM		PPC_BIT32(23)
#define FSP_DISR_RUNTIME_STATE_SYNCD	PPC_BIT32(24)
#define FSP_DISR_DBG_IN_PROGRESS	PPC_BIT32(25)
#define FSP_DISR_FSP_IN_RR		PPC_BIT32(26)
#define FSP_DISR_FSP_REBOOT_IN_PROGRESS	PPC_BIT32(27)
#define FSP_DISR_CRIT_OP_IN_PROGRESS	PPC_BIT32(28)
#define FSP_DISR_STATUS_ACK_RXD		PPC_BIT32(31)

#define FSP_DISR_HIR_TRIGGER_MASK	(FSP_DISR_FSP_UNIT_CHECK | \
					 FSP_DISR_FSP_RUNTIME_TERM | \
					 FSP_DISR_FSP_FLASH_TERM)

/* The host version of the control register shares bits with the FSP's
 * control reg. Those bits are defined such that one side can set
 * a bit and the other side can clear it
 */
#define FSP_MBX1_HCTL_REG		0x080	/* AKA DSCR1 */
#define FSP_MBX1_FCTL_REG		0x090
#define FSP_MBX2_HCTL_REG		0x0a0	/* AKA DSCR2 */
#define FSP_MBX2_FCTL_REG		0x0b0

/* Bits in the control reg */
#define FSP_MBX_CTL_PTS			(1 << 31)
#define FSP_MBX_CTL_ABORT		(1 << 30)
#define FSP_MBX_CTL_SPPEND		(1 << 29)
#define FSP_MBX_CTL_HPEND		(1 << 28)
#define FSP_MBX_CTL_XDN			(1 << 26)
#define FSP_MBX_CTL_XUP			(1 << 25)
#define FSP_MBX_CTL_HCHOST_MASK		(0xf << 20)
#define FSP_MBX_CTL_HCHOST_SHIFT	20
#define FSP_MBX_CTL_DCHOST_MASK		(0xff << 12)
#define FSP_MBX_CTL_DCHOST_SHIFT	12
#define FSP_MBX_CTL_HCSP_MASK		(0xf << 8)
#define FSP_MBX_CTL_HCSP_SHIFT		8
#define FSP_MBX_CTL_DCSP_MASK		(0xff)
#define FSP_MBX_CTL_DCSP_SHIFT		0

/* Three header registers owned by the host */
#define FSP_MBX1_HHDR0_REG		0x84
#define FSP_MBX1_HHDR1_REG		0x88
#define FSP_MBX1_HHDR2_REG		0x8C
#define FSP_MBX2_HHDR0_REG		0xa4
#define FSP_MBX2_HHDR1_REG		0xa8
#define FSP_MBX2_HHDR2_REG		0xaC

/* SP Doorbell Error Status register */
#define FSP_SDES_REG			0xc0

/* Host Doorbell Error Status register */
#define FSP_HDES_REG			0xc4

/* Bit definitions for both SDES and HDES
 *
 * Notes:
 *
 * - CLR: is written to clear the status and always reads
 *   as 0. It can be used to detect an error state (a HB
 *   freeze will return all 1's)
 * - ILLEGAL: illegal operation such as host trying to write
 *   to an FSP only register etc...
 * - WFULL: set if host tried to write to the SP doorbell while
 *   the pending bit is still set
 * - REMPTY: tried to read while host pending bit not set
 * - PAR: SP RAM parity error
 */
#define FSP_DBERRSTAT_ILLEGAL1		(1 << 27)
#define FSP_DBERRSTAT_WFULL1		(1 << 26)
#define FSP_DBERRSTAT_REMPTY1		(1 << 25)
#define FSP_DBERRSTAT_PAR1		(1 << 24)
#define FSP_DBERRSTAT_CLR1		(1 << 16)
#define FSP_DBERRSTAT_ILLEGAL2		(1 << 11)
#define FSP_DBERRSTAT_WFULL2		(1 << 10)
#define FSP_DBERRSTAT_REMPTY2		(1 << 9)
#define FSP_DBERRSTAT_PAR2		(1 << 8)
#define FSP_DBERRSTAT_CLR2		(1 << 0)

/* Host Doorbell Interrupt Register and mask
 *
 * Note that while HDIR has bits for MBX2, only
 * MBX1 can actually generate interrupts. Thus only the
 * MBX1 bits are implemented in the mask register.
 */
#define FSP_HDIR_REG			0xc8
#define FSP_HDIM_SET_REG		0xcc
#define FSP_HDIM_CLR_REG		0xd0
#define FSP_DBIRQ_ERROR2		(1 << 10)
#define FSP_DBIRQ_XUP2			(1 << 9)
#define FSP_DBIRQ_HPEND2		(1 << 8)
#define FSP_DBIRQ_ERROR1		(1 << 2)
#define FSP_DBIRQ_XUP1			(1 << 1)
#define FSP_DBIRQ_HPEND1		(1 << 0)
#define FSP_DBIRQ_MBOX1			(FSP_DBIRQ_ERROR1 | FSP_DBIRQ_XUP1 | \
					 FSP_DBIRQ_HPEND1)
#define FSP_DBIRQ_MBOX2			(FSP_DBIRQ_ERROR2 | FSP_DBIRQ_XUP2 | \
					 FSP_DBIRQ_HPEND2)
#define FSP_DBIRQ_ALL			(FSP_DBIRQ_MBOX1 | FSP_DBIRQ_MBOX2)

/* Doorbell Interrupt Register (FSP internal interrupt latch
 * read-only on host side
 */
#define FSP_PDIR_REG			0xd4
/* And associated mask */
#define FSP_PDIM_SET_REG		0xd8
#define FSP_PDIM_CLR_REG		0xdc

/* Bits for the above */
#define FSP_PDIRQ_ABORT2		(1 << 7)
#define FSP_PDIRQ_ABORT1		(1 << 6)
#define FSP_PDIRQ_ERROR2		(1 << 5)
#define FSP_PDIRQ_ERROR1		(1 << 4)
#define FSP_PDIRQ_XDN2			(1 << 3)
#define FSP_PDIRQ_XDN1			(1 << 2)
#define FSP_PDIRQ_SPPEND2		(1 << 1)
#define FSP_PDIRQ_SPPEND1		(1 << 0)

/* FSP owned headers */
#define FSP_MBX1_FHDR0_REG		0x094
#define FSP_MBX1_FHDR1_REG		0x098
#define FSP_MBX1_FHDR2_REG		0x09C
#define FSP_MBX2_FHDR0_REG		0x0b4
#define FSP_MBX2_FHDR1_REG		0x0b8
#define FSP_MBX2_FHDR2_REG		0x0bC

/* Data areas, we can only write to host data, and read from FSP data
 *
 * Each area is 0x140 bytes long
 */
#define FSP_MBX1_HDATA_AREA		0x100
#define FSP_MBX1_FDATA_AREA		0x200
#define FSP_MBX2_HDATA_AREA		0x300
#define FSP_MBX2_FDATA_AREA		0x400

/* These are scratch registers */
#define FSP_SCRATCH0_REG		0xe0
#define FSP_SCRATCH1_REG		0xe4
#define FSP_SCRATCH2_REG		0xe8
#define FSP_SCRATCH3_REG		0xec

/* This is what the cmd_sub_mod will have for FSP_MCLASS_RR_EVENT */
#define FSP_RESET_START			0x1
#define FSP_RELOAD_COMPLETE		0x2

/*
 * Message classes
 */

/* The FSP_MCLASS_RR_EVENT is a special message class that doesn't
 * participate in mbox event related activities. Its relevant only
 * for hypervisor internal use. So, handle it specially for command
 * class extraction too.
 */
#define FSP_MCLASS_RR_EVENT		0xaa	/* see FSP_R/R defines above */
#define FSP_MCLASS_FIRST		0xce
#define FSP_MCLASS_SERVICE		0xce
#define FSP_MCLASS_IPL			0xcf
#define FSP_MCLASS_PCTRL_MSG		0xd0
#define FSP_MCLASS_PCTRL_ABORTS		0xd1
#define FSP_MCLASS_ERR_LOG		0xd2
#define FSP_MCLASS_CODE_UPDATE		0xd3
#define FSP_MCLASS_FETCH_SPDATA		0xd4
#define FSP_MCLASS_FETCH_HVDATA		0xd5
#define FSP_MCLASS_NVRAM		0xd6
#define FSP_MCLASS_MBOX_SURV		0xd7
#define FSP_MCLASS_RTC			0xd8
#define FSP_MCLASS_SMART_CHIP		0xd9
#define FSP_MCLASS_INDICATOR		0xda
#define FSP_MCLASS_HMC_INTFMSG		0xe0
#define FSP_MCLASS_HMC_VT		0xe1
#define FSP_MCLASS_HMC_BUFFERS		0xe2
#define FSP_MCLASS_SHARK		0xe3
#define FSP_MCLASS_MEMORY_ERR		0xe4
#define FSP_MCLASS_CUOD_EVENT		0xe5
#define FSP_MCLASS_HW_MAINT		0xe6
#define FSP_MCLASS_VIO			0xe7
#define FSP_MCLASS_SRC_MSG		0xe8
#define FSP_MCLASS_DATA_COPY		0xe9
#define FSP_MCLASS_TONE			0xea
#define FSP_MCLASS_VIRTUAL_NVRAM	0xeb
#define FSP_MCLASS_TORRENT		0xec
#define FSP_MCLASS_NODE_PDOWN		0xed
#define FSP_MCLASS_DIAG			0xee
#define FSP_MCLASS_PCIE_LINK_TOPO	0xef
#define FSP_MCLASS_OCC			0xf0
#define FSP_MCLASS_LAST			0xf0

/*
 * Commands are provided in rxxyyzz form where:
 *
 *   - r is 0: no response or 1: response expected
 *   - xx is class
 *   - yy is subcommand
 *   - zz is mod
 *
 * WARNING: We only set the r bit for HV->FSP commands
 *          long run, we want to remove use of that bit
 *          and instead have a table of all commands in
 *          the FSP driver indicating which ones take a
 *          response...
 */

/*
 * Class 0xCF
 */
#define FSP_CMD_OPL		0x0cf7100	/* HV->FSP: Operational Load Compl. */
#define FSP_CMD_HV_STATE_CHG	0x0cf0200	/* FSP->HV: Request HV state change */
#define FSP_RSP_HV_STATE_CHG	0x0cf8200
#define FSP_CMD_SP_NEW_ROLE	0x0cf0700	/* FSP->HV: FSP assuming a new role */
#define FSP_RSP_SP_NEW_ROLE	0x0cf8700
#define FSP_CMD_SP_RELOAD_COMP	0x0cf0102	/* FSP->HV: FSP reload complete */

/*
 * Class 0xCE
 */
#define FSP_CMD_ACK_DUMP	0x1ce0200	/* HV->FSP: Dump ack */
#define FSP_CMD_HYP_MDST_TABLE	0x1ce2600	/* HV->FSP: Sapphire MDST table */
#define FSP_CMD_CONTINUE_IPL	0x0ce7000	/* FSP->HV: HV has control */
#define FSP_RSP_SYS_DUMP_OLD	0x0ce7800	/* FSP->HV: Sys Dump Available */
#define FSP_RSP_SYS_DUMP	0x0ce7802	/* FSP->HV: Sys Dump Available */
#define FSP_RSP_RES_DUMP	0x0ce7807	/* FSP->HV: Resource Dump Available */
#define FSP_CMD_CONTINUE_ACK	0x0ce5700	/* HV->FSP: HV acks CONTINUE IPL */
#define FSP_CMD_HV_FUNCTNAL	0x1ce5707	/* HV->FSP: Set HV functional state */
#define FSP_CMD_FSP_FUNCTNAL	0x0ce5708	/* FSP->HV: FSP functional state */
#define FSP_CMD_HV_QUERY_CAPS	0x1ce0400	/* HV->FSP: Query capabilities */
#define FSP_RSP_HV_QUERY_CAPS	0x1ce8400
#define FSP_CMD_SP_QUERY_CAPS	0x0ce0501	/* FSP->HV */
#define FSP_RSP_SP_QUERY_CAPS	0x0ce8500
#define FSP_CMD_QUERY_SPARM	0x1ce1200	/* HV->FSP: System parameter query */
#define FSP_RSP_QUERY_SPARM	0x0ce9200	/* FSP->HV: System parameter resp */
#define FSP_CMD_SET_SPARM_1	0x1ce1301	/* HV->FSP: Set system parameter */
#define FSP_CMD_SET_SPARM_2	0x1ce1302	/* HV->FSP: Set system parameter TCE */
#define FSP_RSP_SET_SPARM	0x0ce9300	/* FSP->HV: Set system parameter resp */
#define FSP_CMD_SP_SPARM_UPD_0	0x0ce1600	/* FSP->HV: Sysparm updated no data */
#define FSP_CMD_SP_SPARM_UPD_1	0x0ce1601	/* FSP->HV: Sysparm updated data */
#define FSP_CMD_POWERDOWN_NORM	0x1ce4d00	/* HV->FSP: Normal power down */
#define FSP_CMD_POWERDOWN_QUICK	0x1ce4d01	/* HV->FSP: Quick power down */
#define FSP_CMD_POWERDOWN_PCIRS	0x1ce4d02	/* HV->FSP: PCI cfg reset power dwn */
#define FSP_CMD_REBOOT		0x1ce4e00	/* HV->FSP: Standard IPL */
#define FSP_CMD_DEEP_REBOOT	0x1ce4e04	/* HV->FSP: Deep IPL */
#define FSP_CMD_INIT_DPO	0x0ce5b00	/* FSP->HV: Initialize Delayed Power Off */
#define FSP_RSP_INIT_DPO	0x0cedb00	/* HV->FSP: Response for DPO init command */
#define FSP_CMD_PANELSTATUS	0x0ce5c00	/* FSP->HV */
#define FSP_CMD_PANELSTATUS_EX1	0x0ce5c02	/* FSP->HV */
#define FSP_CMD_PANELSTATUS_EX2	0x0ce5c03	/* FSP->HV */
#define FSP_CMD_ERRLOG_PHYP_ACK	0x1ce0800	/* HV->FSP */
#define FSP_RSP_ERRLOG_PHYP_ACK	0x0ce8800	/* FSP->HV */
#define FSP_CMD_ERRLOG_GET_PLID	0x0ce0900	/* FSP->HV: Get PLID */
#define FSP_RSP_ERRLOG_GET_PLID	0x0ce8900	/* HV->FSP */
#define FSP_CMD_GET_IPL_SIDE	0x1ce0600	/* HV->FSP: Get IPL side and speed */
#define FSP_CMD_SET_IPL_SIDE	0x1ce0780	/* HV->FSP: Set next IPL side */
#define FSP_CMD_PCI_POWER_CONF	0x1ce1b00	/* HV->FSP: Send PCIe list to FSP */
#define FSP_CMD_STATUS_REQ	0x1ce4800	/* HV->FSP: Request normal panel status */
#define FSP_CMD_STATUS_EX1_REQ	0x1ce4802	/* HV->FSP: Request extended 1 panel status */
#define FSP_CMD_STATUS_EX2_REQ	0x1ce4803	/* HV->FSP: Request extended 2 panel status */
#define FSP_CMD_TPO_WRITE	0x1ce4301	/* HV->FSP */
#define FSP_CMD_TPO_READ	0x1ce4201	/* FSP->HV */

/*
 * Class 0xD2
 */
#define FSP_CMD_CREATE_ERRLOG		0x1d21000	/* HV->FSP */
#define FSP_RSP_CREATE_ERRLOG		0x0d29000	/* FSP->HV */
#define FSP_CMD_ERRLOG_NOTIFICATION	0x0d25a00	/* FSP->HV */
#define FSP_RSP_ERRLOG_NOTIFICATION	0x0d2da00	/* HV->FSP */
#define FSP_RSP_ELOG_NOTIFICATION_ERROR	0x1d2dafe	/* HV->FSP */
#define FSP_CMD_FSP_DUMP_INIT		0x1d21200	/* HV->FSP: FSP dump init */

/*
 * Class 0xD0
 */
#define FSP_CMD_SPCN_PASSTHRU	0x1d05400
/* HV->FSP */ #define FSP_RSP_SPCN_PASSTHRU 0x0d0d400 /* FSP->HV */ /* * Class 0xD3 */ #define FSP_CMD_FLASH_START 0x01d30101 /* HV->FSP: Code update start */ #define FSP_CMD_FLASH_COMPLETE 0x01d30201 /* HV->FSP: Code update complete */ #define FSP_CMD_FLASH_ABORT 0x01d302ff /* HV->FSP: Code update abort */ #define FSP_CMD_FLASH_WRITE 0x01d30300 /* HV->FSP: Write LID */ #define FSP_CMD_FLASH_DEL 0x01d30500 /* HV->FSP: Delete LID */ #define FSP_CMD_FLASH_NORMAL 0x01d30401 /* HV->FSP: Commit (T -> P) */ #define FSP_CMD_FLASH_REMOVE 0x01d30402 /* HV->FSP: Reject (P -> T) */ #define FSP_CMD_FLASH_SWAP 0x01d30403 /* HV->FSP: Swap */ #define FSP_CMD_FLASH_OUTC 0x00d30601 /* FSP->HV: Out of band commit */ #define FSP_CMD_FLASH_OUTR 0x00d30602 /* FSP->HV: Out of band reject */ #define FSP_CMD_FLASH_OUTS 0x00d30603 /* FSP->HV: Out of band swap */ #define FSP_CMD_FLASH_OUT_RSP 0x00d38600 /* HV->FSP: Out of band Resp */ #define FSP_CMD_FLASH_CACHE 0x00d30700 /* FSP->HV: Update LID cache */ #define FSP_CMD_FLASH_CACHE_RSP 0x00d38700 /* HV->FSP: Update LID cache Resp */ /* * Class 0xD4 */ #define FSP_CMD_FETCH_SP_DATA 0x1d40101 /* HV->FSP: Fetch & DMA data */ #define FSP_CMD_WRITE_SP_DATA 0x1d40201 /* HV->FSP: Write & DMA data */ /* Data set IDs for SP data commands */ #define FSP_DATASET_SP_DUMP 0x01 #define FSP_DATASET_HW_DUMP 0x02 #define FSP_DATASET_ERRLOG 0x03 /* error log entry */ #define FSP_DATASET_MASTER_LID 0x04 #define FSP_DATASET_NONSP_LID 0x05 #define FSP_DATASET_ELID_RDATA 0x06 #define FSP_DATASET_BLADE_PARM 0x07 #define FSP_DATASET_LOC_PORTMAP 0x08 #define FSP_DATASET_SYSIND_CAP 0x09 #define FSP_DATASET_FSP_RSRCDMP 0x0a #define FSP_DATASET_HBRT_BLOB 0x0b /* Adjustment to get T side LIDs */ #define ADJUST_T_SIDE_LID_NO 0x8000 /* * Class 0xD5 */ #define FSP_CMD_ALLOC_INBOUND 0x0d50400 /* FSP->HV: Allocate inbound buf. */ #define FSP_RSP_ALLOC_INBOUND 0x0d58400 /* * Class 0xD7 */ #define FSP_CMD_SURV_HBEAT 0x1d70000 /* ? */ #define FSP_CMD_SURV_ACK 0x0d78000 /* ?
*/ /* * Class 0xD8 */ #define FSP_CMD_READ_TOD 0x1d82000 /* HV->FSP */ #define FSP_CMD_READ_TOD_EXT 0x1d82001 /* HV->FSP */ #define FSP_CMD_WRITE_TOD 0x1d82100 /* HV->FSP */ #define FSP_CMD_WRITE_TOD_EXT 0x1d82101 /* HV->FSP */ /* * Class 0xDA */ #define FSP_CMD_GET_LED_LIST 0x00da1101 /* Location code information structure */ #define FSP_RSP_GET_LED_LIST 0x00da9100 #define FSP_CMD_RET_LED_BUFFER 0x00da1102 /* Location code buffer information */ #define FSP_RSP_RET_LED_BUFFER 0x00da9100 #define FSP_CMD_GET_LED_STATE 0x00da1103 /* Retrieve Indicator State */ #define FSP_RSP_GET_LED_STATE 0x00da9100 #define FSP_CMD_SET_LED_STATE 0x00da1104 /* Set Service Indicator State */ #define FSP_RSP_SET_LED_STATE 0x00da9100 #define FSP_CMD_GET_MTMS_LIST 0x00da1105 /* Get MTMS and config ID list */ #define FSP_RSP_GET_MTMS_LIST 0x00da9100 #define FSP_CMD_SET_ENCL_MTMS 0x00da1106 /* Set MTMS */ #define FSP_RSP_SET_ENCL_MTMS 0x00da9100 #define FSP_CMD_SET_ENCL_CNFG 0x00da1107 /* Set config ID */ #define FSP_RSP_SET_ENCL_CNFG 0x00da9100 #define FSP_CMD_CLR_INCT_ENCL 0x00da1108 /* Clear inactive address */ #define FSP_RSP_CLR_INCT_ENCL 0x00da9100 #define FSP_CMD_RET_MTMS_BUFFER 0x00da1109 /* Return MTMS buffer */ #define FSP_RSP_RET_MTMS_BUFFER 0x00da9100 #define FSP_CMD_ENCL_MCODE_INIT 0x00da110A /* Mcode update (Initiate download) */ #define FSP_RSP_ENCL_MCODE_INIT 0x00da9100 #define FSP_CMD_ENCL_MCODE_INTR 0x00da110B /* Mcode update (Interrupt download) */ #define FSP_RSP_ENCL_MCODE_INTR 0x00da9100 #define FSP_CMD_ENCL_POWR_TRACE 0x00da110D /* Enclosure power network trace */ #define FSP_RSP_ENCL_POWR_TRACE 0x00da9100 #define FSP_CMD_RET_ENCL_TRACE_BUFFER 0x00da110E /* Return power trace buffer */ #define FSP_RSP_RET_ENCL_TRACE_BUFFER 0x00da9100 #define FSP_CMD_GET_SPCN_LOOP_STATUS 0x00da110F /* Get SPCN loop status */ #define FSP_RSP_GET_SPCN_LOOP_STATUS 0x00da9100 #define FSP_CMD_INITIATE_LAMP_TEST 0x00da1300 /* Initiate LAMP test */ /* * Class 0xE0 * * HACK ALERT: We mark 
E00A01 (associate serial port) as not needing * a response. We need to do that because the FSP will send as a result * an Open Virtual Serial of the same class *and* expect a reply before * it will respond to associate serial port. That breaks our logic of * supporting only one cmd/resp outstanding per class. */ #define FSP_CMD_HMC_INTF_QUERY 0x0e00100 /* FSP->HV */ #define FSP_RSP_HMC_INTF_QUERY 0x0e08100 /* HV->FSP */ #define FSP_CMD_ASSOC_SERIAL 0x0e00a01 /* HV->FSP: Associate with a port */ #define FSP_RSP_ASSOC_SERIAL 0x0e08a00 /* FSP->HV */ #define FSP_CMD_UNASSOC_SERIAL 0x0e00b01 /* HV->FSP: Deassociate */ #define FSP_RSP_UNASSOC_SERIAL 0x0e08b00 /* FSP->HV */ #define FSP_CMD_OPEN_VSERIAL 0x0e00601 /* FSP->HV: Open serial session */ #define FSP_RSP_OPEN_VSERIAL 0x0e08600 /* HV->FSP */ #define FSP_CMD_CLOSE_VSERIAL 0x0e00701 /* FSP->HV: Close serial session */ #define FSP_RSP_CLOSE_VSERIAL 0x0e08700 /* HV->FSP */ #define FSP_CMD_CLOSE_HMC_INTF 0x0e00300 /* FSP->HV: Close HMC interface */ #define FSP_RSP_CLOSE_HMC_INTF 0x0e08300 /* HV->FSP */ /* * Class E1 */ #define FSP_CMD_VSERIAL_IN 0x0e10100 /* FSP->HV */ #define FSP_CMD_VSERIAL_OUT 0x0e10200 /* HV->FSP */ /* * Class E8 */ #define FSP_CMD_READ_SRC 0x1e84a40 /* HV->FSP */ #define FSP_CMD_DISP_SRC_INDIR 0x1e84a41 /* HV->FSP */ #define FSP_CMD_DISP_SRC_DIRECT 0x1e84a42 /* HV->FSP */ #define FSP_CMD_CLEAR_SRC 0x1e84b00 /* HV->FSP */ #define FSP_CMD_DIS_SRC_ECHO 0x1e87600 /* HV->FSP */ /* * Class EB */ #define FSP_CMD_GET_VNVRAM_SIZE 0x01eb0100 /* HV->FSP */ #define FSP_CMD_OPEN_VNVRAM 0x01eb0200 /* HV->FSP */ #define FSP_CMD_READ_VNVRAM 0x01eb0300 /* HV->FSP */ #define FSP_CMD_WRITE_VNVRAM 0x01eb0400 /* HV->FSP */ #define FSP_CMD_GET_VNV_STATS 0x00eb0500 /* FSP->HV */ #define FSP_RSP_GET_VNV_STATS 0x00eb8500 #define FSP_CMD_FREE_VNV_STATS 0x00eb0600 /* FSP->HV */ #define FSP_RSP_FREE_VNV_STATS 0x00eb8600 /* * Class 0xEE */ #define FSP_RSP_DIAG_LINK_ERROR 0x00ee1100 /* FSP->HV */ #define 
FSP_RSP_DIAG_ACK_TIMEOUT 0x00ee0000 /* FSP->HV */ /* * Class F0 */ #define FSP_CMD_LOAD_OCC 0x00f00100 /* FSP->HV */ #define FSP_RSP_LOAD_OCC 0x00f08100 /* HV->FSP */ #define FSP_CMD_LOAD_OCC_STAT 0x01f00300 /* HV->FSP */ #define FSP_CMD_RESET_OCC 0x00f00200 /* FSP->HV */ #define FSP_RSP_RESET_OCC 0x00f08200 /* HV->FSP */ #define FSP_CMD_RESET_OCC_STAT 0x01f00400 /* HV->FSP */ /* * Class E4 */ #define FSP_CMD_MEM_RES_CE 0x00e40300 /* FSP->HV: Memory resilience CE */ #define FSP_CMD_MEM_RES_UE 0x00e40301 /* FSP->HV: Memory resilience UE */ #define FSP_CMD_MEM_RES_UE_SCRB 0x00e40302 /* FSP->HV: UE detected by scrub */ #define FSP_RSP_MEM_RES 0x00e48300 /* HV->FSP */ #define FSP_CMD_MEM_DYN_DEALLOC 0x00e40500 /* FSP->HV: Dynamic mem dealloc */ #define FSP_RSP_MEM_DYN_DEALLOC 0x00e48500 /* HV->FSP */ /* * Functions exposed to the rest of skiboot */ /* An FSP message */ enum fsp_msg_state { fsp_msg_unused = 0, fsp_msg_queued, fsp_msg_sent, fsp_msg_wresp, fsp_msg_done, fsp_msg_timeout, fsp_msg_incoming, fsp_msg_response, fsp_msg_cancelled, }; struct fsp_msg { /* * User fields. Don't populate word0.seq (upper 16 bits), this * will be done by fsp_queue_msg() */ u8 dlen; /* not including word0/word1 */ u32 word0; /* seq << 16 | cmd */ u32 word1; /* mod << 8 | sub */ union { u32 words[14]; u8 bytes[56]; } data; /* Completion function. 
Called with no lock held */ void (*complete)(struct fsp_msg *msg); void *user_data; /* * Driver updated fields */ /* Current msg state */ enum fsp_msg_state state; /* Set if the message expects a response */ bool response; /* Response will be filed by driver when response received */ struct fsp_msg *resp; /* Internal queuing */ struct list_node link; }; /* This checks if a message is still "in progress" in the FSP driver */ static inline bool fsp_msg_busy(struct fsp_msg *msg) { switch(msg->state) { case fsp_msg_unused: case fsp_msg_done: case fsp_msg_timeout: case fsp_msg_response: /* A response is considered a completed msg */ return false; default: break; } return true; } static inline u32 fsp_msg_cmd(const struct fsp_msg *msg) { u32 cmd_sub_mod; cmd_sub_mod = (msg->word0 & 0xff) << 16; cmd_sub_mod |= (msg->word1 & 0xff) << 8; cmd_sub_mod |= (msg->word1 & 0xff00) >> 8; return cmd_sub_mod; } /* Initialize the FSP mailbox driver */ extern void fsp_init(void); /* Perform the OPL sequence */ extern void fsp_opl(void); /* Check if system has an FSP */ extern bool fsp_present(void); /* Allocate and populate an fsp_msg structure * * WARNING: Do _NOT_ use free() on an fsp_msg, use fsp_freemsg() * instead as we will eventually use pre-allocated message pools */ extern struct fsp_msg *fsp_allocmsg(bool alloc_response) __warn_unused_result; extern struct fsp_msg *fsp_mkmsg(u32 cmd_sub_mod, u8 add_words, ...) __warn_unused_result; /* Populate a pre-allocated msg */ extern void fsp_fillmsg(struct fsp_msg *msg, u32 cmd_sub_mod, u8 add_words, ...); /* Free a message * * WARNING: This will also free an attached response if any */ extern void fsp_freemsg(struct fsp_msg *msg); /* Free a message and not the attached reply */ extern void __fsp_freemsg(struct fsp_msg *msg); /* Cancel a message from the msg queue * * WARNING: * This is intended for use only in the FSP r/r scenario. 
* * This will also free an attached response if any */ extern void fsp_cancelmsg(struct fsp_msg *msg); /* Enqueue it in the appropriate FSP queue * * NOTE: This supports being called with the FSP lock already * held. This is the only function in this module that does so * and is meant to be used that way for sending serial "poke" * commands to the FSP. */ extern int fsp_queue_msg(struct fsp_msg *msg, void (*comp)(struct fsp_msg *msg)) __warn_unused_result; /* Synchronously send a command. If there's a response, the status is * returned as a positive number. A negative result means an error * sending the message. * * If autofree is set, the message and the reply (if any) are freed * after extracting the status. If not set, you are responsible for * freeing both the message and an eventual response * * NOTE: This will call fsp_queue_msg(msg, NULL), hence clearing the * completion field of the message. No synchronous message is expected * to utilize asynchronous completions. */ extern int fsp_sync_msg(struct fsp_msg *msg, bool autofree); /* Handle FSP interrupts */ extern void fsp_interrupt(void); /* An FSP client is interested in messages for a given class */ struct fsp_client { /* Return true to "own" the message (you can free it) */ bool (*message)(u32 cmd_sub_mod, struct fsp_msg *msg); struct list_node link; }; /* WARNING: Command class FSP_MCLASS_IPL is aliased to FSP_MCLASS_SERVICE, * thus a client of one will get both types of messages. * * WARNING: Client register/unregister takes *NO* lock. These are expected * to be called early at boot before CPUs are brought up and before * fsp_poll() can race. The client callback is called with no lock held. 
*/ extern void fsp_register_client(struct fsp_client *client, u8 msgclass); extern void fsp_unregister_client(struct fsp_client *client, u8 msgclass); /* FSP TCE map/unmap functions */ extern void fsp_tce_map(u32 offset, void *addr, u32 size); extern void fsp_tce_unmap(u32 offset, u32 size); extern void *fsp_inbound_buf_from_tce(u32 tce_token); /* Data fetch helper */ extern uint32_t fsp_adjust_lid_side(uint32_t lid_no); extern int fsp_fetch_data(uint8_t flags, uint16_t id, uint32_t sub_id, uint32_t offset, void *buffer, size_t *length); extern int fsp_fetch_data_queue(uint8_t flags, uint16_t id, uint32_t sub_id, uint32_t offset, void *buffer, size_t *length, void (*comp)(struct fsp_msg *msg)) __warn_unused_result; extern bool fsp_load_resource(enum resource_id id, uint32_t subid, void *buf, size_t *size); /* FSP console stuff */ extern void fsp_console_preinit(void); extern void fsp_console_init(void); extern void fsp_console_add_nodes(void); extern void fsp_console_select_stdout(void); extern void fsp_console_reset(void); extern void fsp_console_poll(void *); /* Mark FSP lock */ extern void fsp_used_by_console(void); /* NVRAM */ extern int fsp_nvram_info(uint32_t *total_size); extern int fsp_nvram_start_read(void *dst, uint32_t src, uint32_t len); extern int fsp_nvram_write(uint32_t offset, void *src, uint32_t size); extern void fsp_nvram_wait_open(void); /* RTC */ extern void fsp_rtc_init(void); /* ELOG */ extern void fsp_elog_read_init(void); extern void fsp_elog_write_init(void); /* Code update */ extern void fsp_code_update_init(void); extern void fsp_code_update_wait_vpd(bool is_boot); /* Dump */ extern void fsp_dump_init(void); extern void fsp_fips_dump_notify(uint32_t dump_id, uint32_t dump_len); /* Attention Handler */ extern void fsp_attn_init(void); /* MDST table */ extern void fsp_mdst_table_init(void); /* This can be set by the fsp_opal_update_flash so that it can * get called just before we reboot/shutdown the machine.
*/ extern int (*fsp_flash_term_hook)(void); /* Surveillance */ extern void fsp_init_surveillance(void); extern void fsp_surv_query(void); /* Reset/Reload */ extern void fsp_reinit_fsp(void); extern void fsp_trigger_reset(void); /* FSP memory errors */ extern void fsp_memory_err_init(void); /* Sensor */ extern void fsp_init_sensor(void); /* Diagnostic */ extern void fsp_init_diag(void); /* LED */ extern void fsp_led_init(void); /* EPOW */ extern void fsp_epow_init(void); /* DPO */ extern void fsp_dpo_init(void); #endif /* __FSP_H */
cyrilbur-ibm/skiboot
include/fsp.h
C
apache-2.0
27,824
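The fsp.h header above documents the rxxyyzz command layout (class xx, sub-command yy, mod zz), splits each message into `word0`/`word1` inside `struct fsp_msg`, and rebuilds the command in `fsp_msg_cmd()`. A minimal self-contained sketch of that pack/unpack round trip follows; the helper names `fsp_pack_cmd`/`fsp_unpack_cmd` are hypothetical illustrations, not part of the header, and the packing side is an assumption consistent with the struct comments:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: split a rxxyyzz command into the two mailbox
 * words described in fsp.h.  The class byte (xx) sits in word0 and
 * word1 carries mod << 8 | sub.  The r (response) bit is a driver-side
 * hint and is not carried in the mailbox words. */
static void fsp_pack_cmd(uint32_t cmd_sub_mod, uint32_t *word0, uint32_t *word1)
{
	*word0 = (cmd_sub_mod >> 16) & 0xff;		/* class (xx) */
	*word1 = ((cmd_sub_mod & 0xff) << 8) |		/* mod   (zz) */
		 ((cmd_sub_mod >> 8) & 0xff);		/* sub   (yy) */
}

/* Mirrors the extraction in fsp_msg_cmd() from the header:
 * rebuilds xxyyzz with the r bit dropped. */
static uint32_t fsp_unpack_cmd(uint32_t word0, uint32_t word1)
{
	uint32_t cmd_sub_mod;

	cmd_sub_mod  = (word0 & 0xff) << 16;
	cmd_sub_mod |= (word1 & 0xff) << 8;
	cmd_sub_mod |= (word1 & 0xff00) >> 8;
	return cmd_sub_mod;
}
```

Round-tripping FSP_CMD_REBOOT (0x1ce4e00) gives back 0x0ce4e00: class 0xce, sub-command 0x4e, mod 0x00, with the response bit stripped, which is why a command table such as the one the header's WARNING comment suggests would key on xxyyzz only.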
/* * OeScript http://www.oescript.net * Copyright 2012 Ed Sweeney, all rights reserved. */ #ifdef __cplusplus extern "C" { #endif #ifndef OENETCONN_H #define OENETCONN_H /** * API for implementing reading and writing with the async OeNet * API */ #include "config.h" #include "oec_values.h" #define T OeNetConn struct netconn_T; typedef void OENETCONN_free_conn_obj(void *); typedef int OENETCONN_get_pos (struct netconn_T*, char *, size_t); typedef size_t OENETCONN_available (struct netconn_T*); typedef size_t OENETCONN_read (struct netconn_T*, void *, size_t); typedef int OENETCONN_write (struct netconn_T*, char *, size_t); typedef void *OENETCONN_peek (struct netconn_T*, size_t, size_t); typedef void OENETCONN_clear_timeout(void *); struct netconn_T { OENETCONN_free_conn_obj *free_conn_obj_fun; OENETCONN_get_pos *get_pos_fun; OENETCONN_available *available_fun; OENETCONN_read *read_fun; OENETCONN_write *write_fun; OENETCONN_peek *peek_fun; OENETCONN_clear_timeout *clear_timeout_fun; void *conn_obj; void *timeout; }; typedef struct netconn_T *T; extern T OeNetConn_new(OENETCONN_free_conn_obj *free_conn_obj_fun, OENETCONN_get_pos *get_pos_fun, OENETCONN_available *available_fun, OENETCONN_read *read_fun, OENETCONN_write *write_fun, OENETCONN_peek *peek_fun, OENETCONN_clear_timeout *clear_timeout_fun); extern void OeNetConn_free( T* ); //impl package internal apis (do not use) //extern void oenetconn_set_fd( T, int ); //extern int oenetconn_get_fd( T ); extern void oenetconn_set_conn_obj( T, void * ); extern void *oenetconn_get_conn_obj( T ); extern void oenetconn_set_timeout(T, void *); extern void *oenetconn_get_timeout(T); extern void OeNetConn_free_conn_obj(T); //the user's network bytes api extern void *OeNetConn_peek (T, size_t, size_t ); extern int OeNetConn_get_pos (T, char *, size_t ); extern size_t OeNetConn_read (T, void *, size_t ); extern int OeNetConn_write (T, char *, size_t ); extern size_t OeNetConn_available(T ); extern void 
OeNetConn_clear_timeout(T); #undef T #endif #ifdef __cplusplus } #endif
navicore/oescript_c
corelib/OeNetConn.h
C
apache-2.0
2,438
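The `struct netconn_T` above is a hand-rolled vtable: every operation (`read_fun`, `available_fun`, and so on) is a function pointer supplied at construction time via `OeNetConn_new`, and `conn_obj` carries the backend's opaque state. Since `OeNetConn_new` is only declared in the header, here is a cut-down, self-contained sketch of the same dispatch pattern with a hypothetical in-memory backend (all names in this sketch are illustrative, not OeScript APIs):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Cut-down analogue of struct netconn_T: operations as function pointers. */
struct conn {
	size_t (*available)(struct conn *);
	size_t (*read)(struct conn *, void *, size_t);
	void *conn_obj;			/* backend state, opaque to callers */
};

/* A stub backend that "reads" from an in-memory buffer. */
struct mem_backing { const char *data; size_t len, pos; };

static size_t mem_available(struct conn *c)
{
	struct mem_backing *m = c->conn_obj;
	return m->len - m->pos;
}

static size_t mem_read(struct conn *c, void *dst, size_t n)
{
	struct mem_backing *m = c->conn_obj;
	size_t avail = m->len - m->pos;
	if (n > avail)
		n = avail;
	memcpy(dst, m->data + m->pos, n);
	m->pos += n;
	return n;
}

/* Callers dispatch through the pointers, never the backend directly. */
static size_t conn_read(struct conn *c, void *dst, size_t n)
{
	return c->read(c, dst, n);
}
```

Swapping in a socket-backed implementation then only means providing different function pointers and a different `conn_obj`, which is exactly the substitution `OeNetConn_new`'s parameter list enables.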
/* * Solo - A small and beautiful blogging system written in Java. * Copyright (c) 2010-present, b3log.org * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU Affero General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Affero General Public License for more details. * * You should have received a copy of the GNU Affero General Public License * along with this program. If not, see <https://www.gnu.org/licenses/>. */ package org.b3log.solo; import org.b3log.latke.Latkes; import org.b3log.latke.servlet.HttpMethod; import javax.servlet.*; import javax.servlet.http.*; import java.io.BufferedReader; import java.security.Principal; import java.util.*; /** * Mock HTTP servlet request. * * @author <a href="http://88250.b3log.org">Liang Ding</a> * @version 1.0.0.3, Mar 1, 2019 */ public class MockHttpServletRequest implements HttpServletRequest { /** * Header. */ private Map<String, String> headers = new HashMap<>(); /** * Request URI. */ private String requestURI = "/"; /** * Context path. */ private String contextPath = ""; /** * Attributes. */ private Map<String, Object> attributes = new HashMap<>(); @Override public String getAuthType() { throw new UnsupportedOperationException("Not supported yet."); } private Cookie[] cookies; public void setCookies(final Cookie[] cookies) { this.cookies = cookies; } @Override public Cookie[] getCookies() { return cookies; } @Override public long getDateHeader(final String name) { throw new UnsupportedOperationException("Not supported yet."); } /** * Sets header with the specified name and value. 
* * @param name the specified name * @param value the specified value */ public void setHeader(final String name, final String value) { headers.put(name, value); } @Override public String getHeader(final String name) { return headers.get(name); } @Override public Enumeration getHeaders(final String name) { throw new UnsupportedOperationException("Not supported yet."); } @Override public Enumeration getHeaderNames() { return new Enumeration() { @Override public boolean hasMoreElements() { return false; } @Override public Object nextElement() { return null; } }; } @Override public int getIntHeader(final String name) { throw new UnsupportedOperationException("Not supported yet."); } private String method = HttpMethod.GET.toString(); public void setMethod(final String method) { this.method = method; } @Override public String getMethod() { return method; } @Override public String getPathInfo() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getPathTranslated() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getContextPath() { return contextPath; } @Override public String getQueryString() { return ""; } @Override public String getRemoteUser() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isUserInRole(final String role) { throw new UnsupportedOperationException("Not supported yet."); } @Override public Principal getUserPrincipal() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getRequestedSessionId() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getRequestURI() { return requestURI; } /** * Sets request URI with the specified request URI. 
* * @param requestURI the specified request URI */ public void setRequestURI(final String requestURI) { this.requestURI = requestURI; } @Override public StringBuffer getRequestURL() { return new StringBuffer(Latkes.getServePath() + requestURI); } @Override public String getServletPath() { throw new UnsupportedOperationException("Not supported yet."); } @Override public HttpSession getSession(final boolean create) { return null; } @Override public HttpSession getSession() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isRequestedSessionIdValid() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isRequestedSessionIdFromCookie() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isRequestedSessionIdFromURL() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isRequestedSessionIdFromUrl() { throw new UnsupportedOperationException("Not supported yet."); } @Override public Object getAttribute(final String name) { return attributes.get(name); } @Override public Enumeration getAttributeNames() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getCharacterEncoding() { return "mock character encoding"; } @Override public void setCharacterEncoding(final String env) { } @Override public int getContentLength() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getContentType() { return "mock content type"; } @Override public ServletInputStream getInputStream() { throw new UnsupportedOperationException("Not supported yet."); } private Map<String, String> param = new HashMap<>(); public void putParameter(final String name, final String value) { param.put(name, value); } @Override public String getParameter(final String name) { return param.get(name); } @Override public Enumeration getParameterNames() { throw new 
UnsupportedOperationException("Not supported yet."); } @Override public String[] getParameterValues(final String name) { throw new UnsupportedOperationException("Not supported yet."); } @Override public Map getParameterMap() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getProtocol() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getScheme() { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getServerName() { throw new UnsupportedOperationException("Not supported yet."); } @Override public int getServerPort() { throw new UnsupportedOperationException("Not supported yet."); } private BufferedReader reader; public void setReader(BufferedReader reader) { this.reader = reader; } @Override public BufferedReader getReader() { return reader; } private String remoteAddr; public void setRemoteAddr(final String remoteAddr) { this.remoteAddr = remoteAddr; } @Override public String getRemoteAddr() { return remoteAddr; } @Override public String getRemoteHost() { return "mock remote host"; } @Override public void setAttribute(final String name, final Object o) { attributes.put(name, o); } @Override public void removeAttribute(final String name) { throw new UnsupportedOperationException("Not supported yet."); } @Override public Locale getLocale() { throw new UnsupportedOperationException("Not supported yet."); } @Override public Enumeration getLocales() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isSecure() { throw new UnsupportedOperationException("Not supported yet."); } @Override public RequestDispatcher getRequestDispatcher(final String path) { throw new UnsupportedOperationException("Not supported yet."); } @Override public String getRealPath(final String path) { throw new UnsupportedOperationException("Not supported yet."); } @Override public int getRemotePort() { return 0; } @Override public String 
getLocalName() { return "mock local name"; } @Override public String getLocalAddr() { return "mock local addr"; } @Override public int getLocalPort() { return 0; } @Override public String changeSessionId() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean authenticate(final HttpServletResponse response) { throw new UnsupportedOperationException("Not supported yet."); } @Override public void login(final String username, final String password) { throw new UnsupportedOperationException("Not supported yet."); } @Override public void logout() { throw new UnsupportedOperationException("Not supported yet."); } @Override public Collection<Part> getParts() { throw new UnsupportedOperationException("Not supported yet."); } @Override public Part getPart(final String name) { throw new UnsupportedOperationException("Not supported yet."); } @Override public <T extends HttpUpgradeHandler> T upgrade(final Class<T> handlerClass) { throw new UnsupportedOperationException("Not supported yet."); } @Override public long getContentLengthLong() { throw new UnsupportedOperationException("Not supported yet."); } @Override public ServletContext getServletContext() { throw new UnsupportedOperationException("Not supported yet."); } @Override public AsyncContext startAsync() throws IllegalStateException { throw new UnsupportedOperationException("Not supported yet."); } @Override public AsyncContext startAsync(final ServletRequest servletRequest, final ServletResponse servletResponse) throws IllegalStateException { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isAsyncStarted() { throw new UnsupportedOperationException("Not supported yet."); } @Override public boolean isAsyncSupported() { throw new UnsupportedOperationException("Not supported yet."); } @Override public AsyncContext getAsyncContext() { throw new UnsupportedOperationException("Not supported yet."); } @Override public DispatcherType 
getDispatcherType() { throw new UnsupportedOperationException("Not supported yet."); } }
b3log/b3log-solo
src/test/java/org/b3log/solo/MockHttpServletRequest.java
Java
apache-2.0
11,583
package manager import ( v3 "github.com/rancher/rancher/pkg/generated/norman/management.cattle.io/v3" helmlib "github.com/rancher/rancher/pkg/helm" "github.com/sirupsen/logrus" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/client-go/tools/cache" ) func (m *Manager) updateClusterCatalogError(clusterCatalog *v3.ClusterCatalog, err error) (runtime.Object, error) { setRefreshedError(&clusterCatalog.Catalog, err) m.clusterCatalogClient.Update(clusterCatalog) return nil, err } func (m *Manager) ClusterCatalogSync(key string, obj *v3.ClusterCatalog) (runtime.Object, error) { ns, name, err := cache.SplitMetaNamespaceKey(key) if err != nil { return nil, err } if obj == nil { return nil, m.deleteTemplates(name, ns) } // always get a fresh catalog from etcd clusterCatalog, err := m.clusterCatalogClient.GetNamespaced(ns, name, metav1.GetOptions{}) if err != nil { return nil, err } commit, helm, err := helmlib.NewForceUpdate(&clusterCatalog.Catalog, m.SecretLister) if err != nil { return m.updateClusterCatalogError(clusterCatalog, err) } logrus.Debugf("Chart hash comparison for cluster catalog %v: new -- %v --- current -- %v", clusterCatalog.Name, commit, clusterCatalog.Catalog.Status.Commit) if isUpToDate(commit, &clusterCatalog.Catalog) { if setRefreshed(&clusterCatalog.Catalog) { m.clusterCatalogClient.Update(clusterCatalog) } return nil, nil } cmt := &CatalogInfo{ catalog: &clusterCatalog.Catalog, clusterCatalog: clusterCatalog, } logrus.Infof("Updating cluster catalog %s", clusterCatalog.Name) return nil, m.traverseAndUpdate(helm, commit, cmt) }
rancher/rancher
pkg/catalog/manager/cluster_catalog_sync.go
GO
apache-2.0
1,674
/* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {BaseOption} from '../base-option'; import {UIOption} from '../ui-option'; import { AxisLabelType, AxisType, BarMarkType, CHART_STRING_DELIMITER, FontSize, LineMarkType, Orient, UIOrient } from '../define/common'; import {Axis} from '../define/axis'; import * as _ from 'lodash'; import {PivotTableInfo} from '../../base-chart'; import {OptionGenerator} from '../util/option-generator'; /** * Common option converter */ export class CommonOptionConverter { /*-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= | Public Method |-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ /** * Switch the axes between horizontal/vertical mode * @param chartOption * @param uiOption * @param axisType * @param fieldInfo * @returns {BaseOption} */ public static convertCommonAxis(chartOption: BaseOption, uiOption: UIOption, axisType: AxisType, fieldInfo: PivotTableInfo): BaseOption { // return if type is not set if (_.isUndefined(uiOption['align'])) return chartOption; const type = uiOption['align']; const xAxis: Axis[] = chartOption.xAxis; const yAxis: Axis[] = chartOption.yAxis; // Y-axis name (this path runs before the x-axis side picks up the y-axis name, so set it separately) chartOption.yAxis[0].name = uiOption.yAxis.customName ?
uiOption.yAxis.customName : _.join(fieldInfo.aggs, CHART_STRING_DELIMITER); chartOption.yAxis[0].axisName = _.join(fieldInfo.aggs, CHART_STRING_DELIMITER); // vertical mode if (_.eq(type, UIOrient.VERTICAL)) { // switch to vertical mode: change the xAxis values if (_.eq(xAxis[0].type, AxisType.VALUE) && _.eq(AxisType.X, axisType)) this.convertXAxisRotate(chartOption, type, yAxis, xAxis); // switch to vertical mode: change the yAxis values if (_.eq(xAxis[0].type, AxisType.VALUE) && _.eq(AxisType.Y, axisType)) this.convertYAxisRotate(chartOption, type, yAxis); // horizontal mode } else { // switch to horizontal mode: change the xAxis values if (_.eq(xAxis[0].type, AxisType.CATEGORY) && _.eq(AxisType.X, axisType)) this.convertXAxisRotate(chartOption, type, xAxis, yAxis); // switch to horizontal mode: change the yAxis values if (_.eq(xAxis[0].type, AxisType.CATEGORY) && _.eq(AxisType.Y, axisType)) this.convertYAxisRotate(chartOption, type, xAxis); } // move the axis names according to horizontal/vertical orientation if (_.eq(AxisType.X, axisType)) this.convertXAxisRotateName(chartOption, uiOption, fieldInfo); if (_.eq(AxisType.Y, axisType)) this.convertYAxisRotateName(chartOption, uiOption, fieldInfo); return chartOption; } /** * Move the x-axis name according to horizontal/vertical orientation * @param chartOption * @param uiOption * @param fieldInfo */ public static convertXAxisRotateName(chartOption: BaseOption, uiOption: UIOption, fieldInfo: PivotTableInfo): BaseOption { // return if type is not set if (!uiOption || _.isUndefined(uiOption['align'])) return chartOption; const axisList = _.compact(_.concat(uiOption.xAxis, uiOption.yAxis, uiOption.secondaryAxis)); const type = uiOption['align']; const yAxis = axisList.filter((item) => { return _.eq(item.mode, AxisLabelType.COLUMN) || _.eq(item.mode, AxisLabelType.SUBCOLUMN); }); // the category/value positions were swapped above, so relocate according to the changed type const copiedOption = _.cloneDeep(chartOption); let yAxisType: AxisType; // no change in the default case (vertical mode, x-axis category, y-axis value) if (_.eq(type, UIOrient.VERTICAL) && copiedOption.yAxis[0].type === AxisType.VALUE && copiedOption.xAxis[0].type === AxisType.CATEGORY) return chartOption; // vertical mode if (_.eq(type,
UIOrient.VERTICAL)) { // y축이 value이면 => y축 값을 x축으로 넣기 yAxisType = AxisType.VALUE; // 가로모드일때 } else { // y축이 category이면 => y축 값을 x축으로 넣기 yAxisType = AxisType.CATEGORY; } // Y축 명칭 const yName = uiOption.yAxis.customName ? uiOption.yAxis.customName : _.join(fieldInfo.aggs, CHART_STRING_DELIMITER); const yAxisName = _.join(fieldInfo.aggs, CHART_STRING_DELIMITER); // y축이 yAxisType이면 => y축 값을 x축으로 넣기 copiedOption.yAxis.forEach((axis, axisIndex) => { chartOption.xAxis.forEach((item, index) => { if (axis.type === yAxisType && copiedOption.yAxis[index].axisName) { item.axisName = yAxisName; // customName이 없을때 if (!yAxis[axisIndex].customName && copiedOption.yAxis[index].name) { item.name = yName; } } }); }) return chartOption; } /** * y축 가로/세로에 따라 축명 위치변경 * @param chartOption * @param uiOption * @param fieldInfo */ public static convertYAxisRotateName(chartOption: BaseOption, uiOption: UIOption, fieldInfo: PivotTableInfo): BaseOption { const axisList = _.compact(_.concat(uiOption.xAxis, uiOption.yAxis, uiOption.secondaryAxis)); const type = uiOption['align']; // type이 없는경우 return if (_.isUndefined(type)) return chartOption; const xAxis = axisList.filter((item) => { return _.eq(item.mode, AxisLabelType.ROW) || _.eq(item.mode, AxisLabelType.SUBROW); }); // 앞에서 category / value위치를 변경하였으므로 변경된 type에 따라서 위치변경 const copiedOption = _.cloneDeep(chartOption); let xAxisType: AxisType; // default일때(세로모드, x축 category, y축 value)에는 변경하지않음 if (_.eq(type, UIOrient.VERTICAL) && copiedOption.yAxis[0].type === AxisType.VALUE && copiedOption.xAxis[0].type === AxisType.CATEGORY) return chartOption; // 세로모드일때 if (_.eq(type, UIOrient.VERTICAL)) { // x축이 category이면 => x축값을 y축으로 넣기 xAxisType = AxisType.CATEGORY; // 가로모드일때 } else { // x축이 value이면 => x축값을 y축으로 넣기 xAxisType = AxisType.VALUE; } // X축 명칭 const xName = uiOption.xAxis.customName ? 
uiOption.xAxis.customName : _.join(fieldInfo.cols, CHART_STRING_DELIMITER); const xAxisName = _.join(fieldInfo.cols, CHART_STRING_DELIMITER); // x축이 xAxisType이면 => x축값을 y축으로 넣기 copiedOption.xAxis.forEach((axis, axisIndex) => { chartOption.yAxis.forEach((item, index) => { if (axis.type === xAxisType && copiedOption.xAxis[index].axisName) { item.axisName = xAxisName; // customName이 없을때 if (!xAxis[axisIndex].customName && copiedOption.xAxis[index].name) { item.name = xName; } } }); }) return chartOption; } /** * x축 rotate * @param chartOption * @param orient * @param categoryAxis * @param valueAxis * @returns {BaseOption} */ public static convertXAxisRotate(chartOption: BaseOption, orient: UIOrient, categoryAxis, valueAxis): BaseOption { // orient 값이 없는경우 return if (_.isUndefined(orient)) return chartOption; // 수치를 표현하던 축은 카테고리를 표현하는 축으로 변경 valueAxis.map((axis, idx) => { if (_.eq(idx, 0)) { axis.type = AxisType.CATEGORY; axis.data = _.cloneDeep(categoryAxis[0].data); } }); if (_.eq(orient, Orient.VERTICAL)) chartOption.xAxis = [valueAxis[0]]; if (!_.eq(orient, Orient.VERTICAL)) chartOption.yAxis = [valueAxis[0]]; // 가로 모드일 경우에는 단위라벨 순서를 역으로 정렬 if (_.eq(orient, Orient.VERTICAL)) delete chartOption.yAxis[0].inverse; else chartOption.yAxis[0].inverse = true; return chartOption; } /** * y축 rotate * @param chartOption * @param orient * @param categoryAxis * @returns {BaseOption} */ public static convertYAxisRotate(chartOption: BaseOption, orient: UIOrient, categoryAxis): BaseOption { // orient 값이 없는경우 return if (_.isUndefined(orient)) return chartOption; const valueAxisType = AxisType.VALUE; const subAxis: Axis[] = []; // 카테로리를 표현하던 축은 수치를 표현하는 축으로 변경 categoryAxis.map((axis) => { axis.type = valueAxisType; delete axis.data; }); if (!_.eq(orient, Orient.VERTICAL)) chartOption.xAxis = _.concat(categoryAxis, subAxis); if (_.eq(orient, Orient.VERTICAL)) chartOption.yAxis = _.concat(categoryAxis, subAxis); // 가로 모드일 경우에는 단위라벨 순서를 역으로 정렬 if (_.eq(orient, Orient.VERTICAL)) delete 
chartOption.yAxis[0].inverse; else chartOption.yAxis[0].inverse = true; return chartOption; } /** * 공통옵션의 시리즈 데이터 설정 * @param chartOption * @param uiOption * @param fieldInfo */ public static convertCommonSeries(chartOption: BaseOption, uiOption: UIOption, fieldInfo: PivotTableInfo): BaseOption { if (!uiOption || !uiOption['mark']) return chartOption; const type = uiOption['mark']; // TODO 고급분석은 나중에 // if (this.isAnalysisPredictionLineEmpty()) { const series = chartOption.series; series.map((obj) => { // area 타입 설정 obj.areaStyle = _.eq(type, LineMarkType.AREA) ? OptionGenerator.AreaStyle.customAreaStyle(0.5) : undefined; // stack 설정 let stackName: string = ''; // 모드에 따라 스택명, 수치값 라벨 위치 변경 if (_.eq(uiOption['mark'], BarMarkType.STACKED)) { // 시리즈명을 delimiter로 분리, 현재 시리즈의 측정값 필드명 추출 stackName = _.last(_.split(obj.name, CHART_STRING_DELIMITER)); obj.stack = _.isEmpty(fieldInfo.rows) ? 'measureStack' : stackName; } else { delete obj.stack; } }); // } else { // this.chartOption = optCon.LineSeries.exceptPredictionLineViewType(this.analysis, this.chartOption, type); // } return chartOption; } /** * 공통옵션의 폰트사이즈 설정 * @param chartOption * @param uiOption */ public static convertCommonFont(chartOption: BaseOption, uiOption: UIOption): BaseOption { if (!uiOption.fontSize) return chartOption; const uiFontSize = uiOption.fontSize; let fontSize: number; switch (uiFontSize) { case FontSize.NORMAL: fontSize = 13; break; case FontSize.SMALL: fontSize = 11; break; case FontSize.LARGE: fontSize = 15; break; } // x축 폰트 사이즈 설정 _.each(chartOption.xAxis, (item) => { item.axisLabel.fontSize = fontSize; item.nameTextStyle.fontSize = fontSize; }); // y축 폰트 사이즈 설정 _.each(chartOption.yAxis, (item) => { item.axisLabel.fontSize = fontSize; item.nameTextStyle.fontSize = fontSize; }); // 범례 폰트 사이즈 설정 if (chartOption.legend) chartOption.legend.textStyle.fontSize = fontSize; // 데이터 라벨의 폰트 사이즈 설정 _.each(chartOption.series, (item) => { if (item.label) { // rich가 있는경우 if (item.label.normal.rich && 
item.label.normal.rich['align']) { item.label.normal.rich['align']['fontSize'] = fontSize; // rich가 없는경우 } else { item.label.normal.fontSize = fontSize; } } }); // visualMap 폰트 사이즈 설정 if (chartOption.visualMap) { if (!chartOption.visualMap.textStyle) chartOption.visualMap.textStyle = {}; chartOption.visualMap.textStyle.fontSize = fontSize; } // large인 경우 dataZoom을 하위로 이동시킨다 if (_.eq(FontSize.LARGE, uiFontSize) && (!uiOption['align'] || (uiOption['align'] && _.eq(UIOrient.VERTICAL, uiOption['align'])))) { if (chartOption.dataZoom && chartOption.dataZoom.length > 0) chartOption.dataZoom[0].bottom = chartOption.dataZoom[0].bottom - 5; } return chartOption; } /*-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= | Private Method |-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ }
metatron-app/metatron-discovery
discovery-frontend/src/app/common/component/chart/option/converter/common-option-converter.ts
TypeScript
apache-2.0
12,826
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==============================================================================*/ // See docs in ../ops/data_flow_ops.cc. #include "tensorflow/core/framework/bounds_check.h" #include "tensorflow/core/framework/op_kernel.h" #include "tensorflow/core/framework/register_types.h" #include "tensorflow/core/framework/tensor.h" #include "tensorflow/core/lib/core/threadpool.h" #if GOOGLE_CUDA || TENSORFLOW_USE_ROCM #include "tensorflow/core/kernels/gpu_device_array.h" #endif // GOOGLE_CUDA || TENSORFLOW_USE_ROCM namespace tensorflow { typedef Eigen::ThreadPoolDevice CPUDevice; #if GOOGLE_CUDA || TENSORFLOW_USE_ROCM typedef Eigen::GpuDevice GPUDevice; #endif // GOOGLE_CUDA || TENSORFLOW_USE_ROCM template <class T> class DynamicStitchOpImplBase : public OpKernel { public: explicit DynamicStitchOpImplBase(OpKernelConstruction* c, const string& op_name) : OpKernel(c) { // Compute expected input signature const DataType dt = DataTypeToEnum<T>::v(); const int n = c->num_inputs() / 2; DataTypeVector expected; for (int i = 0; i < n; i++) { expected.push_back(DT_INT32); } for (int i = 0; i < n; i++) { expected.push_back(dt); } OP_REQUIRES_OK(c, c->MatchSignature(expected, {dt})); OP_REQUIRES(c, c->num_inputs() > 0, errors::InvalidArgument(op_name + ": Must have some inputs")); OP_REQUIRES(c, c->num_inputs() % 2 == 0, errors::InvalidArgument( op_name + ": Must have even number of arguments")); } protected: // 
Check if data0.shape[indices0.dims():] == data1.shape[indices1.dims():] static bool SameExtraShape(const Tensor& data0, const Tensor& indices0, const Tensor& data1, const Tensor& indices1) { const int extra0 = data0.dims() - indices0.dims(); const int extra1 = data1.dims() - indices1.dims(); if (extra0 != extra1) return false; for (int i = 0; i < extra0; i++) { if (data0.dim_size(indices0.dims() + i) != data1.dim_size(indices1.dims() + i)) { return false; } } return true; } void CheckArgsAndAllocateResult(OpKernelContext* c, OpInputList* indices_inputs, OpInputList* data_inputs, int* first_dim_size, int* data_elements_size, Tensor** result_ptr) { // Find maximum index in the indices vectors OP_REQUIRES_OK(c, c->input_list("indices", indices_inputs)); int32 max_index = -1; if (data_elements_size) { *data_elements_size = 0; } for (const Tensor& indices : *indices_inputs) { if (indices.NumElements() > 0) { Eigen::Tensor<int32, 0, Eigen::RowMajor> m = indices.flat<int32>().maximum(); max_index = std::max(m(), max_index); } if (data_elements_size) { *data_elements_size += indices.NumElements(); } } *first_dim_size = max_index + 1; // Validate that data[i].shape = indices[i].shape + constant OP_REQUIRES_OK(c, c->input_list("data", data_inputs)); const Tensor& data0 = (*data_inputs)[0]; const Tensor& indices0 = (*indices_inputs)[0]; for (int input_num = 0; input_num < indices_inputs->size(); input_num++) { const Tensor& indices = (*indices_inputs)[input_num]; const Tensor& data = (*data_inputs)[input_num]; OP_REQUIRES( c, TensorShapeUtils::StartsWith(data.shape(), indices.shape()), errors::InvalidArgument("data[", input_num, "].shape = ", data.shape().DebugString(), " does not start with indices[", input_num, "].shape = ", indices.shape().DebugString())); OP_REQUIRES( c, input_num == 0 || SameExtraShape(data0, indices0, data, indices), errors::InvalidArgument( "Need data[0].shape[", indices0.dims(), ":] = data[", input_num, "].shape[", indices.dims(), ":], got 
data[0].shape = ", data0.shape().DebugString(), ", data[", input_num, "].shape = ", data.shape().DebugString(), ", indices[0].shape = ", indices0.shape().DebugString(), ", indices[", input_num, "].shape = ", indices.shape().DebugString())); } // Allocate result tensor of shape // [*first_dim_size] + data.shape[indices.dims:] TensorShape result_shape; result_shape.AddDim(*first_dim_size); for (int d = indices0.dims(); d < data0.dims(); d++) { result_shape.AddDim(data0.dim_size(d)); } OP_REQUIRES_OK(c, c->allocate_output(0, result_shape, result_ptr)); } }; #if GOOGLE_CUDA || TENSORFLOW_USE_ROCM template <typename T> void DynamicStitchGPUImpl(const Eigen::GpuDevice& gpu_device, const int32 slice_size, const int32 first_dim_size, const GpuDeviceArrayStruct<int>& input_indices, const GpuDeviceArrayStruct<const T*>& input_ptrs, T* output); #define REGISTER_GPU(T) \ extern template void DynamicStitchGPUImpl( \ const Eigen::GpuDevice& gpu_device, const int32 slice_size, \ const int32 first_dim_size, \ const GpuDeviceArrayStruct<int32>& input_indices, \ const GpuDeviceArrayStruct<const T*>& input_ptrs, T* output); TF_CALL_int32(REGISTER_GPU); TF_CALL_int64(REGISTER_GPU); TF_CALL_GPU_NUMBER_TYPES(REGISTER_GPU); TF_CALL_COMPLEX_TYPES(REGISTER_GPU); #undef REGISTER_GPU template <class T> class DynamicStitchOpGPU : public DynamicStitchOpImplBase<T> { public: explicit DynamicStitchOpGPU(OpKernelConstruction* c) : DynamicStitchOpImplBase<T>(c, "DynamicStitchOp") {} void Compute(OpKernelContext* c) override { OpInputList indices_inputs; OpInputList data_inputs; int first_dim_size; int data_elements_size; Tensor* merged = nullptr; this->CheckArgsAndAllocateResult(c, &indices_inputs, &data_inputs, &first_dim_size, &data_elements_size, &merged); if (!c->status().ok()) { // Avoid segmentation faults if merged cannot be allocated and an error is // passed back in the context. 
return; } // TODO(jeff): Currently we leave uninitialized any portions of // merged that aren't covered by an index in indices. What should we do? if (first_dim_size > 0) { // because the collision requirements, we have to deal with // collision first before send data to gpu kernel. // TODO(ekelsen): Instead of doing a serial scan on the CPU to pick the // last of duplicated indices, it could instead be done of the GPU // implicitly using atomics to make sure the last index is the final // write. const int slice_size = merged->flat_outer_dims<T>().dimension(1); GpuDeviceArrayOnHost<int32> indices_flat(c, first_dim_size); GpuDeviceArrayOnHost<const T*> data_flat(c, data_elements_size); OP_REQUIRES_OK(c, indices_flat.Init()); OP_REQUIRES_OK(c, data_flat.Init()); // initialize the indices_flat (-1 represents missing indices) for (int i = 0; i < first_dim_size; ++i) { indices_flat.Set(i, -1); } // data_flat index int32 idx = 0; // sum of indices_inputs[i].NumElements() for compute indices_flat value. int32 base_size = 0; for (int i = 0; i < indices_inputs.size(); ++i) { auto indices_vec = indices_inputs[i].flat<int32>(); auto data_ptr_base = data_inputs[i].template flat<T>().data(); for (int j = 0; j < indices_vec.size(); ++j) { // indices_flat's indices represent the indices of output. // indices_flat's values represent the indices of input_data where the // data located. 
indices_flat.Set(indices_vec(j), base_size + j); data_flat.Set( idx, const_cast<T*>(reinterpret_cast<const T*>(data_ptr_base) + j * slice_size)); ++idx; } base_size += indices_vec.size(); } OP_REQUIRES_OK(c, indices_flat.Finalize()); OP_REQUIRES_OK(c, data_flat.Finalize()); auto output = merged->template flat<T>().data(); DynamicStitchGPUImpl<T>(c->eigen_gpu_device(), slice_size, first_dim_size, indices_flat.data(), data_flat.data(), output); } } }; #endif // GOOGLE_CUDA || TENSORFLOW_USE_ROCM template <class T, bool Parallel> class DynamicStitchOpImplCPU : public DynamicStitchOpImplBase<T> { public: explicit DynamicStitchOpImplCPU(OpKernelConstruction* c) : DynamicStitchOpImplBase<T>( c, (Parallel ? "ParallelDynamicStitchOp" : "DynamicStitchOp")) {} void Compute(OpKernelContext* c) override { OpInputList indices_inputs; OpInputList data_inputs; int first_dim_size; Tensor* merged = nullptr; this->CheckArgsAndAllocateResult(c, &indices_inputs, &data_inputs, &first_dim_size, nullptr, &merged); if (!c->status().ok()) { // Avoid segmentation faults if merged cannot be allocated and an error is // passed back in the context. return; } // TODO(jeff): Currently we leave uninitialized any portions of // merged that aren't covered by an index in indices. What should we do? if (first_dim_size > 0) { auto merged_flat = merged->flat_outer_dims<T>(); // slice_size must not be stored as int for cases of tensors over 2GB. 
const auto slice_size = merged_flat.dimension(1); const size_t slice_bytes = slice_size * sizeof(T); auto OnInputNumber = [&](int input_num) { const Tensor& indices = indices_inputs[input_num]; auto indices_vec = indices.flat<int32>(); const Tensor& data = data_inputs[input_num]; auto data_flat = data.shaped<T, 2>({indices_vec.dimension(0), slice_size}); if (DataTypeCanUseMemcpy(DataTypeToEnum<T>::v())) { T* merged_base = merged_flat.data(); const T* data_base = data_flat.data(); for (int i = 0; i < indices_vec.size(); i++) { int32 index = internal::SubtleMustCopy(indices_vec(i)); OP_REQUIRES( c, FastBoundsCheck(index, first_dim_size), errors::InvalidArgument("indices[", i, "] is out of range")); memcpy(merged_base + index * slice_size, data_base + i * slice_size, slice_bytes); } } else { Eigen::DSizes<Eigen::DenseIndex, 2> sizes(1, slice_size); for (int i = 0; i < indices_vec.size(); i++) { // Copy slice data[i] to merged[indices[i]] Eigen::DSizes<Eigen::DenseIndex, 2> data_indices(i, 0); int32 index = internal::SubtleMustCopy(indices_vec(i)); OP_REQUIRES( c, FastBoundsCheck(index, first_dim_size), errors::InvalidArgument("indices[", i, "] is out of range")); Eigen::DSizes<Eigen::DenseIndex, 2> merged_indices(index, 0); merged_flat.slice(merged_indices, sizes) = data_flat.slice(data_indices, sizes); } } }; if (Parallel && c->device()->tensorflow_cpu_worker_threads()->num_threads > 1) { auto thread_pool = c->device()->tensorflow_cpu_worker_threads()->workers; size_t total_indices_size = 0; for (int input_num = 0; input_num < indices_inputs.size(); ++input_num) { total_indices_size += indices_inputs[input_num].NumElements(); } const double avg_indices_size = static_cast<double>(total_indices_size) / indices_inputs.size(); auto bytes_processed = slice_bytes * avg_indices_size; auto LoopBody = [&](int first, int last) { for (int input_num = first; input_num < last; ++input_num) { OnInputNumber(input_num); } }; thread_pool->ParallelFor(indices_inputs.size(), 
bytes_processed, LoopBody); } else { for (int input_num = 0; input_num < indices_inputs.size(); input_num++) { OnInputNumber(input_num); } } } } }; // Using inheritance rather than a typedef so that these classes might have more // functionality later. template <typename T> struct DynamicStitchOpCPU : DynamicStitchOpImplCPU<T, false> { using DynamicStitchOpImplCPU<T, false>::DynamicStitchOpImplCPU; }; template <typename T> struct ParallelDynamicStitchOpCPU : DynamicStitchOpImplCPU<T, true> { using DynamicStitchOpImplCPU<T, true>::DynamicStitchOpImplCPU; }; #define REGISTER_DYNAMIC_STITCH(type) \ REGISTER_KERNEL_BUILDER(Name("DynamicStitch") \ .Device(DEVICE_CPU) \ .TypeConstraint<type>("T") \ .HostMemory("indices"), \ DynamicStitchOpCPU<type>) \ REGISTER_KERNEL_BUILDER(Name("ParallelDynamicStitch") \ .Device(DEVICE_CPU) \ .TypeConstraint<type>("T") \ .HostMemory("indices"), \ ParallelDynamicStitchOpCPU<type>) TF_CALL_POD_STRING_TYPES(REGISTER_DYNAMIC_STITCH); TF_CALL_variant(REGISTER_DYNAMIC_STITCH); TF_CALL_QUANTIZED_TYPES(REGISTER_DYNAMIC_STITCH); #undef REGISTER_DYNAMIC_STITCH #if GOOGLE_CUDA || TENSORFLOW_USE_ROCM #define REGISTER_DYNAMIC_STITCH_GPU(type) \ REGISTER_KERNEL_BUILDER(Name("DynamicStitch") \ .Device(DEVICE_GPU) \ .TypeConstraint<type>("T") \ .HostMemory("indices"), \ DynamicStitchOpGPU<type>) \ REGISTER_KERNEL_BUILDER(Name("ParallelDynamicStitch") \ .Device(DEVICE_GPU) \ .TypeConstraint<type>("T") \ .HostMemory("indices") \ .HostMemory("data") \ .HostMemory("merged"), \ ParallelDynamicStitchOpCPU<type>) TF_CALL_int32(REGISTER_DYNAMIC_STITCH_GPU); TF_CALL_int64(REGISTER_DYNAMIC_STITCH_GPU); TF_CALL_GPU_NUMBER_TYPES(REGISTER_DYNAMIC_STITCH_GPU); TF_CALL_COMPLEX_TYPES(REGISTER_DYNAMIC_STITCH_GPU); #undef REGISTER_DYNAMIC_STITCH_GPU #endif // GOOGLE_CUDA || TENSORFLOW_USE_ROCM } // namespace tensorflow
sarvex/tensorflow
tensorflow/core/kernels/dynamic_stitch_op.cc
C++
apache-2.0
15,510
process.stdin.pipe(process.stdout);
iproduct/course-node-express-react
05-nodejs-demo/readable0.js
JavaScript
apache-2.0
37
<!doctype html>
<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title></title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width">
    <!-- Place favicon.ico and apple-touch-icon.png in the root directory -->
    <!-- build:css(.tmp) styles/main.css -->
    <link rel="stylesheet" href="styles/main.css">
    <!-- endbuild -->
  </head>
  <body ng-app="clientApp">
    <!--[if lt IE 7]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
    <![endif]-->
    <!--[if lt IE 9]>
      <script src="bower_components/es5-shim/es5-shim.js"></script>
      <script src="bower_components/json3/lib/json3.min.js"></script>
    <![endif]-->

    <!-- Add your site or application content here -->
    <div class="container" ng-view=""></div>

    <!-- Google Analytics: change UA-XXXXX-X to be your site's ID -->
    <script>
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

      ga('create', 'UA-XXXXX-X');
      ga('send', 'pageview');
    </script>

    <script src="bower_components/jquery/jquery.js"></script>
    <script src="bower_components/angular/angular.js"></script>

    <!-- build:js scripts/plugins.js -->
    <script src="bower_components/bootstrap-sass/js/bootstrap-affix.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-alert.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-dropdown.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-tooltip.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-modal.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-transition.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-button.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-popover.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-typeahead.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-carousel.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-scrollspy.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-collapse.js"></script>
    <script src="bower_components/bootstrap-sass/js/bootstrap-tab.js"></script>
    <!-- endbuild -->

    <!-- build:js scripts/modules.js -->
    <script src="bower_components/angular-resource/angular-resource.js"></script>
    <script src="bower_components/angular-cookies/angular-cookies.js"></script>
    <script src="bower_components/angular-sanitize/angular-sanitize.js"></script>
    <!-- endbuild -->

    <!-- build:js({.tmp,app}) scripts/scripts.js -->
    <script src="scripts/app.js"></script>
    <script src="scripts/controllers/main.js"></script>
    <!-- endbuild -->
  </body>
</html>
sepmein/weiboBot
client/app/index.html
HTML
apache-2.0
3,627
package org.apache.maven.archiva.consumers;

/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

/**
 * ConsumerMonitor - a monitor for consumers.
 *
 * @version $Id$
 */
public interface ConsumerMonitor
{
    /**
     * A consumer error event.
     *
     * @param consumer the consumer that caused the error.
     * @param type the type of error.
     * @param message the message about the error.
     */
    public void consumerError( Consumer consumer, String type, String message );

    /**
     * A consumer warning event.
     *
     * @param consumer the consumer that caused the warning.
     * @param type the type of warning.
     * @param message the message about the warning.
     */
    public void consumerWarning( Consumer consumer, String type, String message );

    /**
     * A consumer informational event.
     *
     * @param consumer the consumer that caused the informational message.
     * @param message the message.
     */
    public void consumerInfo( Consumer consumer, String message );
}
hiredman/archiva
archiva-modules/archiva-base/archiva-consumers/archiva-consumer-api/src/main/java/org/apache/maven/archiva/consumers/ConsumerMonitor.java
Java
apache-2.0
1,789
/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.apache.s2graph.core.rest import java.net.URL import org.apache.s2graph.core.GraphExceptions.BadQueryException import org.apache.s2graph.core._ import org.apache.s2graph.core.mysqls.{Bucket, Experiment, Service} import org.apache.s2graph.core.utils.logger import play.api.libs.json._ import scala.concurrent.{ExecutionContext, Future} object RestHandler { case class HandlerResult(body: Future[JsValue], headers: (String, String)*) } /** * Public API, only return Future.successful or Future.failed * Don't throw exception */ class RestHandler(graph: Graph)(implicit ec: ExecutionContext) { import RestHandler._ val requestParser = new RequestParser(graph.config) /** * Public APIS */ def doPost(uri: String, body: String, impKeyOpt: => Option[String] = None): HandlerResult = { try { val jsQuery = Json.parse(body) uri match { case "/graphs/getEdges" => HandlerResult(getEdgesAsync(jsQuery)(PostProcess.toSimpleVertexArrJson)) case "/graphs/getEdges/grouped" => HandlerResult(getEdgesAsync(jsQuery)(PostProcess.summarizeWithListFormatted)) case "/graphs/getEdgesExcluded" => HandlerResult(getEdgesExcludedAsync(jsQuery)(PostProcess.toSimpleVertexArrJson)) case "/graphs/getEdgesExcluded/grouped" => 
HandlerResult(getEdgesExcludedAsync(jsQuery)(PostProcess.summarizeWithListExcludeFormatted)) case "/graphs/checkEdges" => checkEdges(jsQuery) case "/graphs/getEdgesGrouped" => HandlerResult(getEdgesAsync(jsQuery)(PostProcess.summarizeWithList)) case "/graphs/getEdgesGroupedExcluded" => HandlerResult(getEdgesExcludedAsync(jsQuery)(PostProcess.summarizeWithListExclude)) case "/graphs/getEdgesGroupedExcludedFormatted" => HandlerResult(getEdgesExcludedAsync(jsQuery)(PostProcess.summarizeWithListExcludeFormatted)) case "/graphs/getVertices" => HandlerResult(getVertices(jsQuery)) case uri if uri.startsWith("/graphs/experiment") => val Array(accessToken, experimentName, uuid) = uri.split("/").takeRight(3) experiment(jsQuery, accessToken, experimentName, uuid, impKeyOpt) case _ => throw new RuntimeException("route is not found") } } catch { case e: Exception => HandlerResult(Future.failed(e)) } } // TODO: Refactor to doGet def checkEdges(jsValue: JsValue): HandlerResult = { try { val (quads, isReverted) = requestParser.toCheckEdgeParam(jsValue) HandlerResult(graph.checkEdges(quads).map { case queryRequestWithResultLs => val edgeJsons = for { queryRequestWithResult <- queryRequestWithResultLs (queryRequest, queryResult) = QueryRequestWithResult.unapply(queryRequestWithResult).get edgeWithScore <- queryResult.edgeWithScoreLs (edge, score) = EdgeWithScore.unapply(edgeWithScore).get convertedEdge = if (isReverted) edge.duplicateEdge else edge edgeJson = PostProcess.edgeToJson(convertedEdge, score, queryRequest.query, queryRequest.queryParam) } yield Json.toJson(edgeJson) Json.toJson(edgeJsons) }) } catch { case e: Exception => HandlerResult(Future.failed(e)) } } /** * Private APIS */ private def experiment(contentsBody: JsValue, accessToken: String, experimentName: String, uuid: String, impKeyOpt: => Option[String]): HandlerResult = { try { val bucketOpt = for { service <- Service.findByAccessToken(accessToken) experiment <- Experiment.findBy(service.id.get, experimentName) 
bucket <- experiment.findBucket(uuid, impKeyOpt) } yield bucket val bucket = bucketOpt.getOrElse(throw new RuntimeException("bucket is not found")) if (bucket.isGraphQuery) { val ret = buildRequestInner(contentsBody, bucket, uuid) HandlerResult(ret.body, Experiment.impressionKey -> bucket.impressionId) } else throw new RuntimeException("not supported yet") } catch { case e: Exception => HandlerResult(Future.failed(e)) } } private def buildRequestInner(contentsBody: JsValue, bucket: Bucket, uuid: String): HandlerResult = { if (bucket.isEmpty) HandlerResult(Future.successful(PostProcess.emptyResults)) else { val body = buildRequestBody(Option(contentsBody), bucket, uuid) val url = new URL(bucket.apiPath) val path = url.getPath // dummy log for sampling val experimentLog = s"POST $path took -1 ms 200 -1 $body" logger.debug(experimentLog) doPost(path, body) } } private def eachQuery(post: (Seq[QueryRequestWithResult], Seq[QueryRequestWithResult]) => JsValue)(q: Query): Future[JsValue] = { val filterOutQueryResultsLs = q.filterOutQuery match { case Some(filterOutQuery) => graph.getEdges(filterOutQuery) case None => Future.successful(Seq.empty) } for { queryResultsLs <- graph.getEdges(q) filterOutResultsLs <- filterOutQueryResultsLs } yield { val json = post(queryResultsLs, filterOutResultsLs) json } } def getEdgesAsync(jsonQuery: JsValue) (post: (Seq[QueryRequestWithResult], Seq[QueryRequestWithResult]) => JsValue): Future[JsValue] = { val fetch = eachQuery(post) _ jsonQuery match { case JsArray(arr) => Future.traverse(arr.map(requestParser.toQuery(_)))(fetch).map(JsArray) case obj@JsObject(_) => (obj \ "queries").asOpt[JsValue] match { case None => fetch(requestParser.toQuery(obj)) case _ => val multiQuery = requestParser.toMultiQuery(obj) val filterOutFuture = multiQuery.queryOption.filterOutQuery match { case Some(filterOutQuery) => graph.getEdges(filterOutQuery) case None => Future.successful(Seq.empty) } val futures = multiQuery.queries.zip(multiQuery.weights).map 
{ case (query, weight) =>
        val filterOutQueryResultsLs = query.queryOption.filterOutQuery match {
          case Some(filterOutQuery) => graph.getEdges(filterOutQuery)
          case None => Future.successful(Seq.empty)
        }
        for {
          queryRequestWithResultLs <- graph.getEdges(query)
          filterOutResultsLs <- filterOutQueryResultsLs
        } yield {
          val newQueryRequestWithResult = for {
            queryRequestWithResult <- queryRequestWithResultLs
            queryResult = queryRequestWithResult.queryResult
          } yield {
            val newEdgesWithScores = for {
              edgeWithScore <- queryRequestWithResult.queryResult.edgeWithScoreLs
            } yield {
              edgeWithScore.copy(score = edgeWithScore.score * weight)
            }
            queryRequestWithResult.copy(queryResult = queryResult.copy(edgeWithScoreLs = newEdgesWithScores))
          }
          logger.debug(s"[Size]: ${newQueryRequestWithResult.map(_.queryResult.edgeWithScoreLs.size).sum}")
          (newQueryRequestWithResult, filterOutResultsLs)
        }
      }

      for {
        filterOut <- filterOutFuture
        resultWithExcludeLs <- Future.sequence(futures)
      } yield {
        PostProcess.toSimpleVertexArrJsonMulti(multiQuery.queryOption, resultWithExcludeLs, filterOut)
        //        val initial = (ListBuffer.empty[QueryRequestWithResult], ListBuffer.empty[QueryRequestWithResult])
        //        val (results, excludes) = resultWithExcludeLs.foldLeft(initial) { case ((prevResults, prevExcludes), (results, excludes)) =>
        //          (prevResults ++= results, prevExcludes ++= excludes)
        //        }
        //        PostProcess.toSimpleVertexArrJson(multiQuery.queryOption, results, excludes ++ filterOut)
      }
    case _ => throw BadQueryException("Cannot support")
  }
}

private def getEdgesExcludedAsync(jsonQuery: JsValue)
                                 (post: (Seq[QueryRequestWithResult], Seq[QueryRequestWithResult]) => JsValue): Future[JsValue] = {
  val q = requestParser.toQuery(jsonQuery)
  val filterOutQuery = Query(q.vertices, Vector(q.steps.last))

  val fetchFuture = graph.getEdges(q)
  val excludeFuture = graph.getEdges(filterOutQuery)

  for {
    queryResultLs <- fetchFuture
    exclude <- excludeFuture
  } yield {
    post(queryResultLs, exclude)
  }
}

private def getVertices(jsValue: JsValue) = {
  val jsonQuery = jsValue
  val ts = System.currentTimeMillis()
  val props = "{}"

  val vertices = jsonQuery.as[List[JsValue]].flatMap { js =>
    val serviceName = (js \ "serviceName").as[String]
    val columnName = (js \ "columnName").as[String]
    for (id <- (js \ "ids").asOpt[List[JsValue]].getOrElse(List.empty[JsValue])) yield {
      Management.toVertex(ts, "insert", id.toString, serviceName, columnName, props)
    }
  }

  graph.getVertices(vertices) map { vertices =>
    PostProcess.verticesToJson(vertices)
  }
}

private def buildRequestBody(requestKeyJsonOpt: Option[JsValue], bucket: Bucket, uuid: String): String = {
  var body = bucket.requestBody.replace("#uuid", uuid)

  //    // replace variable
  //    body = TemplateHelper.replaceVariable(System.currentTimeMillis(), body)

  // replace param
  for {
    requestKeyJson <- requestKeyJsonOpt
    jsObj <- requestKeyJson.asOpt[JsObject]
    (key, value) <- jsObj.fieldSet
  } {
    val replacement = value match {
      case JsString(s) => s
      case _ => value.toString
    }
    body = body.replace(key, replacement)
  }

  body
}

def calcSize(js: JsValue): Int = js match {
  case JsObject(obj) => (js \ "size").asOpt[Int].getOrElse(0)
  case JsArray(seq) => seq.map(js => (js \ "size").asOpt[Int].getOrElse(0)).sum
  case _ => 0
}
}
jongwook/incubator-s2graph
s2core/src/main/scala/org/apache/s2graph/core/rest/RestHandler.scala
Scala
apache-2.0
10,792
//
//  YR_PersonalPageViewController.h
//  Artand
//
//  Created by dllo on 16/9/9.
//  Copyright © 2016 kaleidoscope. All rights reserved.
//

#import <UIKit/UIKit.h>

@interface YR_PersonalPageViewController : UIViewController

@property (nonatomic, copy) NSString *uid;

@end
NSKaleidoscope/FirstProject
Artand/Artand/Sections/Home/Controller/PersonalPage/YR_PersonalPageViewController.h
C
apache-2.0
284
/*
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.trino.plugin.hive;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.io.Files;
import io.airlift.json.JsonCodec;
import io.airlift.json.JsonCodecFactory;
import io.airlift.json.ObjectMapperProvider;
import io.airlift.units.DataSize;
import io.airlift.units.DataSize.Unit;
import io.airlift.units.Duration;
import io.trino.Session;
import io.trino.connector.CatalogName;
import io.trino.cost.StatsAndCosts;
import io.trino.execution.QueryInfo;
import io.trino.metadata.InsertTableHandle;
import io.trino.metadata.Metadata;
import io.trino.metadata.QualifiedObjectName;
import io.trino.metadata.TableHandle;
import io.trino.metadata.TableMetadata;
import io.trino.spi.connector.CatalogSchemaTableName;
import io.trino.spi.connector.ColumnHandle;
import io.trino.spi.connector.ColumnMetadata;
import io.trino.spi.connector.Constraint;
import io.trino.spi.security.Identity;
import io.trino.spi.security.SelectedRole;
import io.trino.spi.type.DateType;
import io.trino.spi.type.TimestampType;
import io.trino.spi.type.Type;
import io.trino.spi.type.VarcharType;
import io.trino.sql.planner.Plan;
import io.trino.sql.planner.plan.ExchangeNode;
import io.trino.sql.planner.planprinter.IoPlanPrinter.ColumnConstraint;
import io.trino.sql.planner.planprinter.IoPlanPrinter.EstimatedStatsAndCost;
import io.trino.sql.planner.planprinter.IoPlanPrinter.FormattedDomain;
import io.trino.sql.planner.planprinter.IoPlanPrinter.FormattedMarker;
import io.trino.sql.planner.planprinter.IoPlanPrinter.FormattedRange;
import io.trino.sql.planner.planprinter.IoPlanPrinter.IoPlan;
import io.trino.sql.planner.planprinter.IoPlanPrinter.IoPlan.TableColumnInfo;
import io.trino.testing.BaseConnectorTest;
import io.trino.testing.DistributedQueryRunner;
import io.trino.testing.MaterializedResult;
import io.trino.testing.MaterializedRow;
import io.trino.testing.QueryRunner;
import io.trino.testing.ResultWithQueryId;
import io.trino.testing.TestingConnectorBehavior;
import io.trino.testing.sql.TestTable;
import io.trino.testing.sql.TrinoSqlExecutor;
import io.trino.type.TypeDeserializer;
import org.apache.hadoop.fs.Path;
import org.intellij.lang.annotations.Language;
import org.testng.SkipException;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.io.File;
import java.io.IOException;
import java.math.BigDecimal;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.StringJoiner;
import java.util.function.BiConsumer;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.LongStream;
import java.util.stream.Stream;

import static com.google.common.base.Preconditions.checkState;
import static com.google.common.base.Verify.verify;
import static com.google.common.collect.Iterables.getOnlyElement;
import static com.google.common.io.Files.asCharSink;
import static com.google.common.io.Files.createTempDir;
import static com.google.common.io.MoreFiles.deleteRecursively;
import static com.google.common.io.RecursiveDeleteOption.ALLOW_INSECURE;
import static io.trino.SystemSessionProperties.COLOCATED_JOIN;
import static io.trino.SystemSessionProperties.CONCURRENT_LIFESPANS_PER_NODE;
import static io.trino.SystemSessionProperties.DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION;
import static io.trino.SystemSessionProperties.ENABLE_DYNAMIC_FILTERING;
import static io.trino.SystemSessionProperties.GROUPED_EXECUTION;
import static io.trino.SystemSessionProperties.JOIN_DISTRIBUTION_TYPE;
import static io.trino.SystemSessionProperties.USE_TABLE_SCAN_NODE_PARTITIONING;
import static io.trino.plugin.hive.HiveColumnHandle.BUCKET_COLUMN_NAME;
import static io.trino.plugin.hive.HiveColumnHandle.FILE_MODIFIED_TIME_COLUMN_NAME;
import static io.trino.plugin.hive.HiveColumnHandle.FILE_SIZE_COLUMN_NAME;
import static io.trino.plugin.hive.HiveColumnHandle.PARTITION_COLUMN_NAME;
import static io.trino.plugin.hive.HiveColumnHandle.PATH_COLUMN_NAME;
import static io.trino.plugin.hive.HiveQueryRunner.HIVE_CATALOG;
import static io.trino.plugin.hive.HiveQueryRunner.TPCH_SCHEMA;
import static io.trino.plugin.hive.HiveQueryRunner.createBucketedSession;
import static io.trino.plugin.hive.HiveTableProperties.BUCKETED_BY_PROPERTY;
import static io.trino.plugin.hive.HiveTableProperties.BUCKET_COUNT_PROPERTY;
import static io.trino.plugin.hive.HiveTableProperties.PARTITIONED_BY_PROPERTY;
import static io.trino.plugin.hive.HiveTableProperties.STORAGE_FORMAT_PROPERTY;
import static io.trino.plugin.hive.HiveTestUtils.TYPE_MANAGER;
import static io.trino.plugin.hive.HiveType.toHiveType;
import static io.trino.plugin.hive.util.HiveUtil.columnExtraInfo;
import static io.trino.spi.security.Identity.ofUser;
import static io.trino.spi.security.SelectedRole.Type.ROLE;
import static io.trino.spi.type.BigintType.BIGINT;
import static io.trino.spi.type.BooleanType.BOOLEAN;
import static io.trino.spi.type.CharType.createCharType;
import static io.trino.spi.type.DecimalType.createDecimalType;
import static io.trino.spi.type.DoubleType.DOUBLE;
import static io.trino.spi.type.IntegerType.INTEGER;
import static io.trino.spi.type.SmallintType.SMALLINT;
import static io.trino.spi.type.TinyintType.TINYINT;
import static io.trino.spi.type.VarcharType.VARCHAR;
import static io.trino.spi.type.VarcharType.createUnboundedVarcharType;
import static io.trino.spi.type.VarcharType.createVarcharType;
import static io.trino.sql.analyzer.FeaturesConfig.JoinDistributionType.BROADCAST;
import static io.trino.sql.planner.optimizations.PlanNodeSearcher.searchFrom;
import static io.trino.sql.planner.planprinter.IoPlanPrinter.FormattedMarker.Bound.ABOVE;
import static io.trino.sql.planner.planprinter.IoPlanPrinter.FormattedMarker.Bound.EXACTLY;
import static io.trino.sql.planner.planprinter.PlanPrinter.textLogicalPlan;
import static io.trino.sql.tree.ExplainType.Type.DISTRIBUTED;
import static io.trino.testing.DataProviders.toDataProvider;
import static io.trino.testing.MaterializedResult.resultBuilder;
import static io.trino.testing.QueryAssertions.assertEqualsIgnoreOrder;
import static io.trino.testing.TestingAccessControlManager.TestingPrivilegeType.DELETE_TABLE;
import static io.trino.testing.TestingAccessControlManager.TestingPrivilegeType.INSERT_TABLE;
import static io.trino.testing.TestingAccessControlManager.TestingPrivilegeType.SELECT_COLUMN;
import static io.trino.testing.TestingAccessControlManager.TestingPrivilegeType.SHOW_COLUMNS;
import static io.trino.testing.TestingAccessControlManager.privilege;
import static io.trino.testing.TestingSession.testSessionBuilder;
import static io.trino.testing.assertions.Assert.assertEquals;
import static io.trino.testing.sql.TestTable.randomTableSuffix;
import static io.trino.transaction.TransactionBuilder.transaction;
import static java.lang.String.format;
import static java.nio.charset.StandardCharsets.UTF_8;
import static java.util.Locale.ENGLISH;
import static java.util.Objects.requireNonNull;
import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static java.util.concurrent.TimeUnit.SECONDS;
import static java.util.stream.Collectors.joining;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
import static org.assertj.core.data.Offset.offset;
import static org.testng.Assert.assertFalse;
import static org.testng.Assert.assertNotEquals;
import static org.testng.Assert.assertNotNull;
import static org.testng.Assert.assertNull;
import static org.testng.Assert.assertTrue;
import static org.testng.Assert.fail;
import static org.testng.FileAssert.assertFile;

public class TestHiveConnectorTest
        extends BaseConnectorTest
{
    private static final DateTimeFormatter TIMESTAMP_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSSSS");

    private final String catalog;
    private final Session bucketedSession;

    public TestHiveConnectorTest()
    {
        this.catalog = HIVE_CATALOG;
        this.bucketedSession = createBucketedSession(Optional.of(new SelectedRole(ROLE, Optional.of("admin"))));
    }

    @Override
    protected QueryRunner createQueryRunner()
            throws Exception
    {
        DistributedQueryRunner queryRunner = HiveQueryRunner.builder()
                .setHiveProperties(ImmutableMap.of(
                        "hive.allow-register-partition-procedure", "true",
                        // Reduce writer sort buffer size to ensure SortingFileWriter gets used
                        "hive.writer-sort-buffer-size", "1MB"))
                .setInitialTables(REQUIRED_TPCH_TABLES)
                .build();

        // extra catalog with NANOSECOND timestamp precision
        queryRunner.createCatalog(
                "hive_timestamp_nanos",
                "hive",
                ImmutableMap.of("hive.timestamp-precision", "NANOSECONDS"));
        return queryRunner;
    }

    @Override
    protected boolean hasBehavior(TestingConnectorBehavior connectorBehavior)
    {
        switch (connectorBehavior) {
            case SUPPORTS_TOPN_PUSHDOWN:
                return false;
            case SUPPORTS_CREATE_VIEW:
                return true;
            case SUPPORTS_DELETE:
                return true;
            case SUPPORTS_MULTI_STATEMENT_WRITES:
                return true;
            default:
                return super.hasBehavior(connectorBehavior);
        }
    }

    @Override
    public void testDelete()
    {
        assertThatThrownBy(super::testDelete)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Override
    public void testDeleteWithComplexPredicate()
    {
        assertThatThrownBy(super::testDeleteWithComplexPredicate)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Override
    public void testDeleteWithSemiJoin()
    {
        assertThatThrownBy(super::testDeleteWithSemiJoin)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Override
    public void testDeleteWithSubquery()
    {
        assertThatThrownBy(super::testDeleteWithSubquery)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Override
    public void testDeleteWithVarcharPredicate()
    {
        assertThatThrownBy(super::testDeleteWithVarcharPredicate)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Override
    public void testRowLevelDelete()
    {
        assertThatThrownBy(super::testRowLevelDelete)
                .hasStackTraceContaining("Deletes must match whole partitions for non-transactional tables");
    }

    @Test(dataProvider = "queryPartitionFilterRequiredSchemasDataProvider")
    public void testRequiredPartitionFilter(String queryPartitionFilterRequiredSchemas)
    {
        Session session = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .setCatalogSessionProperty("hive", "query_partition_filter_required", "true")
                .setCatalogSessionProperty("hive", "query_partition_filter_required_schemas", queryPartitionFilterRequiredSchemas)
                .build();

        assertUpdate(session, "CREATE TABLE test_required_partition_filter(id integer, a varchar, b varchar, ds varchar) WITH (partitioned_by = ARRAY['ds'])");
        assertUpdate(session, "INSERT INTO test_required_partition_filter(id, a, ds) VALUES (1, 'a', '1')", 1);

        String filterRequiredMessage = "Filter required on tpch\\.test_required_partition_filter for at least one partition column: ds";

        // no partition filter
        assertQueryFails(session, "SELECT id FROM test_required_partition_filter WHERE a = '1'", filterRequiredMessage);
        assertQueryFails(session, "EXPLAIN SELECT id FROM test_required_partition_filter WHERE a = '1'", filterRequiredMessage);
        assertQueryFails(session, "EXPLAIN ANALYZE SELECT id FROM test_required_partition_filter WHERE a = '1'", filterRequiredMessage);

        // partition filter that gets removed by planner
        assertQueryFails(session, "SELECT id FROM test_required_partition_filter WHERE ds IS NOT NULL OR true", filterRequiredMessage);

        // equality partition filter
        assertQuery(session, "SELECT id FROM test_required_partition_filter WHERE ds = '1'", "SELECT 1");
        computeActual(session, "EXPLAIN SELECT id FROM test_required_partition_filter WHERE ds = '1'");

        // IS NOT NULL partition filter
        assertQuery(session, "SELECT id FROM test_required_partition_filter WHERE ds IS NOT NULL", "SELECT 1");

        // predicate involving a CAST (likely unwrapped)
        assertQuery(session, "SELECT id FROM test_required_partition_filter WHERE CAST(ds AS integer) = 1", "SELECT 1");

        // partition predicate in outer query only
        assertQuery(session, "SELECT id FROM (SELECT * FROM test_required_partition_filter WHERE CAST(id AS smallint) = 1) WHERE CAST(ds AS integer) = 1", "select 1");
        computeActual(session, "EXPLAIN SELECT id FROM (SELECT * FROM test_required_partition_filter WHERE CAST(id AS smallint) = 1) WHERE CAST(ds AS integer) = 1");

        // ANALYZE
        assertQueryFails(session, "ANALYZE test_required_partition_filter", filterRequiredMessage);
        assertQueryFails(session, "EXPLAIN ANALYZE test_required_partition_filter", filterRequiredMessage);

        assertUpdate(session, "ANALYZE test_required_partition_filter WITH (partitions=ARRAY[ARRAY['1']])", 1);
        computeActual(session, "EXPLAIN ANALYZE test_required_partition_filter WITH (partitions=ARRAY[ARRAY['1']])");

        assertUpdate(session, "DROP TABLE test_required_partition_filter");
    }

    @Test(dataProvider = "queryPartitionFilterRequiredSchemasDataProvider")
    public void testRequiredPartitionFilterInferred(String queryPartitionFilterRequiredSchemas)
    {
        Session session = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .setCatalogSessionProperty("hive", "query_partition_filter_required", "true")
                .setCatalogSessionProperty("hive", "query_partition_filter_required_schemas", queryPartitionFilterRequiredSchemas)
                .build();

        assertUpdate(session, "CREATE TABLE test_partition_filter_inferred_left(id integer, a varchar, b varchar, ds varchar) WITH (partitioned_by = ARRAY['ds'])");
        assertUpdate(session, "CREATE TABLE test_partition_filter_inferred_right(id integer, a varchar, b varchar, ds varchar) WITH (partitioned_by = ARRAY['ds'])");

        assertUpdate(session, "INSERT INTO test_partition_filter_inferred_left(id, a, ds) VALUES (1, 'a', '1')", 1);
        assertUpdate(session, "INSERT INTO test_partition_filter_inferred_right(id, a, ds) VALUES (1, 'a', '1')", 1);

        // Join on partition column allowing filter inference for the other table
        assertQuery(
                session,
                "SELECT l.id, r.id FROM test_partition_filter_inferred_left l JOIN test_partition_filter_inferred_right r ON l.ds = r.ds WHERE l.ds = '1'",
                "SELECT 1, 1");

        // Join on non-partition column
        assertQueryFails(
                session,
                "SELECT l.ds, r.ds FROM test_partition_filter_inferred_left l JOIN test_partition_filter_inferred_right r ON l.id = r.id WHERE l.ds = '1'",
                "Filter required on tpch\\.test_partition_filter_inferred_right for at least one partition column: ds");

        assertUpdate(session, "DROP TABLE test_partition_filter_inferred_left");
        assertUpdate(session, "DROP TABLE test_partition_filter_inferred_right");
    }

    @DataProvider
    public Object[][] queryPartitionFilterRequiredSchemasDataProvider()
    {
        return new Object[][] {
                {"[]"},
                {"[\"tpch\"]"}
        };
    }

    @Test
    public void testRequiredPartitionFilterAppliedOnDifferentSchema()
    {
        String schemaName = "schema_" + randomTableSuffix();
        Session session = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .setCatalogSessionProperty("hive", "query_partition_filter_required", "true")
                .setCatalogSessionProperty("hive", "query_partition_filter_required_schemas", format("[\"%s\"]", schemaName))
                .build();

        getQueryRunner().execute("CREATE SCHEMA " + schemaName);

        try (TestTable table = new TestTable(
                new TrinoSqlExecutor(getQueryRunner(), session),
                "test_required_partition_filter_",
                "(id integer, a varchar, b varchar) WITH (partitioned_by = ARRAY['b'])",
                ImmutableList.of("1, '1', 'b'"))) {
            // no partition filter
            assertQuery(session, format("SELECT id FROM %s WHERE a = '1'", table.getName()), "SELECT 1");
            computeActual(session, format("EXPLAIN SELECT id FROM %s WHERE a = '1'", table.getName()));
            computeActual(session, format("EXPLAIN ANALYZE SELECT id FROM %s WHERE a = '1'", table.getName()));

            // partition filter that gets removed by planner
            assertQuery(session, format("SELECT id FROM %s WHERE b IS NOT NULL OR true", table.getName()), "SELECT 1");

            // Join on non-partition column
            assertUpdate(session, format("CREATE TABLE %s.%s_right (id integer, a varchar, b varchar, ds varchar) WITH (partitioned_by = ARRAY['ds'])", schemaName, table.getName()));

            assertUpdate(session, format("INSERT INTO %s.%s_right (id, a, ds) VALUES (1, 'a', '1')", schemaName, table.getName()), 1);

            assertQueryFails(
                    session,
                    format("SELECT count(*) FROM %2$s l JOIN %s.%2$s_right r ON l.id = r.id WHERE r.a = 'a'", schemaName, table.getName()),
                    format("Filter required on %s\\.%s_right for at least one partition column: ds", schemaName, table.getName()));

            assertQuery(session, format("SELECT count(*) FROM %2$s l JOIN %s.%2$s_right r ON l.id = r.id WHERE r.ds = '1'", schemaName, table.getName()), "SELECT 1");

            assertUpdate(session, format("DROP TABLE %s.%s_right", schemaName, table.getName()));
        }
        getQueryRunner().execute("DROP SCHEMA " + schemaName);
    }

    @Test
    public void testInvalidValueForQueryPartitionFilterRequiredSchemas()
    {
        assertQueryFails(
                "SET SESSION hive.query_partition_filter_required_schemas = ARRAY['tpch', null]",
                "line 1:1: Invalid null or empty value in query_partition_filter_required_schemas property");

        assertQueryFails(
                "SET SESSION hive.query_partition_filter_required_schemas = ARRAY['tpch', '']",
                "line 1:1: Invalid null or empty value in query_partition_filter_required_schemas property");
    }

    @Test
    public void testNaNPartition()
    {
        // Only NaN partition
        assertUpdate("DROP TABLE IF EXISTS test_nan_partition");
        assertUpdate("CREATE TABLE test_nan_partition(a varchar, d double) WITH (partitioned_by = ARRAY['d'])");
        assertUpdate("INSERT INTO test_nan_partition VALUES ('b', nan())", 1);

        assertQuery(
                "SELECT a, d, regexp_replace(\"$path\", '.*(/[^/]*/[^/]*/)[^/]*', '...$1...') FROM test_nan_partition",
                "VALUES ('b', SQRT(-1), '.../test_nan_partition/d=NaN/...')"); // SQRT(-1) is H2's recommended way to obtain NaN
        assertQueryReturnsEmptyResult("SELECT a FROM test_nan_partition JOIN (VALUES 33e0) u(x) ON d = x");
        assertQueryReturnsEmptyResult("SELECT a FROM test_nan_partition JOIN (VALUES 33e0) u(x) ON d = x OR rand() = 42");
        assertQueryReturnsEmptyResult("SELECT * FROM test_nan_partition t1 JOIN test_nan_partition t2 ON t1.d = t2.d");
        assertQuery(
                "SHOW STATS FOR test_nan_partition",
                "VALUES " +
                        "('a', 1, 1, 0, null, null, null), " +
                        "('d', null, 1, 0, null, null, null), " +
                        "(null, null, null, null, 1, null, null)");

        assertUpdate("DROP TABLE IF EXISTS test_nan_partition");

        // NaN partition and other partitions
        assertUpdate("CREATE TABLE test_nan_partition(a varchar, d double) WITH (partitioned_by = ARRAY['d'])");
        assertUpdate("INSERT INTO test_nan_partition VALUES ('a', 42e0), ('b', nan())", 2);

        assertQuery(
                "SELECT a, d, regexp_replace(\"$path\", '.*(/[^/]*/[^/]*/)[^/]*', '...$1...') FROM test_nan_partition",
                "VALUES " +
                        "  ('a', 42, '.../test_nan_partition/d=42.0/...'), " +
                        "  ('b', SQRT(-1), '.../test_nan_partition/d=NaN/...')"); // SQRT(-1) is H2's recommended way to obtain NaN
        assertQueryReturnsEmptyResult("SELECT a FROM test_nan_partition JOIN (VALUES 33e0) u(x) ON d = x");
        assertQueryReturnsEmptyResult("SELECT a FROM test_nan_partition JOIN (VALUES 33e0) u(x) ON d = x OR rand() = 42");
        assertQuery("SELECT * FROM test_nan_partition t1 JOIN test_nan_partition t2 ON t1.d = t2.d", "VALUES ('a', 42, 'a', 42)");
        assertQuery(
                "SHOW STATS FOR test_nan_partition",
                "VALUES " +
                        "('a', 2, 1, 0, null, null, null), " +
                        "('d', null, 2, 0, null, null, null), " +
                        "(null, null, null, null, 2, null, null)");

        assertUpdate("DROP TABLE test_nan_partition");
    }

    @Test
    public void testIsNotNullWithNestedData()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .setCatalogSessionProperty(catalog, "parquet_use_column_names", "true")
                .build();

        assertUpdate(admin, "create table nest_test(id int, a row(x varchar, y integer, z varchar), b varchar) WITH (format='PARQUET')");
        assertUpdate(admin, "insert into nest_test values(0, null, '1')", 1);
        assertUpdate(admin, "insert into nest_test values(1, ('a', null, 'b'), '1')", 1);
        assertUpdate(admin, "insert into nest_test values(2, ('b', 1, 'd'), '1')", 1);
        assertQuery(admin, "select a.y from nest_test", "values (null), (null), (1)");
        assertQuery(admin, "select id from nest_test where a.y IS NOT NULL", "values (2)");
        assertUpdate(admin, "DROP TABLE nest_test");
    }

    @Test
    public void testSchemaOperations()
    {
        Session session = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        assertUpdate(session, "CREATE SCHEMA new_schema");

        assertUpdate(session, "CREATE TABLE new_schema.test (x bigint)");
        assertQueryFails(session, "DROP SCHEMA new_schema", ".*Cannot drop non-empty schema 'new_schema'");

        assertUpdate(session, "DROP TABLE new_schema.test");
        assertUpdate(session, "DROP SCHEMA new_schema");
    }

    @Test
    public void testSchemaAuthorizationForUser()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_schema_authorization_user");

        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_schema_authorization_user")
                .setIdentity(Identity.forUser("user")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        Session anotherUser = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_schema_authorization_user")
                .setIdentity(Identity.forUser("anotheruser")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        // ordinary users cannot drop a schema or create a table in a schema they do not own
        assertQueryFails(user, "DROP SCHEMA test_schema_authorization_user", "Access Denied: Cannot drop schema test_schema_authorization_user");
        assertQueryFails(user, "CREATE TABLE test_schema_authorization_user.test (x bigint)", "Access Denied: Cannot create table test_schema_authorization_user.test");

        // change owner to user
        assertUpdate(admin, "ALTER SCHEMA test_schema_authorization_user SET AUTHORIZATION user");

        // another user still cannot create tables
        assertQueryFails(anotherUser, "CREATE TABLE test_schema_authorization_user.test (x bigint)", "Access Denied: Cannot create table test_schema_authorization_user.test");

        assertUpdate(user, "CREATE TABLE test_schema_authorization_user.test (x bigint)");

        // another user should not be able to drop the table
        assertQueryFails(anotherUser, "DROP TABLE test_schema_authorization_user.test", "Access Denied: Cannot drop table test_schema_authorization_user.test");
        // or access the table in any way
        assertQueryFails(anotherUser, "SELECT 1 FROM test_schema_authorization_user.test", "Access Denied: Cannot select from table test_schema_authorization_user.test");

        assertUpdate(user, "DROP TABLE test_schema_authorization_user.test");
        assertUpdate(user, "DROP SCHEMA test_schema_authorization_user");
    }

    @Test
    public void testSchemaAuthorizationForRole()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_schema_authorization_role");

        // make sure role-grants only work on existing roles
        assertQueryFails(admin, "ALTER SCHEMA test_schema_authorization_role SET AUTHORIZATION ROLE nonexisting_role", ".*?Role 'nonexisting_role' does not exist in catalog 'hive'");

        assertUpdate(admin, "CREATE ROLE authorized_users IN hive");
        assertUpdate(admin, "GRANT authorized_users TO user IN hive");

        assertUpdate(admin, "ALTER SCHEMA test_schema_authorization_role SET AUTHORIZATION ROLE authorized_users");

        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_schema_authorization_role")
                .setIdentity(Identity.forUser("user")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        Session anotherUser = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_schema_authorization_role")
                .setIdentity(Identity.forUser("anotheruser")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        assertUpdate(user, "CREATE TABLE test_schema_authorization_role.test (x bigint)");

        // another user should not be able to drop the table
        assertQueryFails(anotherUser, "DROP TABLE test_schema_authorization_role.test", "Access Denied: Cannot drop table test_schema_authorization_role.test");
        // or access the table in any way
        assertQueryFails(anotherUser, "SELECT 1 FROM test_schema_authorization_role.test", "Access Denied: Cannot select from table test_schema_authorization_role.test");

        assertUpdate(user, "DROP TABLE test_schema_authorization_role.test");
        assertUpdate(user, "DROP SCHEMA test_schema_authorization_role");

        assertUpdate(admin, "DROP ROLE authorized_users IN hive");
    }

    @Test
    public void testCreateSchemaWithAuthorizationForUser()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_createschema_authorization_user")
                .setIdentity(Identity.forUser("user")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        Session anotherUser = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_createschema_authorization_user")
                .setIdentity(Identity.forUser("anotheruser")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_createschema_authorization_user AUTHORIZATION user");
        assertUpdate(user, "CREATE TABLE test_createschema_authorization_user.test (x bigint)");

        // another user should not be able to drop the table
        assertQueryFails(anotherUser, "DROP TABLE test_createschema_authorization_user.test", "Access Denied: Cannot drop table test_createschema_authorization_user.test");
        // or access the table in any way
        assertQueryFails(anotherUser, "SELECT 1 FROM test_createschema_authorization_user.test", "Access Denied: Cannot select from table test_createschema_authorization_user.test");

        assertUpdate(user, "DROP TABLE test_createschema_authorization_user.test");
        assertUpdate(user, "DROP SCHEMA test_createschema_authorization_user");
    }

    @Test
    public void testCreateSchemaWithAuthorizationForRole()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_createschema_authorization_role")
                .setIdentity(Identity.forUser("user")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        Session userWithoutRole = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_createschema_authorization_role")
                .setIdentity(Identity.forUser("user")
                        .withConnectorRoles(Collections.emptyMap())
                        .build())
                .build();

        Session anotherUser = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_createschema_authorization_role")
                .setIdentity(Identity.forUser("anotheruser")
                        .withPrincipal(getSession().getIdentity().getPrincipal())
                        .build())
                .build();

        assertUpdate(admin, "CREATE ROLE authorized_users IN hive");
        assertUpdate(admin, "GRANT authorized_users TO user IN hive");

        assertQueryFails(admin, "CREATE SCHEMA test_createschema_authorization_role AUTHORIZATION ROLE nonexisting_role", ".*?Role 'nonexisting_role' does not exist in catalog 'hive'");
        assertUpdate(admin, "CREATE SCHEMA test_createschema_authorization_role AUTHORIZATION ROLE authorized_users");
        assertUpdate(user, "CREATE TABLE test_createschema_authorization_role.test (x bigint)");

        // "user" without the role enabled cannot create new tables
        assertQueryFails(userWithoutRole, "CREATE TABLE test_schema_authorization_role.test1 (x bigint)", "Access Denied: Cannot create table test_schema_authorization_role.test1");

        // another user should not be able to drop the table
        assertQueryFails(anotherUser, "DROP TABLE test_createschema_authorization_role.test", "Access Denied: Cannot drop table test_createschema_authorization_role.test");
        // or access the table in any way
        assertQueryFails(anotherUser, "SELECT 1 FROM test_createschema_authorization_role.test", "Access Denied: Cannot select from table test_createschema_authorization_role.test");

        assertUpdate(user, "DROP TABLE test_createschema_authorization_role.test");
        assertUpdate(user, "DROP SCHEMA test_createschema_authorization_role");

        assertUpdate(admin, "DROP ROLE authorized_users IN hive");
    }

    @Test
    public void testSchemaAuthorization()
    {
        Session admin = Session.builder(getSession())
                .setIdentity(Identity.forUser("hive")
                        .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin")))
                        .build())
                .build();

        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema("test_schema_authorization")
                .setIdentity(Identity.forUser("user").withPrincipal(getSession().getIdentity().getPrincipal()).build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_schema_authorization");
        assertUpdate(admin, "ALTER SCHEMA test_schema_authorization SET AUTHORIZATION user");
        assertUpdate(user, "ALTER SCHEMA test_schema_authorization SET AUTHORIZATION ROLE admin");
        assertQueryFails(user, "ALTER SCHEMA test_schema_authorization SET AUTHORIZATION ROLE admin", "Access Denied: Cannot set authorization for schema test_schema_authorization to ROLE admin");

        // switch owner back to user, and then change the owner to ROLE admin from a different catalog to verify roles are relative to the catalog of the schema
        assertUpdate(admin, "ALTER SCHEMA test_schema_authorization SET AUTHORIZATION user");
        Session userSessionInDifferentCatalog = testSessionBuilder()
                .setIdentity(Identity.forUser("user").withPrincipal(getSession().getIdentity().getPrincipal()).build())
                .build();
        assertUpdate(userSessionInDifferentCatalog, "ALTER SCHEMA hive.test_schema_authorization SET AUTHORIZATION ROLE admin");
        assertUpdate(admin, "ALTER SCHEMA test_schema_authorization SET AUTHORIZATION user");

        assertUpdate(admin, "DROP SCHEMA test_schema_authorization");
    }

    @Test
    public void testTableAuthorization()
    {
        Session admin = Session.builder(getSession())
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build())
                .build();

        Session alice = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("alice").build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_table_authorization");
        assertUpdate(admin, "CREATE TABLE test_table_authorization.foo (col int)");

        assertAccessDenied(
                alice,
                "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION alice",
                "Cannot set authorization for table test_table_authorization.foo to USER alice");
        assertUpdate(admin, "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION alice");
        assertUpdate(alice, "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION admin");

        assertUpdate(admin, "DROP TABLE test_table_authorization.foo");
        assertUpdate(admin, "DROP SCHEMA test_table_authorization");
    }

    @Test
    public void testTableAuthorizationForRole()
    {
        Session admin = Session.builder(getSession())
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build())
                .build();

        Session alice = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("alice").build())
                .build();

        assertUpdate(admin, "CREATE SCHEMA test_table_authorization");
        assertUpdate(admin, "CREATE TABLE test_table_authorization.foo (col int)");

        // TODO Change assertions once https://github.com/trinodb/trino/issues/5706 is done
        assertAccessDenied(
                alice,
                "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION ROLE admin",
                "Cannot set authorization for table test_table_authorization.foo to ROLE admin");
        assertUpdate(admin, "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION alice");
        assertQueryFails(
                alice,
                "ALTER TABLE test_table_authorization.foo SET AUTHORIZATION ROLE admin",
                "Setting table owner type as a role is not supported");

        assertUpdate(admin, "DROP TABLE test_table_authorization.foo");
        assertUpdate(admin, "DROP SCHEMA test_table_authorization");
    }

    @Test
    public void testViewAuthorization()
    {
        Session admin = Session.builder(getSession())
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build())
                .build();

        Session alice = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("alice").build())
                .build();

        String schema = "test_view_authorization" + TestTable.randomTableSuffix();

        assertUpdate(admin, "CREATE SCHEMA " + schema);
        assertUpdate(admin, "CREATE VIEW " + schema + ".test_view AS SELECT current_user AS user");

        assertAccessDenied(
                alice,
                "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION alice",
                "Cannot set authorization for view " + schema + ".test_view to USER alice");
        assertUpdate(admin, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION alice");
        assertUpdate(alice, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION admin");

        assertUpdate(admin, "DROP VIEW " + schema + ".test_view");
        assertUpdate(admin, "DROP SCHEMA " + schema);
    }

    @Test
    public void testViewAuthorizationSecurityDefiner()
    {
        Session admin = Session.builder(getSession())
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build())
                .build();

        Session alice = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("alice").build())
                .build();

        String schema = "test_view_authorization" + TestTable.randomTableSuffix();

        assertUpdate(admin, "CREATE SCHEMA " + schema);
        assertUpdate(admin, "CREATE TABLE " + schema + ".test_table (col int)");
        assertUpdate(admin, "INSERT INTO " + schema + ".test_table VALUES (1)", 1);
        assertUpdate(admin, "CREATE VIEW " + schema + ".test_view SECURITY DEFINER AS SELECT * from " + schema + ".test_table");
        assertUpdate(admin, "GRANT SELECT ON " + schema + ".test_view TO alice");

        assertQuery(alice, "SELECT * FROM " + schema + ".test_view", "VALUES (1)");
        assertUpdate(admin, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION alice");
assertQueryFails(alice, "SELECT * FROM " + schema + ".test_view", "Access Denied: Cannot select from table " + schema + ".test_table"); assertUpdate(alice, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION admin"); assertUpdate(admin, "DROP VIEW " + schema + ".test_view"); assertUpdate(admin, "DROP TABLE " + schema + ".test_table"); assertUpdate(admin, "DROP SCHEMA " + schema); } @Test public void testViewAuthorizationSecurityInvoker() { Session admin = Session.builder(getSession()) .setCatalog(getSession().getCatalog()) .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build()) .build(); Session alice = testSessionBuilder() .setCatalog(getSession().getCatalog()) .setIdentity(Identity.forUser("alice").build()) .build(); String schema = "test_view_authorization" + TestTable.randomTableSuffix(); assertUpdate(admin, "CREATE SCHEMA " + schema); assertUpdate(admin, "CREATE TABLE " + schema + ".test_table (col int)"); assertUpdate(admin, "INSERT INTO " + schema + ".test_table VALUES (1)", 1); assertUpdate(admin, "CREATE VIEW " + schema + ".test_view SECURITY INVOKER AS SELECT * from " + schema + ".test_table"); assertUpdate(admin, "GRANT SELECT ON " + schema + ".test_view TO alice"); assertQueryFails(alice, "SELECT * FROM " + schema + ".test_view", "Access Denied: Cannot select from table " + schema + ".test_table"); assertUpdate(admin, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION alice"); assertQueryFails(alice, "SELECT * FROM " + schema + ".test_view", "Access Denied: Cannot select from table " + schema + ".test_table"); assertUpdate(alice, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION admin"); assertUpdate(admin, "DROP VIEW " + schema + ".test_view"); assertUpdate(admin, "DROP TABLE " + schema + ".test_table"); assertUpdate(admin, "DROP SCHEMA " + schema); } @Test public void testViewAuthorizationForRole() { Session admin = Session.builder(getSession()) 
.setCatalog(getSession().getCatalog()) .setIdentity(Identity.forUser("hive").withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))).build()) .build(); Session alice = testSessionBuilder() .setCatalog(getSession().getCatalog()) .setIdentity(Identity.forUser("alice").build()) .build(); String schema = "test_view_authorization" + TestTable.randomTableSuffix(); assertUpdate(admin, "CREATE SCHEMA " + schema); assertUpdate(admin, "CREATE TABLE " + schema + ".test_table (col int)"); assertUpdate(admin, "CREATE VIEW " + schema + ".test_view AS SELECT * FROM " + schema + ".test_table"); // TODO Change assertions once https://github.com/trinodb/trino/issues/5706 is done assertAccessDenied( alice, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION ROLE admin", "Cannot set authorization for view " + schema + ".test_view to ROLE admin"); assertUpdate(admin, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION alice"); assertQueryFails( alice, "ALTER VIEW " + schema + ".test_view SET AUTHORIZATION ROLE admin", "Setting table owner type as a role is not supported"); assertUpdate(admin, "DROP VIEW " + schema + ".test_view"); assertUpdate(admin, "DROP TABLE " + schema + ".test_table"); assertUpdate(admin, "DROP SCHEMA " + schema); } @Test @Override public void testShowCreateSchema() { Session admin = Session.builder(getSession()) .setIdentity(Identity.forUser("hive") .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))) .build()) .build(); Session user = testSessionBuilder() .setCatalog(getSession().getCatalog()) .setSchema("test_show_create_schema") .setIdentity(Identity.forUser("user").withPrincipal(getSession().getIdentity().getPrincipal()).build()) .build(); assertUpdate(admin, "CREATE ROLE test_show_create_schema_role IN hive"); assertUpdate(admin, "GRANT test_show_create_schema_role TO user IN hive"); assertUpdate(admin, "CREATE SCHEMA test_show_create_schema"); String createSchemaSql = format("" + "CREATE SCHEMA 
%s.test_show_create_schema\n" + "AUTHORIZATION USER hive\n" + "WITH \\(\n" + " location = '.*test_show_create_schema'\n" + "\\)", getSession().getCatalog().get()); String actualResult = getOnlyElement(computeActual(admin, "SHOW CREATE SCHEMA test_show_create_schema").getOnlyColumnAsSet()).toString(); assertThat(actualResult).matches(createSchemaSql); assertQueryFails(user, "SHOW CREATE SCHEMA test_show_create_schema", "Access Denied: Cannot show create schema for test_show_create_schema"); assertUpdate(admin, "ALTER SCHEMA test_show_create_schema SET AUTHORIZATION ROLE test_show_create_schema_role"); createSchemaSql = format("" + "CREATE SCHEMA %s.test_show_create_schema\n" + "AUTHORIZATION ROLE test_show_create_schema_role\n" + "WITH \\(\n" + " location = '.*test_show_create_schema'\n" + "\\)", getSession().getCatalog().get()); actualResult = getOnlyElement(computeActual(admin, "SHOW CREATE SCHEMA test_show_create_schema").getOnlyColumnAsSet()).toString(); assertThat(actualResult).matches(createSchemaSql); assertUpdate(user, "DROP SCHEMA test_show_create_schema"); assertUpdate(admin, "DROP ROLE test_show_create_schema_role IN hive"); } @Test public void testIoExplain() { // Test IO explain with small number of discrete components. 
computeActual("CREATE TABLE test_io_explain WITH (partitioned_by = ARRAY['orderkey', 'processing']) AS SELECT custkey, orderkey, orderstatus = 'P' processing FROM orders WHERE orderkey < 3");
        EstimatedStatsAndCost estimate = new EstimatedStatsAndCost(2.0, 40.0, 40.0, 0.0, 0.0);
        MaterializedResult result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) INSERT INTO test_io_explain SELECT custkey, orderkey, processing FROM test_io_explain WHERE custkey <= 10");
        assertEquals(
                getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())),
                new IoPlan(
                        ImmutableSet.of(
                                new TableColumnInfo(
                                        new CatalogSchemaTableName(catalog, "tpch", "test_io_explain"),
                                        ImmutableSet.of(
                                                new ColumnConstraint(
                                                        "orderkey",
                                                        BIGINT,
                                                        new FormattedDomain(
                                                                false,
                                                                ImmutableSet.of(
                                                                        new FormattedRange(
                                                                                new FormattedMarker(Optional.of("1"), EXACTLY),
                                                                                new FormattedMarker(Optional.of("1"), EXACTLY)),
                                                                        new FormattedRange(
                                                                                new FormattedMarker(Optional.of("2"), EXACTLY),
                                                                                new FormattedMarker(Optional.of("2"), EXACTLY))))),
                                                new ColumnConstraint(
                                                        "processing",
                                                        BOOLEAN,
                                                        new FormattedDomain(
                                                                false,
                                                                ImmutableSet.of(
                                                                        new FormattedRange(
                                                                                new FormattedMarker(Optional.of("false"), EXACTLY),
                                                                                new FormattedMarker(Optional.of("false"), EXACTLY))))),
                                                new ColumnConstraint(
                                                        "custkey",
                                                        BIGINT,
                                                        new FormattedDomain(
                                                                false,
                                                                ImmutableSet.of(
                                                                        new FormattedRange(
                                                                                new FormattedMarker(Optional.empty(), ABOVE),
                                                                                new FormattedMarker(Optional.of("10"), EXACTLY)))))),
                                        estimate)),
                        Optional.of(new CatalogSchemaTableName(catalog, "tpch", "test_io_explain")),
                        estimate));
        assertUpdate("DROP TABLE test_io_explain");

        // Test IO explain with large number of discrete components where Domain::simplify comes into play.
        
computeActual("CREATE TABLE test_io_explain WITH (partitioned_by = ARRAY['orderkey']) AS SELECT custkey, orderkey FROM orders WHERE orderkey < 200"); estimate = new EstimatedStatsAndCost(55.0, 990.0, 990.0, 0.0, 0.0); result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) INSERT INTO test_io_explain SELECT custkey, orderkey + 10 FROM test_io_explain WHERE custkey <= 10"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_io_explain"), ImmutableSet.of( new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("1"), EXACTLY), new FormattedMarker(Optional.of("199"), EXACTLY))))), new ColumnConstraint( "custkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.empty(), ABOVE), new FormattedMarker(Optional.of("10"), EXACTLY)))))), estimate)), Optional.of(new CatalogSchemaTableName(catalog, "tpch", "test_io_explain")), estimate)); EstimatedStatsAndCost finalEstimate = new EstimatedStatsAndCost(Double.NaN, Double.NaN, Double.NaN, Double.NaN, Double.NaN); estimate = new EstimatedStatsAndCost(1.0, 18.0, 18, 0.0, 0.0); result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) INSERT INTO test_io_explain SELECT custkey, orderkey FROM test_io_explain WHERE orderkey = 100"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_io_explain"), ImmutableSet.of( new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("100"), EXACTLY), new FormattedMarker(Optional.of("100"), EXACTLY))))), new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new 
FormattedRange( new FormattedMarker(Optional.of("100"), EXACTLY), new FormattedMarker(Optional.of("100"), EXACTLY)))))), estimate)), Optional.of(new CatalogSchemaTableName(catalog, "tpch", "test_io_explain")), finalEstimate)); assertUpdate("DROP TABLE test_io_explain"); } @Test public void testIoExplainColumnFilters() { // Test IO explain with small number of discrete components. computeActual("CREATE TABLE test_io_explain_column_filters WITH (partitioned_by = ARRAY['orderkey']) AS SELECT custkey, orderstatus, orderkey FROM orders WHERE orderkey < 3"); EstimatedStatsAndCost estimate = new EstimatedStatsAndCost(2.0, 48.0, 48.0, 0.0, 0.0); EstimatedStatsAndCost finalEstimate = new EstimatedStatsAndCost(0.0, 0.0, 96.0, 0.0, 0.0); MaterializedResult result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) SELECT custkey, orderkey, orderstatus FROM test_io_explain_column_filters WHERE custkey <= 10 and orderstatus='P'"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_io_explain_column_filters"), ImmutableSet.of( new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("1"), EXACTLY), new FormattedMarker(Optional.of("1"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("2"), EXACTLY), new FormattedMarker(Optional.of("2"), EXACTLY))))), new ColumnConstraint( "custkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.empty(), ABOVE), new FormattedMarker(Optional.of("10"), EXACTLY))))), new ColumnConstraint( "orderstatus", VarcharType.createVarcharType(1), new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("P"), EXACTLY), new FormattedMarker(Optional.of("P"), EXACTLY)))))), estimate)), Optional.empty(), finalEstimate)); result = 
computeActual("EXPLAIN (TYPE IO, FORMAT JSON) SELECT custkey, orderkey, orderstatus FROM test_io_explain_column_filters WHERE custkey <= 10 and (orderstatus='P' or orderstatus='S')"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_io_explain_column_filters"), ImmutableSet.of( new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("1"), EXACTLY), new FormattedMarker(Optional.of("1"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("2"), EXACTLY), new FormattedMarker(Optional.of("2"), EXACTLY))))), new ColumnConstraint( "orderstatus", VarcharType.createVarcharType(1), new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("P"), EXACTLY), new FormattedMarker(Optional.of("P"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("S"), EXACTLY), new FormattedMarker(Optional.of("S"), EXACTLY))))), new ColumnConstraint( "custkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.empty(), ABOVE), new FormattedMarker(Optional.of("10"), EXACTLY)))))), estimate)), Optional.empty(), finalEstimate)); result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) SELECT custkey, orderkey, orderstatus FROM test_io_explain_column_filters WHERE custkey <= 10 and cast(orderstatus as integer) = 5"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_io_explain_column_filters"), ImmutableSet.of( new ColumnConstraint( "orderkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("1"), EXACTLY), new FormattedMarker(Optional.of("1"), EXACTLY)), new 
FormattedRange( new FormattedMarker(Optional.of("2"), EXACTLY), new FormattedMarker(Optional.of("2"), EXACTLY))))), new ColumnConstraint( "custkey", BIGINT, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.empty(), ABOVE), new FormattedMarker(Optional.of("10"), EXACTLY)))))), estimate)), Optional.empty(), finalEstimate)); assertUpdate("DROP TABLE test_io_explain_column_filters"); } @Test public void testIoExplainNoFilter() { Session admin = Session.builder(getSession()) .setIdentity(Identity.forUser("hive") .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))) .build()) .build(); assertUpdate( admin, "create table io_explain_test_no_filter(\n" + "id integer,\n" + "a varchar,\n" + "b varchar,\n" + "ds varchar)" + "WITH (format='PARQUET', partitioned_by = ARRAY['ds'])"); assertUpdate(admin, "insert into io_explain_test_no_filter(id,a,ds) values(1, 'a','a')", 1); EstimatedStatsAndCost estimate = new EstimatedStatsAndCost(1.0, 22.0, 22.0, 0.0, 0.0); EstimatedStatsAndCost finalEstimate = new EstimatedStatsAndCost(1.0, 22.0, 22.0, 0.0, 22.0); MaterializedResult result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) SELECT * FROM io_explain_test_no_filter"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "io_explain_test_no_filter"), ImmutableSet.of( new ColumnConstraint( "ds", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("a"), EXACTLY), new FormattedMarker(Optional.of("a"), EXACTLY)))))), estimate)), Optional.empty(), finalEstimate)); assertUpdate("DROP TABLE io_explain_test_no_filter"); } @Test public void testIoExplainFilterOnAgg() { Session admin = Session.builder(getSession()) .setIdentity(Identity.forUser("hive") .withConnectorRole("hive", new SelectedRole(ROLE, Optional.of("admin"))) .build()) .build(); 
assertUpdate( admin, "create table io_explain_test_filter_on_agg(\n" + "id integer,\n" + "a varchar,\n" + "b varchar,\n" + "ds varchar)" + "WITH (format='PARQUET', partitioned_by = ARRAY['ds'])"); assertUpdate(admin, "insert into io_explain_test_filter_on_agg(id,a,ds) values(1, 'a','a')", 1); EstimatedStatsAndCost estimate = new EstimatedStatsAndCost(1.0, 5.0, 5.0, 0.0, 0.0); EstimatedStatsAndCost finalEstimate = new EstimatedStatsAndCost(Double.NaN, Double.NaN, Double.NaN, Double.NaN, Double.NaN); MaterializedResult result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) SELECT * FROM (SELECT COUNT(*) cnt FROM io_explain_test_filter_on_agg WHERE b = 'b') WHERE cnt > 0"); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of( new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "io_explain_test_filter_on_agg"), ImmutableSet.of( new ColumnConstraint( "ds", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("a"), EXACTLY), new FormattedMarker(Optional.of("a"), EXACTLY))))), new ColumnConstraint( "b", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("b"), EXACTLY), new FormattedMarker(Optional.of("b"), EXACTLY)))))), estimate)), Optional.empty(), finalEstimate)); assertUpdate("DROP TABLE io_explain_test_filter_on_agg"); } @Test public void testIoExplainWithPrimitiveTypes() { // Use LinkedHashMap to maintain insertion order for ease of locating // map entry if assertion in the loop below fails. 
Map<Object, TypeAndEstimate> data = new LinkedHashMap<>(); data.put("foo", new TypeAndEstimate(createUnboundedVarcharType(), new EstimatedStatsAndCost(1.0, 16.0, 16.0, 0.0, 0.0))); data.put(Byte.toString((byte) (Byte.MAX_VALUE / 2)), new TypeAndEstimate(TINYINT, new EstimatedStatsAndCost(1.0, 10.0, 10.0, 0.0, 0.0))); data.put(Short.toString((short) (Short.MAX_VALUE / 2)), new TypeAndEstimate(SMALLINT, new EstimatedStatsAndCost(1.0, 11.0, 11.0, 0.0, 0.0))); data.put(Integer.toString(Integer.MAX_VALUE / 2), new TypeAndEstimate(INTEGER, new EstimatedStatsAndCost(1.0, 13.0, 13.0, 0.0, 0.0))); data.put(Long.toString(Long.MAX_VALUE / 2), new TypeAndEstimate(BIGINT, new EstimatedStatsAndCost(1.0, 17.0, 17.0, 0.0, 0.0))); data.put(Boolean.TRUE.toString(), new TypeAndEstimate(BOOLEAN, new EstimatedStatsAndCost(1.0, 10.0, 10.0, 0.0, 0.0))); data.put("bar", new TypeAndEstimate(createCharType(3), new EstimatedStatsAndCost(1.0, 16.0, 16.0, 0.0, 0.0))); data.put("1.2345678901234578E14", new TypeAndEstimate(DOUBLE, new EstimatedStatsAndCost(1.0, 17.0, 17.0, 0.0, 0.0))); data.put("123456789012345678901234.567", new TypeAndEstimate(createDecimalType(30, 3), new EstimatedStatsAndCost(1.0, 25.0, 25.0, 0.0, 0.0))); data.put("2019-01-01", new TypeAndEstimate(DateType.DATE, new EstimatedStatsAndCost(1.0, 13.0, 13.0, 0.0, 0.0))); data.put("2019-01-01 23:22:21.123", new TypeAndEstimate(TimestampType.TIMESTAMP_MILLIS, new EstimatedStatsAndCost(1.0, 17.0, 17.0, 0.0, 0.0))); int index = 0; for (Map.Entry<Object, TypeAndEstimate> entry : data.entrySet()) { index++; Type type = entry.getValue().type; EstimatedStatsAndCost estimate = entry.getValue().estimate; @Language("SQL") String query = format( "CREATE TABLE test_types_table WITH (partitioned_by = ARRAY['my_col']) AS " + "SELECT 'foo' my_non_partition_col, CAST('%s' AS %s) my_col", entry.getKey(), type.getDisplayName()); assertUpdate(query, 1); assertEquals( getIoPlanCodec().fromJson((String) getOnlyElement(computeActual("EXPLAIN (TYPE IO, 
FORMAT JSON) SELECT * FROM test_types_table").getOnlyColumnAsSet())), new IoPlan( ImmutableSet.of(new TableColumnInfo( new CatalogSchemaTableName(catalog, "tpch", "test_types_table"), ImmutableSet.of( new ColumnConstraint( "my_col", type, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of(entry.getKey().toString()), EXACTLY), new FormattedMarker(Optional.of(entry.getKey().toString()), EXACTLY)))))), estimate)), Optional.empty(), estimate), format("%d) Type %s ", index, type)); assertUpdate("DROP TABLE test_types_table"); } } @Test public void testReadNoColumns() { testWithAllStorageFormats(this::testReadNoColumns); } private void testReadNoColumns(Session session, HiveStorageFormat storageFormat) { assertUpdate(session, format("CREATE TABLE test_read_no_columns WITH (format = '%s') AS SELECT 0 x", storageFormat), 1); assertQuery(session, "SELECT count(*) FROM test_read_no_columns", "SELECT 1"); assertUpdate(session, "DROP TABLE test_read_no_columns"); } @Test public void createTableWithEveryType() { @Language("SQL") String query = "" + "CREATE TABLE test_types_table AS " + "SELECT" + " 'foo' _varchar" + ", cast('bar' as varbinary) _varbinary" + ", cast(1 as bigint) _bigint" + ", 2 _integer" + ", CAST('3.14' AS DOUBLE) _double" + ", true _boolean" + ", DATE '1980-05-07' _date" + ", TIMESTAMP '1980-05-07 11:22:33.456' _timestamp" + ", CAST('3.14' AS DECIMAL(3,2)) _decimal_short" + ", CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) _decimal_long" + ", CAST('bar' AS CHAR(10)) _char"; assertUpdate(query, 1); MaterializedResult results = getQueryRunner().execute(getSession(), "SELECT * FROM test_types_table").toTestTypes(); assertEquals(results.getRowCount(), 1); MaterializedRow row = results.getMaterializedRows().get(0); assertEquals(row.getField(0), "foo"); assertEquals(row.getField(1), "bar".getBytes(UTF_8)); assertEquals(row.getField(2), 1L); assertEquals(row.getField(3), 2); assertEquals(row.getField(4), 3.14); 
assertEquals(row.getField(5), true); assertEquals(row.getField(6), LocalDate.of(1980, 5, 7)); assertEquals(row.getField(7), LocalDateTime.of(1980, 5, 7, 11, 22, 33, 456_000_000)); assertEquals(row.getField(8), new BigDecimal("3.14")); assertEquals(row.getField(9), new BigDecimal("12345678901234567890.0123456789")); assertEquals(row.getField(10), "bar "); assertUpdate("DROP TABLE test_types_table"); assertFalse(getQueryRunner().tableExists(getSession(), "test_types_table")); } @Test public void testCreatePartitionedTable() { testWithAllStorageFormats(this::testCreatePartitionedTable); } private void testCreatePartitionedTable(Session session, HiveStorageFormat storageFormat) { @Language("SQL") String createTable = "" + "CREATE TABLE test_partitioned_table (" + " _string VARCHAR" + ", _varchar VARCHAR(65535)" + ", _char CHAR(10)" + ", _bigint BIGINT" + ", _integer INTEGER" + ", _smallint SMALLINT" + ", _tinyint TINYINT" + ", _real REAL" + ", _double DOUBLE" + ", _boolean BOOLEAN" + ", _decimal_short DECIMAL(3,2)" + ", _decimal_long DECIMAL(30,10)" + ", _partition_string VARCHAR" + ", _partition_varchar VARCHAR(65535)" + ", _partition_char CHAR(10)" + ", _partition_tinyint TINYINT" + ", _partition_smallint SMALLINT" + ", _partition_integer INTEGER" + ", _partition_bigint BIGINT" + ", _partition_boolean BOOLEAN" + ", _partition_decimal_short DECIMAL(3,2)" + ", _partition_decimal_long DECIMAL(30,10)" + ", _partition_date DATE" + ", _partition_timestamp TIMESTAMP" + ") " + "WITH (" + "format = '" + storageFormat + "', " + "partitioned_by = ARRAY[ '_partition_string', '_partition_varchar', '_partition_char', '_partition_tinyint', '_partition_smallint', '_partition_integer', '_partition_bigint', '_partition_boolean', '_partition_decimal_short', '_partition_decimal_long', '_partition_date', '_partition_timestamp']" + ") "; if (storageFormat == HiveStorageFormat.AVRO) { createTable = createTable.replace(" _smallint SMALLINT,", " _smallint INTEGER,"); createTable = 
createTable.replace(" _tinyint TINYINT,", " _tinyint INTEGER,"); } assertUpdate(session, createTable); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partitioned_table"); assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat); List<String> partitionedBy = ImmutableList.of( "_partition_string", "_partition_varchar", "_partition_char", "_partition_tinyint", "_partition_smallint", "_partition_integer", "_partition_bigint", "_partition_boolean", "_partition_decimal_short", "_partition_decimal_long", "_partition_date", "_partition_timestamp"); assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), partitionedBy); for (ColumnMetadata columnMetadata : tableMetadata.getColumns()) { boolean partitionKey = partitionedBy.contains(columnMetadata.getName()); assertEquals(columnMetadata.getExtraInfo(), columnExtraInfo(partitionKey)); } assertColumnType(tableMetadata, "_string", createUnboundedVarcharType()); assertColumnType(tableMetadata, "_varchar", createVarcharType(65535)); assertColumnType(tableMetadata, "_char", createCharType(10)); assertColumnType(tableMetadata, "_partition_string", createUnboundedVarcharType()); assertColumnType(tableMetadata, "_partition_varchar", createVarcharType(65535)); MaterializedResult result = computeActual("SELECT * FROM test_partitioned_table"); assertEquals(result.getRowCount(), 0); @Language("SQL") String select = "" + "SELECT" + " 'foo' _string" + ", 'bar' _varchar" + ", CAST('boo' AS CHAR(10)) _char" + ", CAST(1 AS BIGINT) _bigint" + ", 2 _integer" + ", CAST (3 AS SMALLINT) _smallint" + ", CAST (4 AS TINYINT) _tinyint" + ", CAST('123.45' AS REAL) _real" + ", CAST('3.14' AS DOUBLE) _double" + ", true _boolean" + ", CAST('3.14' AS DECIMAL(3,2)) _decimal_short" + ", CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) _decimal_long" + ", 'foo' _partition_string" + ", 'bar' _partition_varchar" + ", CAST('boo' AS CHAR(10)) 
_partition_char" + ", CAST(1 AS TINYINT) _partition_tinyint" + ", CAST(1 AS SMALLINT) _partition_smallint" + ", 1 _partition_integer" + ", CAST (1 AS BIGINT) _partition_bigint" + ", true _partition_boolean" + ", CAST('3.14' AS DECIMAL(3,2)) _partition_decimal_short" + ", CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) _partition_decimal_long" + ", CAST('2017-05-01' AS DATE) _partition_date" + ", CAST('2017-05-01 10:12:34' AS TIMESTAMP) _partition_timestamp"; if (storageFormat == HiveStorageFormat.AVRO) { select = select.replace(" CAST (3 AS SMALLINT) _smallint,", " 3 _smallint,"); select = select.replace(" CAST (4 AS TINYINT) _tinyint,", " 4 _tinyint,"); } assertUpdate(session, "INSERT INTO test_partitioned_table " + select, 1); assertQuery(session, "SELECT * FROM test_partitioned_table", select); assertQuery(session, "SELECT * FROM test_partitioned_table WHERE" + " 'foo' = _partition_string" + " AND 'bar' = _partition_varchar" + " AND CAST('boo' AS CHAR(10)) = _partition_char" + " AND CAST(1 AS TINYINT) = _partition_tinyint" + " AND CAST(1 AS SMALLINT) = _partition_smallint" + " AND 1 = _partition_integer" + " AND CAST(1 AS BIGINT) = _partition_bigint" + " AND true = _partition_boolean" + " AND CAST('3.14' AS DECIMAL(3,2)) = _partition_decimal_short" + " AND CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) = _partition_decimal_long" + " AND CAST('2017-05-01' AS DATE) = _partition_date" + " AND CAST('2017-05-01 10:12:34' AS TIMESTAMP) = _partition_timestamp", select); assertUpdate(session, "DROP TABLE test_partitioned_table"); assertFalse(getQueryRunner().tableExists(session, "test_partitioned_table")); } @Test public void createTableLike() { createTableLike("", false); createTableLike("EXCLUDING PROPERTIES", false); createTableLike("INCLUDING PROPERTIES", true); } private void createTableLike(String likeSuffix, boolean hasPartition) { // Create a non-partitioned table @Language("SQL") String createTable = "" + "CREATE TABLE test_table_original 
(" + " tinyint_col tinyint " + ", smallint_col smallint" + ")"; assertUpdate(createTable); // Verify the table is correctly created TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_table_original"); assertColumnType(tableMetadata, "tinyint_col", TINYINT); assertColumnType(tableMetadata, "smallint_col", SMALLINT); // Create a partitioned table @Language("SQL") String createPartitionedTable = "" + "CREATE TABLE test_partitioned_table_original (" + " string_col VARCHAR" + ", decimal_long_col DECIMAL(30,10)" + ", partition_bigint BIGINT" + ", partition_decimal_long DECIMAL(30,10)" + ") " + "WITH (" + "partitioned_by = ARRAY['partition_bigint', 'partition_decimal_long']" + ")"; assertUpdate(createPartitionedTable); // Verify the table is correctly created tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partitioned_table_original"); // Verify the partition keys are correctly created List<String> partitionedBy = ImmutableList.of("partition_bigint", "partition_decimal_long"); assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), partitionedBy); // Verify the column types assertColumnType(tableMetadata, "string_col", createUnboundedVarcharType()); assertColumnType(tableMetadata, "partition_bigint", BIGINT); assertColumnType(tableMetadata, "partition_decimal_long", createDecimalType(30, 10)); // Create a table using only one LIKE @Language("SQL") String createTableSingleLike = "" + "CREATE TABLE test_partitioned_table_single_like (" + "LIKE test_partitioned_table_original " + likeSuffix + ")"; assertUpdate(createTableSingleLike); tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partitioned_table_single_like"); // Verify the partitioned keys are correctly created if copying partition columns verifyPartition(hasPartition, tableMetadata, partitionedBy); // Verify the column types assertColumnType(tableMetadata, "string_col", createUnboundedVarcharType()); assertColumnType(tableMetadata, 
                "partition_bigint", BIGINT);
        assertColumnType(tableMetadata, "partition_decimal_long", createDecimalType(30, 10));

        @Language("SQL") String createTableLikeExtra = "" +
                "CREATE TABLE test_partitioned_table_like_extra (" +
                " bigint_col BIGINT" +
                ", double_col DOUBLE" +
                ", LIKE test_partitioned_table_single_like " + likeSuffix +
                ")";
        assertUpdate(createTableLikeExtra);

        tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partitioned_table_like_extra");

        // Verify the partitioned keys are correctly created if copying partition columns
        verifyPartition(hasPartition, tableMetadata, partitionedBy);

        // Verify the column types
        assertColumnType(tableMetadata, "bigint_col", BIGINT);
        assertColumnType(tableMetadata, "double_col", DOUBLE);
        assertColumnType(tableMetadata, "string_col", createUnboundedVarcharType());
        assertColumnType(tableMetadata, "partition_bigint", BIGINT);
        assertColumnType(tableMetadata, "partition_decimal_long", createDecimalType(30, 10));

        @Language("SQL") String createTableDoubleLike = "" +
                "CREATE TABLE test_partitioned_table_double_like (" +
                " LIKE test_table_original " +
                ", LIKE test_partitioned_table_like_extra " + likeSuffix +
                ")";
        assertUpdate(createTableDoubleLike);

        tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partitioned_table_double_like");

        // Verify the partitioned keys are correctly created if copying partition columns
        verifyPartition(hasPartition, tableMetadata, partitionedBy);

        // Verify the column types
        assertColumnType(tableMetadata, "tinyint_col", TINYINT);
        assertColumnType(tableMetadata, "smallint_col", SMALLINT);
        assertColumnType(tableMetadata, "string_col", createUnboundedVarcharType());
        assertColumnType(tableMetadata, "partition_bigint", BIGINT);
        assertColumnType(tableMetadata, "partition_decimal_long", createDecimalType(30, 10));

        assertUpdate("DROP TABLE test_table_original");
        assertUpdate("DROP TABLE test_partitioned_table_original");
        assertUpdate("DROP TABLE test_partitioned_table_single_like");
        assertUpdate("DROP TABLE test_partitioned_table_like_extra");
        assertUpdate("DROP TABLE test_partitioned_table_double_like");
    }

    @Test
    public void testCreateTableAs()
    {
        testWithAllStorageFormats(this::testCreateTableAs);
    }

    private void testCreateTableAs(Session session, HiveStorageFormat storageFormat)
    {
        @Language("SQL") String select = "SELECT" +
                " 'foo' _varchar" +
                ", CAST('bar' AS CHAR(10)) _char" +
                ", CAST (1 AS BIGINT) _bigint" +
                ", 2 _integer" +
                ", CAST (3 AS SMALLINT) _smallint" +
                ", CAST (4 AS TINYINT) _tinyint" +
                ", CAST ('123.45' as REAL) _real" +
                ", CAST('3.14' AS DOUBLE) _double" +
                ", true _boolean" +
                ", CAST('3.14' AS DECIMAL(3,2)) _decimal_short" +
                ", CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) _decimal_long";

        if (storageFormat == HiveStorageFormat.AVRO) {
            select = select.replace(" CAST (3 AS SMALLINT) _smallint,", " 3 _smallint,");
            select = select.replace(" CAST (4 AS TINYINT) _tinyint,", " 4 _tinyint,");
        }

        String createTableAs = format("CREATE TABLE test_format_table WITH (format = '%s') AS %s", storageFormat, select);

        assertUpdate(session, createTableAs, 1);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_format_table");
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);

        assertColumnType(tableMetadata, "_varchar", createVarcharType(3));
        assertColumnType(tableMetadata, "_char", createCharType(10));

        // assure reader supports basic column reordering and pruning
        assertQuery(session, "SELECT _integer, _varchar, _integer FROM test_format_table", "SELECT 2, 'foo', 2");

        assertQuery(session, "SELECT * FROM test_format_table", select);

        assertUpdate(session, "DROP TABLE test_format_table");

        assertFalse(getQueryRunner().tableExists(session, "test_format_table"));
    }

    @Test
    public void testCreatePartitionedTableAs()
    {
        testWithAllStorageFormats(this::testCreatePartitionedTableAs);
    }

    private void testCreatePartitionedTableAs(Session session, HiveStorageFormat storageFormat)
    {
        @Language("SQL")
        String createTable = "" +
                "CREATE TABLE test_create_partitioned_table_as " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'SHIP_PRIORITY', 'ORDER_STATUS' ]" +
                ") " +
                "AS " +
                "SELECT orderkey AS order_key, shippriority AS ship_priority, orderstatus AS order_status " +
                "FROM tpch.tiny.orders";

        assertUpdate(session, createTable, "SELECT count(*) FROM orders");

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_create_partitioned_table_as");
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("ship_priority", "order_status"));

        List<?> partitions = getPartitions("test_create_partitioned_table_as");
        assertEquals(partitions.size(), 3);

        assertQuery(session, "SELECT * FROM test_create_partitioned_table_as", "SELECT orderkey, shippriority, orderstatus FROM orders");

        assertUpdate(session, "DROP TABLE test_create_partitioned_table_as");

        assertFalse(getQueryRunner().tableExists(session, "test_create_partitioned_table_as"));
    }

    @Test
    public void testCreateTableWithUnsupportedType()
    {
        assertQueryFails("CREATE TABLE test_create_table_with_unsupported_type(x time)", "\\QUnsupported Hive type: time(3)\\E");
        assertQueryFails("CREATE TABLE test_create_table_with_unsupported_type AS SELECT TIME '00:00:00' x", "\\QUnsupported Hive type: time(0)\\E");
    }

    @Test
    public void testTargetMaxFileSize()
    {
        @Language("SQL") String createTableSql = "CREATE TABLE test_max_file_size AS SELECT * FROM tpch.sf1.lineitem LIMIT 1000000";
        @Language("SQL") String selectFileInfo = "SELECT distinct \"$path\", \"$file_size\" FROM test_max_file_size";

        // verify the default behavior is one file per node
        Session session = Session.builder(getSession())
                .setSystemProperty("task_writer_count", "1")
                .build();
        assertUpdate(session, createTableSql, 1000000);
        assertThat(computeActual(selectFileInfo).getRowCount()).isEqualTo(3);
        assertUpdate("DROP TABLE test_max_file_size");

        // Write table with small limit and verify we get multiple files per node near the expected size
        // Writer writes chunks of rows that are about 40k
        // We use TEXTFILE in this test because it has a very consistent and predictable size
        DataSize maxSize = DataSize.of(40, Unit.KILOBYTE);
        session = Session.builder(getSession())
                .setSystemProperty("task_writer_count", "1")
                .setCatalogSessionProperty("hive", "target_max_file_size", maxSize.toString())
                .setCatalogSessionProperty("hive", "hive_storage_format", "TEXTFILE")
                .build();
        assertUpdate(session, createTableSql, 1000000);
        MaterializedResult result = computeActual(selectFileInfo);
        assertThat(result.getRowCount()).isGreaterThan(3);
        for (MaterializedRow row : result) {
            // allow up to a larger delta due to the very small max size and the relatively large writer chunk size
            assertThat((Long) row.getField(1)).isLessThan(maxSize.toBytes() * 3);
        }
        assertUpdate("DROP TABLE test_max_file_size");
    }

    @Test
    public void testPropertiesTable()
    {
        @Language("SQL") String createTable = "" +
                "CREATE TABLE test_show_properties" +
                " WITH (" +
                "format = 'orc', " +
                "partitioned_by = ARRAY['ship_priority', 'order_status']," +
                "orc_bloom_filter_columns = ARRAY['ship_priority', 'order_status']," +
                "orc_bloom_filter_fpp = 0.5" +
                ") " +
                "AS " +
                "SELECT orderkey AS order_key, shippriority AS ship_priority, orderstatus AS order_status " +
                "FROM tpch.tiny.orders";
        assertUpdate(createTable, "SELECT count(*) FROM orders");
        String queryId = (String) computeScalar("SELECT query_id FROM system.runtime.queries WHERE query LIKE 'CREATE TABLE test_show_properties%'");
        String nodeVersion = (String) computeScalar("SELECT node_version FROM system.runtime.nodes WHERE coordinator");
        assertQuery("SELECT * FROM \"test_show_properties$properties\"",
                "SELECT 'workaround for potential lack of HIVE-12730', 'ship_priority,order_status', '0.5', '" +
                        queryId + "', '" + nodeVersion + "', 'false'");
        assertUpdate("DROP TABLE test_show_properties");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Partition keys must be the last columns in the table and in the same order as the table properties.*")
    public void testCreatePartitionedTableInvalidColumnOrdering()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_invalid_column_ordering\n" +
                "(grape bigint, apple varchar, orange bigint, pear varchar)\n" +
                "WITH (partitioned_by = ARRAY['apple'])");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Partition keys must be the last columns in the table and in the same order as the table properties.*")
    public void testCreatePartitionedTableAsInvalidColumnOrdering()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_as_invalid_column_ordering " +
                "WITH (partitioned_by = ARRAY['SHIP_PRIORITY', 'ORDER_STATUS']) " +
                "AS " +
                "SELECT shippriority AS ship_priority, orderkey AS order_key, orderstatus AS order_status " +
                "FROM tpch.tiny.orders");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Table contains only partition columns")
    public void testCreateTableOnlyPartitionColumns()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_only_partition_columns\n" +
                "(grape bigint, apple varchar, orange bigint, pear varchar)\n" +
                "WITH (partitioned_by = ARRAY['grape', 'apple', 'orange', 'pear'])");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Partition columns .* not present in schema")
    public void testCreateTableNonExistentPartitionColumns()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_nonexistent_partition_columns\n" +
                "(grape bigint, apple varchar, orange bigint, pear varchar)\n" +
                "WITH (partitioned_by = ARRAY['dragonfruit'])");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Unsupported type .* for partition: .*")
    public void
    testCreateTableUnsupportedPartitionType()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_unsupported_partition_type " +
                "(foo bigint, bar ARRAY(varchar)) " +
                "WITH (partitioned_by = ARRAY['bar'])");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Unsupported type .* for partition: a")
    public void testCreateTableUnsupportedPartitionTypeAs()
    {
        assertUpdate("" +
                "CREATE TABLE test_create_table_unsupported_partition_type_as " +
                "WITH (partitioned_by = ARRAY['a']) " +
                "AS " +
                "SELECT 123 x, ARRAY ['foo'] a");
    }

    @Test(expectedExceptions = RuntimeException.class, expectedExceptionsMessageRegExp = "Unsupported Hive type: varchar\\(65536\\)\\. Supported VARCHAR types: VARCHAR\\(<=65535\\), VARCHAR\\.")
    public void testCreateTableNonSupportedVarcharColumn()
    {
        assertUpdate("CREATE TABLE test_create_table_non_supported_varchar_column (apple varchar(65536))");
    }

    @Test
    public void testEmptyBucketedTable()
    {
        // go through all storage formats to make sure the empty buckets are correctly created
        testWithAllStorageFormats(this::testEmptyBucketedTable);
    }

    private void testEmptyBucketedTable(Session session, HiveStorageFormat storageFormat)
    {
        testEmptyBucketedTable(session, storageFormat, true);
        testEmptyBucketedTable(session, storageFormat, false);
    }

    private void testEmptyBucketedTable(Session session, HiveStorageFormat storageFormat, boolean createEmpty)
    {
        String tableName = "test_empty_bucketed_table";
        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(bucket_key VARCHAR, col_1 VARCHAR, col2 VARCHAR) " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "bucketed_by = ARRAY[ 'bucket_key' ], " +
                "bucket_count = 11 " +
                ") ";
        assertUpdate(createTable);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertNull(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("bucket_key"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11);

        assertEquals(computeActual("SELECT * from " + tableName).getRowCount(), 0);

        // make sure that we will get one file per bucket regardless of writer count configured
        Session parallelWriter = Session.builder(getParallelWriteSession())
                .setCatalogSessionProperty(catalog, "create_empty_bucket_files", String.valueOf(createEmpty))
                .build();
        assertUpdate(parallelWriter, "INSERT INTO " + tableName + " VALUES ('a0', 'b0', 'c0')", 1);
        assertUpdate(parallelWriter, "INSERT INTO " + tableName + " VALUES ('a1', 'b1', 'c1')", 1);

        assertQuery("SELECT * from " + tableName, "VALUES ('a0', 'b0', 'c0'), ('a1', 'b1', 'c1')");

        assertUpdate(session, "DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testBucketedTable()
    {
        // go through all storage formats to make sure the empty buckets are correctly created
        testWithAllStorageFormats(this::testBucketedTable);
    }

    private void testBucketedTable(Session session, HiveStorageFormat storageFormat)
    {
        testBucketedTable(session, storageFormat, true);
        testBucketedTable(session, storageFormat, false);
    }

    private void testBucketedTable(Session session, HiveStorageFormat storageFormat, boolean createEmpty)
    {
        String tableName = "test_bucketed_table";
        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "bucketed_by = ARRAY[ 'bucket_key' ], " +
                "bucket_count = 11 " +
                ") " +
                "AS " +
                "SELECT * " +
                "FROM (" +
                "VALUES " +
                " (VARCHAR 'a', VARCHAR 'b', VARCHAR 'c'), " +
                " ('aa', 'bb', 'cc'), " +
                " ('aaa', 'bbb', 'ccc')" +
                ") t (bucket_key, col_1, col_2)";
        // make sure that we will get one file per bucket regardless of writer count configured
        Session parallelWriter = Session.builder(getParallelWriteSession())
                .setCatalogSessionProperty(catalog, "create_empty_bucket_files", String.valueOf(createEmpty))
                .build();
        assertUpdate(parallelWriter, createTable, 3);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);

        assertNull(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("bucket_key"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11);

        assertQuery("SELECT * from " + tableName, "VALUES ('a', 'b', 'c'), ('aa', 'bb', 'cc'), ('aaa', 'bbb', 'ccc')");

        assertUpdate(
                parallelWriter,
                "INSERT INTO " + tableName + " VALUES ('a0', 'b0', 'c0')",
                1,
                // buckets should be repartitioned locally hence local repartitioned exchange should exist in plan
                assertLocalRepartitionedExchangesCount(1));
        assertUpdate(parallelWriter, "INSERT INTO " + tableName + " VALUES ('a1', 'b1', 'c1')", 1);

        assertQuery("SELECT * from " + tableName, "VALUES ('a', 'b', 'c'), ('aa', 'bb', 'cc'), ('aaa', 'bbb', 'ccc'), ('a0', 'b0', 'c0'), ('a1', 'b1', 'c1')");

        assertUpdate(session, "DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    /**
     * Regression test for https://github.com/trinodb/trino/issues/5295
     */
    @Test
    public void testBucketedTableWithTimestampColumn()
    {
        String tableName = "test_bucketed_table_with_timestamp_" + randomTableSuffix();

        String createTable = "" +
                "CREATE TABLE " + tableName + " (" +
                " bucket_key integer, " +
                " a_timestamp timestamp(3) " +
                ")" +
                "WITH (" +
                " bucketed_by = ARRAY[ 'bucket_key' ], " +
                " bucket_count = 11 " +
                ") ";
        assertUpdate(createTable);

        assertQuery(
                "DESCRIBE " + tableName,
                "VALUES " +
                        "('bucket_key', 'integer', '', ''), " +
                        "('a_timestamp', 'timestamp(3)', '', '')");
        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testCreatePartitionedBucketedTableAsFewRows()
    {
        // go through all storage formats to make sure the empty buckets are correctly created
        testWithAllStorageFormats(this::testCreatePartitionedBucketedTableAsFewRows);
    }

    private void testCreatePartitionedBucketedTableAsFewRows(Session session, HiveStorageFormat storageFormat)
    {
        testCreatePartitionedBucketedTableAsFewRows(session, storageFormat, true);
        testCreatePartitionedBucketedTableAsFewRows(session, storageFormat, false);
    }

    private void testCreatePartitionedBucketedTableAsFewRows(Session session, HiveStorageFormat storageFormat, boolean createEmpty)
    {
        String tableName = "test_create_partitioned_bucketed_table_as_few_rows";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'partition_key' ], " +
                "bucketed_by = ARRAY[ 'bucket_key' ], " +
                "bucket_count = 11 " +
                ") " +
                "AS " +
                "SELECT * " +
                "FROM (" +
                "VALUES " +
                " (VARCHAR 'a', VARCHAR 'b', VARCHAR 'c'), " +
                " ('aa', 'bb', 'cc'), " +
                " ('aaa', 'bbb', 'ccc')" +
                ") t(bucket_key, col, partition_key)";

        assertUpdate(
                // make sure that we will get one file per bucket regardless of writer count configured
                Session.builder(getParallelWriteSession())
                        .setCatalogSessionProperty(catalog, "create_empty_bucket_files", String.valueOf(createEmpty))
                        .build(),
                createTable,
                3);

        verifyPartitionedBucketedTableAsFewRows(storageFormat, tableName);

        assertUpdate(session, "DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testCreatePartitionedBucketedTableAs()
    {
        testCreatePartitionedBucketedTableAs(HiveStorageFormat.RCBINARY);
    }

    private void testCreatePartitionedBucketedTableAs(HiveStorageFormat storageFormat)
    {
        String tableName = "test_create_partitioned_bucketed_table_as";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'orderstatus' ], " +
                "bucketed_by = ARRAY[ 'custkey', 'custkey2' ], " +
                "bucket_count = 11 " +
                ") " +
                "AS " +
                "SELECT custkey, custkey AS custkey2, comment, orderstatus " +
                "FROM tpch.tiny.orders";

        assertUpdate(
                // make sure that we will get one file per bucket regardless of writer count configured
                getParallelWriteSession(),
                createTable,
                "SELECT count(*) FROM orders");

        verifyPartitionedBucketedTable(storageFormat, tableName);

        assertUpdate("DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(getSession(), tableName));
    }

    @Test
    public void testCreatePartitionedBucketedTableWithNullsAs()
    {
        testCreatePartitionedBucketedTableWithNullsAs(HiveStorageFormat.RCBINARY);
    }

    private void testCreatePartitionedBucketedTableWithNullsAs(HiveStorageFormat storageFormat)
    {
        String tableName = "test_create_partitioned_bucketed_table_with_nulls_as";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'orderpriority_nulls', 'orderstatus' ], " +
                "bucketed_by = ARRAY[ 'custkey', 'orderkey' ], " +
                "bucket_count = 4 " +
                ") " +
                "AS " +
                "SELECT custkey, orderkey, comment, nullif(orderpriority, '1-URGENT') orderpriority_nulls, orderstatus " +
                "FROM tpch.tiny.orders";

        assertUpdate(
                getParallelWriteSession(),
                createTable,
                "SELECT count(*) FROM orders");

        // verify that we create bucket_count files in each partition
        assertEqualsIgnoreOrder(
                computeActual(format("SELECT orderpriority_nulls, orderstatus, COUNT(DISTINCT \"$path\") FROM %s GROUP BY 1, 2", tableName)),
                resultBuilder(getSession(), createVarcharType(1), BIGINT)
                        .row(null, "F", 4L)
                        .row(null, "O", 4L)
                        .row(null, "P", 4L)
                        .row("2-HIGH", "F", 4L)
                        .row("2-HIGH", "O", 4L)
                        .row("2-HIGH", "P", 4L)
                        .row("3-MEDIUM", "F", 4L)
                        .row("3-MEDIUM", "O", 4L)
                        .row("3-MEDIUM", "P", 4L)
                        .row("4-NOT SPECIFIED", "F", 4L)
                        .row("4-NOT SPECIFIED", "O", 4L)
                        .row("4-NOT SPECIFIED", "P", 4L)
                        .row("5-LOW", "F", 4L)
                        .row("5-LOW", "O", 4L)
                        .row("5-LOW", "P", 4L)
                        .build());

        assertQuery("SELECT * FROM " + tableName, "SELECT custkey, orderkey, comment, nullif(orderpriority, '1-URGENT') orderpriority_nulls, orderstatus FROM orders");

        assertUpdate("DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(getSession(), tableName));
    }

    @Test
    public void testInsertIntoPartitionedBucketedTableFromBucketedTable()
    {
        testInsertIntoPartitionedBucketedTableFromBucketedTable(HiveStorageFormat.RCBINARY);
    }

    private void testInsertIntoPartitionedBucketedTableFromBucketedTable(HiveStorageFormat storageFormat)
    {
        String sourceTable = "test_insert_partitioned_bucketed_table_source";
        String targetTable = "test_insert_partitioned_bucketed_table_target";
        try {
            @Language("SQL") String createSourceTable = "" +
                    "CREATE TABLE " + sourceTable + " " +
                    "WITH (" +
                    "format = '" + storageFormat + "', " +
                    "bucketed_by = ARRAY[ 'custkey' ], " +
                    "bucket_count = 10 " +
                    ") " +
                    "AS " +
                    "SELECT custkey, comment, orderstatus " +
                    "FROM tpch.tiny.orders";
            @Language("SQL") String createTargetTable = "" +
                    "CREATE TABLE " + targetTable + " " +
                    "WITH (" +
                    "format = '" + storageFormat + "', " +
                    "partitioned_by = ARRAY[ 'orderstatus' ], " +
                    "bucketed_by = ARRAY[ 'custkey' ], " +
                    "bucket_count = 10 " +
                    ") " +
                    "AS " +
                    "SELECT custkey, comment, orderstatus " +
                    "FROM tpch.tiny.orders";

            assertUpdate(getParallelWriteSession(), createSourceTable, "SELECT count(*) FROM orders");
            assertUpdate(getParallelWriteSession(), createTargetTable, "SELECT count(*) FROM orders");

            transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl()).execute(
                    getParallelWriteSession(),
                    transactionalSession -> {
                        assertUpdate(
                                transactionalSession,
                                "INSERT INTO " + targetTable + " SELECT * FROM " + sourceTable,
                                15000,
                                // there should be two remote exchanges, one below TableWriter and one below TableCommit
                                assertRemoteExchangesCount(transactionalSession, 2));
                    });
        }
        finally {
            assertUpdate("DROP TABLE IF EXISTS " + sourceTable);
            assertUpdate("DROP TABLE IF EXISTS " + targetTable);
        }
    }

    @Test
    public void testCreatePartitionedBucketedTableAsWithUnionAll()
    {
        testCreatePartitionedBucketedTableAsWithUnionAll(HiveStorageFormat.RCBINARY);
    }

    private void testCreatePartitionedBucketedTableAsWithUnionAll(HiveStorageFormat storageFormat)
    {
        String tableName = "test_create_partitioned_bucketed_table_as_with_union_all";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'orderstatus' ], " +
                "bucketed_by = ARRAY[ 'custkey', 'custkey2' ], " +
                "bucket_count = 11 " +
                ") " +
                "AS " +
                "SELECT custkey, custkey AS custkey2, comment, orderstatus " +
                "FROM tpch.tiny.orders " +
                "WHERE length(comment) % 2 = 0 " +
                "UNION ALL " +
                "SELECT custkey, custkey AS custkey2, comment, orderstatus " +
                "FROM tpch.tiny.orders " +
                "WHERE length(comment) % 2 = 1";

        assertUpdate(
                // make sure that we will get one file per bucket regardless of writer count configured
                getParallelWriteSession(),
                createTable,
                "SELECT count(*) FROM orders");

        verifyPartitionedBucketedTable(storageFormat, tableName);

        assertUpdate("DROP TABLE " + tableName);
        assertFalse(getQueryRunner().tableExists(getSession(), tableName));
    }

    private void verifyPartitionedBucketedTable(HiveStorageFormat storageFormat, String tableName)
    {
        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("orderstatus"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("custkey", "custkey2"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11);

        List<?> partitions = getPartitions(tableName);
        assertEquals(partitions.size(), 3);

        // verify that we create bucket_count files in each partition
        assertEqualsIgnoreOrder(
                computeActual(format("SELECT orderstatus, COUNT(DISTINCT \"$path\") FROM %s GROUP BY 1", tableName)),
                resultBuilder(getSession(), createVarcharType(1), BIGINT)
                        .row("F", 11L)
                        .row("O", 11L)
                        .row("P", 11L)
                        .build());

        assertQuery("SELECT * FROM " + tableName, "SELECT custkey, custkey, comment, orderstatus FROM orders");

        for (int i = 1; i <= 30; i++) {
            assertQuery(
                    format("SELECT * FROM %s WHERE custkey = %d AND custkey2 = %d", tableName, i, i),
                    format("SELECT custkey, custkey, comment, orderstatus FROM orders WHERE custkey = %d", i));
        }
    }

    @Test
    public void testCreateInvalidBucketedTable()
    {
        testCreateInvalidBucketedTable(HiveStorageFormat.RCBINARY);
    }

    private void testCreateInvalidBucketedTable(HiveStorageFormat storageFormat)
    {
        String tableName = "test_create_invalid_bucketed_table";

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " (" +
                " a BIGINT," +
                " b DOUBLE," +
                " p VARCHAR" +
                ") WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'p' ], " +
                "bucketed_by = ARRAY[ 'a', 'c' ], " +
                "bucket_count = 11 " +
                ")"))
                .hasMessage("Bucketing columns [c] not present in schema");

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " (" +
                " a BIGINT," +
                " b DOUBLE," +
                " p VARCHAR" +
                ") WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'p' ], " +
                "bucketed_by = ARRAY[ 'a' ], " +
                "bucket_count = 11, " +
                "sorted_by = ARRAY[ 'c' ] " +
                ")"))
                .hasMessage("Sorting columns [c] not present in schema");

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " (" +
                " a BIGINT," +
                " p VARCHAR" +
                ") WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'p' ], " +
                "bucketed_by = ARRAY[ 'p' ], " +
                "bucket_count = 11 " +
                ")"))
                .hasMessage("Bucketing columns [p] are also used as partitioning columns");

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " (" +
                " a BIGINT," +
                " p VARCHAR" +
                ") WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'p' ], " +
                "bucketed_by = ARRAY[ 'a' ], " +
                "bucket_count = 11, " +
                "sorted_by = ARRAY[ 'p' ] " +
                ")"))
                .hasMessage("Sorting columns [p] are also used as partitioning columns");

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'orderstatus' ], " +
                "bucketed_by = ARRAY[ 'custkey', 'custkey3' ], " +
                "bucket_count = 11 " +
                ") " +
                "AS " +
                "SELECT custkey, custkey AS custkey2, comment, orderstatus " +
                "FROM tpch.tiny.orders"))
                .hasMessage("Bucketing columns [custkey3] not present in schema");

        assertThatThrownBy(() -> computeActual("" +
                "CREATE TABLE " + tableName + " " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'orderstatus' ], " +
                "bucketed_by = ARRAY[ 'custkey' ], " +
                "bucket_count = 11, " +
                "sorted_by = ARRAY[ 'custkey3' ] " +
                ") " +
                "AS " +
                "SELECT custkey, custkey AS custkey2, comment, orderstatus " +
                "FROM tpch.tiny.orders"))
                .hasMessage("Sorting columns [custkey3] not present in schema");

        assertFalse(getQueryRunner().tableExists(getSession(), tableName));
    }

    @Test
    public void testCreatePartitionedUnionAll()
    {
        assertUpdate("CREATE TABLE test_create_partitioned_union_all (a varchar, ds varchar) WITH (partitioned_by = ARRAY['ds'])");
        assertUpdate("INSERT INTO test_create_partitioned_union_all SELECT 'a', '2013-05-17' UNION ALL SELECT 'b', '2013-05-17'", 2);
        assertUpdate("DROP TABLE test_create_partitioned_union_all");
    }

    @Test
    public void testInsertPartitionedBucketedTableFewRows()
    {
        // go through all storage formats to make sure the empty buckets are correctly created
        testWithAllStorageFormats(this::testInsertPartitionedBucketedTableFewRows);
    }

    private void testInsertPartitionedBucketedTableFewRows(Session session, HiveStorageFormat storageFormat)
    {
        String tableName = "test_insert_partitioned_bucketed_table_few_rows";
        assertUpdate(session, "" +
                "CREATE TABLE " + tableName + " (" +
                " bucket_key varchar," +
                " col varchar," +
                " partition_key varchar)" +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'partition_key' ], " +
                "bucketed_by = ARRAY[ 'bucket_key' ], " +
                "bucket_count = 11)");

        assertUpdate(
                // make sure that we will get one file per bucket regardless of writer count configured
                getParallelWriteSession(),
                "INSERT INTO " + tableName + " " +
                        "VALUES " +
                        " (VARCHAR 'a', VARCHAR 'b', VARCHAR 'c'), " +
                        " ('aa', 'bb', 'cc'), " +
                        " ('aaa', 'bbb', 'ccc')",
                3);

        verifyPartitionedBucketedTableAsFewRows(storageFormat, tableName);

        assertUpdate(session, "DROP TABLE test_insert_partitioned_bucketed_table_few_rows");
        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    private void verifyPartitionedBucketedTableAsFewRows(HiveStorageFormat storageFormat, String tableName)
    {
        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("partition_key"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("bucket_key"));
        assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11);

        List<?> partitions = getPartitions(tableName);
        assertEquals(partitions.size(), 3);

        MaterializedResult actual = computeActual("SELECT * FROM " + tableName);
        MaterializedResult expected = resultBuilder(getSession(), canonicalizeType(createUnboundedVarcharType()), canonicalizeType(createUnboundedVarcharType()), canonicalizeType(createUnboundedVarcharType()))
                .row("a", "b", "c")
                .row("aa", "bb", "cc")
                .row("aaa", "bbb", "ccc")
                .build();
        assertEqualsIgnoreOrder(actual.getMaterializedRows(), expected.getMaterializedRows());
    }

    @Test
    public void testCastNullToColumnTypes()
    {
        String
        tableName = "test_cast_null_to_column_types";

        assertUpdate("" +
                "CREATE TABLE " + tableName + " (" +
                " col1 bigint," +
                " col2 map(bigint, bigint)," +
                " partition_key varchar)" +
                "WITH (" +
                " format = 'ORC', " +
                " partitioned_by = ARRAY[ 'partition_key' ] " +
                ")");

        assertUpdate(format("INSERT INTO %s (col1) VALUES (1), (2), (3)", tableName), 3);

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testCreateEmptyNonBucketedPartition()
    {
        String tableName = "test_insert_empty_partitioned_unbucketed_table";
        assertUpdate("" +
                "CREATE TABLE " + tableName + " (" +
                " dummy_col bigint," +
                " part varchar)" +
                "WITH (" +
                " format = 'ORC', " +
                " partitioned_by = ARRAY[ 'part' ] " +
                ")");
        assertQuery(format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT 0");

        assertAccessDenied(
                format("CALL system.create_empty_partition('%s', '%s', ARRAY['part'], ARRAY['%s'])", TPCH_SCHEMA, tableName, "empty"),
                format("Cannot insert into table hive.tpch.%s", tableName),
                privilege(tableName, INSERT_TABLE));

        // create an empty partition
        assertUpdate(format("CALL system.create_empty_partition('%s', '%s', ARRAY['part'], ARRAY['%s'])", TPCH_SCHEMA, tableName, "empty"));
        assertQuery(format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT 1");
        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testUnregisterRegisterPartition()
    {
        String tableName = "test_register_partition_for_table";
        assertUpdate("" +
                "CREATE TABLE " + tableName + " (" +
                " dummy_col bigint," +
                " part varchar)" +
                "WITH (" +
                " partitioned_by = ARRAY['part'] " +
                ")");
        assertQuery(format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT 0");

        assertUpdate(format("INSERT INTO %s (dummy_col, part) VALUES (1, 'first'), (2, 'second'), (3, 'third')", tableName), 3);
        List<MaterializedRow> paths = getQueryRunner().execute(getSession(), "SELECT \"$path\" FROM " + tableName + " ORDER BY \"$path\" ASC").toTestTypes().getMaterializedRows();
        assertEquals(paths.size(), 3);

        String firstPartition =
                new Path((String) paths.get(0).getField(0)).getParent().toString();

        assertAccessDenied(
                format("CALL system.unregister_partition('%s', '%s', ARRAY['part'], ARRAY['first'])", TPCH_SCHEMA, tableName),
                format("Cannot delete from table hive.tpch.%s", tableName),
                privilege(tableName, DELETE_TABLE));

        assertQueryFails(format("CALL system.unregister_partition('%s', '%s', ARRAY['part'], ARRAY['empty'])", TPCH_SCHEMA, tableName), "Partition 'part=empty' does not exist");
        assertUpdate(format("CALL system.unregister_partition('%s', '%s', ARRAY['part'], ARRAY['first'])", TPCH_SCHEMA, tableName));

        assertQuery(getSession(), format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT 2");
        assertQuery(getSession(), "SELECT count(*) FROM " + tableName, "SELECT 2");

        assertAccessDenied(
                format("CALL system.register_partition('%s', '%s', ARRAY['part'], ARRAY['first'])", TPCH_SCHEMA, tableName),
                format("Cannot insert into table hive.tpch.%s", tableName),
                privilege(tableName, INSERT_TABLE));

        assertUpdate(format("CALL system.register_partition('%s', '%s', ARRAY['part'], ARRAY['first'], '%s')", TPCH_SCHEMA, tableName, firstPartition));

        assertQuery(getSession(), format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT 3");
        assertQuery(getSession(), "SELECT count(*) FROM " + tableName, "SELECT 3");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testCreateEmptyBucketedPartition()
    {
        for (TestingHiveStorageFormat storageFormat : getAllTestingHiveStorageFormat()) {
            testCreateEmptyBucketedPartition(storageFormat.getFormat());
        }
    }

    private void testCreateEmptyBucketedPartition(HiveStorageFormat storageFormat)
    {
        String tableName = "test_insert_empty_partitioned_bucketed_table";
        createPartitionedBucketedTable(tableName, storageFormat);

        List<String> orderStatusList = ImmutableList.of("F", "O", "P");
        for (int i = 0; i < orderStatusList.size(); i++) {
            String sql = format("CALL system.create_empty_partition('%s', '%s', ARRAY['orderstatus'], ARRAY['%s'])", TPCH_SCHEMA,
tableName, orderStatusList.get(i)); assertUpdate(sql); assertQuery( format("SELECT count(*) FROM \"%s$partitions\"", tableName), "SELECT " + (i + 1)); assertQueryFails(sql, "Partition already exists.*"); } assertUpdate("DROP TABLE " + tableName); assertFalse(getQueryRunner().tableExists(getSession(), tableName)); } @Test public void testCreateEmptyPartitionOnNonExistingTable() { assertQueryFails( format("CALL system.create_empty_partition('%s', '%s', ARRAY['part'], ARRAY['%s'])", TPCH_SCHEMA, "non_existing_table", "empty"), format("Table '%s.%s' does not exist", TPCH_SCHEMA, "non_existing_table")); } @Test public void testInsertPartitionedBucketedTable() { testInsertPartitionedBucketedTable(HiveStorageFormat.RCBINARY); } private void testInsertPartitionedBucketedTable(HiveStorageFormat storageFormat) { String tableName = "test_insert_partitioned_bucketed_table"; createPartitionedBucketedTable(tableName, storageFormat); List<String> orderStatusList = ImmutableList.of("F", "O", "P"); for (int i = 0; i < orderStatusList.size(); i++) { String orderStatus = orderStatusList.get(i); assertUpdate( // make sure that we will get one file per bucket regardless of writer count configured getParallelWriteSession(), format( "INSERT INTO " + tableName + " " + "SELECT custkey, custkey AS custkey2, comment, orderstatus " + "FROM tpch.tiny.orders " + "WHERE orderstatus = '%s'", orderStatus), format("SELECT count(*) FROM orders WHERE orderstatus = '%s'", orderStatus)); } verifyPartitionedBucketedTable(storageFormat, tableName); assertUpdate("DROP TABLE " + tableName); assertFalse(getQueryRunner().tableExists(getSession(), tableName)); } private void createPartitionedBucketedTable(String tableName, HiveStorageFormat storageFormat) { assertUpdate("" + "CREATE TABLE " + tableName + " (" + " custkey bigint," + " custkey2 bigint," + " comment varchar," + " orderstatus varchar)" + "WITH (" + "format = '" + storageFormat + "', " + "partitioned_by = ARRAY[ 'orderstatus' ], " + "bucketed_by = 
ARRAY[ 'custkey', 'custkey2' ], " + "bucket_count = 11)"); } @Test public void testInsertPartitionedBucketedTableWithUnionAll() { testInsertPartitionedBucketedTableWithUnionAll(HiveStorageFormat.RCBINARY); } private void testInsertPartitionedBucketedTableWithUnionAll(HiveStorageFormat storageFormat) { String tableName = "test_insert_partitioned_bucketed_table_with_union_all"; assertUpdate("" + "CREATE TABLE " + tableName + " (" + " custkey bigint," + " custkey2 bigint," + " comment varchar," + " orderstatus varchar)" + "WITH (" + "format = '" + storageFormat + "', " + "partitioned_by = ARRAY[ 'orderstatus' ], " + "bucketed_by = ARRAY[ 'custkey', 'custkey2' ], " + "bucket_count = 11)"); List<String> orderStatusList = ImmutableList.of("F", "O", "P"); for (int i = 0; i < orderStatusList.size(); i++) { String orderStatus = orderStatusList.get(i); assertUpdate( // make sure that we will get one file per bucket regardless of writer count configured getParallelWriteSession(), format( "INSERT INTO " + tableName + " " + "SELECT custkey, custkey AS custkey2, comment, orderstatus " + "FROM tpch.tiny.orders " + "WHERE orderstatus = '%s' AND length(comment) %% 2 = 0 " + "UNION ALL " + "SELECT custkey, custkey AS custkey2, comment, orderstatus " + "FROM tpch.tiny.orders " + "WHERE orderstatus = '%s' AND length(comment) %% 2 = 1", orderStatus, orderStatus), format("SELECT count(*) FROM orders WHERE orderstatus = '%s'", orderStatus)); } verifyPartitionedBucketedTable(storageFormat, tableName); assertUpdate("DROP TABLE " + tableName); assertFalse(getQueryRunner().tableExists(getSession(), tableName)); } @Test public void testInsertTwiceToSamePartitionedBucket() { String tableName = "test_insert_twice_to_same_partitioned_bucket"; createPartitionedBucketedTable(tableName, HiveStorageFormat.RCBINARY); String insert = "INSERT INTO " + tableName + " VALUES (1, 1, 'first_comment', 'F'), (2, 2, 'second_comment', 'G')"; assertUpdate(insert, 2); assertUpdate(insert, 2); assertQuery( "SELECT 
custkey, custkey2, comment, orderstatus FROM " + tableName + " ORDER BY custkey", "VALUES (1, 1, 'first_comment', 'F'), (1, 1, 'first_comment', 'F'), (2, 2, 'second_comment', 'G'), (2, 2, 'second_comment', 'G')"); assertQuery( "SELECT custkey, custkey2, comment, orderstatus FROM " + tableName + " WHERE custkey = 1 and custkey2 = 1", "VALUES (1, 1, 'first_comment', 'F'), (1, 1, 'first_comment', 'F')"); assertUpdate("DROP TABLE " + tableName); } @Test @Override public void testInsert() { testWithAllStorageFormats(this::testInsert); } private void testInsert(Session session, HiveStorageFormat storageFormat) { @Language("SQL") String createTable = "" + "CREATE TABLE test_insert_format_table " + "(" + " _string VARCHAR," + " _varchar VARCHAR(65535)," + " _char CHAR(10)," + " _bigint BIGINT," + " _integer INTEGER," + " _smallint SMALLINT," + " _tinyint TINYINT," + " _real REAL," + " _double DOUBLE," + " _boolean BOOLEAN," + " _decimal_short DECIMAL(3,2)," + " _decimal_long DECIMAL(30,10)" + ") " + "WITH (format = '" + storageFormat + "') "; if (storageFormat == HiveStorageFormat.AVRO) { createTable = createTable.replace(" _smallint SMALLINT,", " _smallint INTEGER,"); createTable = createTable.replace(" _tinyint TINYINT,", " _tinyint INTEGER,"); } assertUpdate(session, createTable); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_insert_format_table"); assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat); assertColumnType(tableMetadata, "_string", createUnboundedVarcharType()); assertColumnType(tableMetadata, "_varchar", createVarcharType(65535)); assertColumnType(tableMetadata, "_char", createCharType(10)); @Language("SQL") String select = "SELECT" + " 'foo' _string" + ", 'bar' _varchar" + ", CAST('boo' AS CHAR(10)) _char" + ", 1 _bigint" + ", CAST(42 AS INTEGER) _integer" + ", CAST(43 AS SMALLINT) _smallint" + ", CAST(44 AS TINYINT) _tinyint" + ", CAST('123.45' AS REAL) _real" + ", CAST('3.14' 
AS DOUBLE) _double" + ", true _boolean" + ", CAST('3.14' AS DECIMAL(3,2)) _decimal_short" + ", CAST('12345678901234567890.0123456789' AS DECIMAL(30,10)) _decimal_long"; if (storageFormat == HiveStorageFormat.AVRO) { select = select.replace(" CAST (43 AS SMALLINT) _smallint,", " 3 _smallint,"); select = select.replace(" CAST (44 AS TINYINT) _tinyint,", " 4 _tinyint,"); } assertUpdate(session, "INSERT INTO test_insert_format_table " + select, 1); assertQuery(session, "SELECT * FROM test_insert_format_table", select); assertUpdate(session, "INSERT INTO test_insert_format_table (_tinyint, _smallint, _integer, _bigint, _real, _double) SELECT CAST(1 AS TINYINT), CAST(2 AS SMALLINT), 3, 4, cast(14.3E0 as REAL), 14.3E0", 1); assertQuery(session, "SELECT * FROM test_insert_format_table WHERE _bigint = 4", "SELECT null, null, null, 4, 3, 2, 1, 14.3, 14.3, null, null, null"); assertQuery(session, "SELECT * FROM test_insert_format_table WHERE _real = CAST(14.3 as REAL)", "SELECT null, null, null, 4, 3, 2, 1, 14.3, 14.3, null, null, null"); assertUpdate(session, "INSERT INTO test_insert_format_table (_double, _bigint) SELECT 2.72E0, 3", 1); assertQuery(session, "SELECT * FROM test_insert_format_table WHERE _double = CAST(2.72E0 as DOUBLE)", "SELECT null, null, null, 3, null, null, null, null, 2.72, null, null, null"); assertUpdate(session, "INSERT INTO test_insert_format_table (_decimal_short, _decimal_long) SELECT DECIMAL '2.72', DECIMAL '98765432101234567890.0123456789'", 1); assertQuery(session, "SELECT * FROM test_insert_format_table WHERE _decimal_long = DECIMAL '98765432101234567890.0123456789'", "SELECT null, null, null, null, null, null, null, null, null, null, 2.72, 98765432101234567890.0123456789"); assertUpdate(session, "DROP TABLE test_insert_format_table"); assertFalse(getQueryRunner().tableExists(session, "test_insert_format_table")); } @Test public void testInsertPartitionedTable() { testWithAllStorageFormats(this::testInsertPartitionedTable); } private void 
testInsertPartitionedTable(Session session, HiveStorageFormat storageFormat) { @Language("SQL") String createTable = "" + "CREATE TABLE test_insert_partitioned_table " + "(" + " ORDER_KEY BIGINT," + " SHIP_PRIORITY INTEGER," + " ORDER_STATUS VARCHAR" + ") " + "WITH (" + "format = '" + storageFormat + "', " + "partitioned_by = ARRAY[ 'SHIP_PRIORITY', 'ORDER_STATUS' ]" + ") "; assertUpdate(session, createTable); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_insert_partitioned_table"); assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat); assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("ship_priority", "order_status")); String partitionsTable = "\"test_insert_partitioned_table$partitions\""; assertQuery( session, "SELECT * FROM " + partitionsTable, "SELECT shippriority, orderstatus FROM orders LIMIT 0"); // Hive will reorder the partition keys, so we must insert into the table assuming the partition keys have been moved to the end assertUpdate( session, "" + "INSERT INTO test_insert_partitioned_table " + "SELECT orderkey, shippriority, orderstatus " + "FROM tpch.tiny.orders", "SELECT count(*) FROM orders"); // verify the partitions List<?> partitions = getPartitions("test_insert_partitioned_table"); assertEquals(partitions.size(), 3); assertQuery(session, "SELECT * FROM test_insert_partitioned_table", "SELECT orderkey, shippriority, orderstatus FROM orders"); assertQuery( session, "SELECT * FROM " + partitionsTable, "SELECT DISTINCT shippriority, orderstatus FROM orders"); assertQuery( session, "SELECT * FROM " + partitionsTable + " ORDER BY order_status LIMIT 2", "SELECT DISTINCT shippriority, orderstatus FROM orders ORDER BY orderstatus LIMIT 2"); assertQuery( session, "SELECT * FROM " + partitionsTable + " WHERE order_status = 'O'", "SELECT DISTINCT shippriority, orderstatus FROM orders WHERE orderstatus = 'O'"); 
        assertQueryFails(session, "SELECT * FROM " + partitionsTable + " WHERE no_such_column = 1", "line \\S*: Column 'no_such_column' cannot be resolved");
        assertQueryFails(session, "SELECT * FROM " + partitionsTable + " WHERE orderkey = 1", "line \\S*: Column 'orderkey' cannot be resolved");

        assertUpdate(session, "DROP TABLE test_insert_partitioned_table");

        assertFalse(getQueryRunner().tableExists(session, "test_insert_partitioned_table"));
    }

    @Test
    public void testInsertPartitionedTableExistingPartition()
    {
        testWithAllStorageFormats(this::testInsertPartitionedTableExistingPartition);
    }

    private void testInsertPartitionedTableExistingPartition(Session session, HiveStorageFormat storageFormat)
    {
        String tableName = "test_insert_partitioned_table_existing_partition";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(" +
                "  order_key BIGINT," +
                "  comment VARCHAR," +
                "  order_status VARCHAR" +
                ") " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'order_status' ]" +
                ") ";

        assertUpdate(session, createTable);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("order_status"));

        for (int i = 0; i < 3; i++) {
            assertUpdate(
                    session,
                    format(
                            "INSERT INTO " + tableName + " " +
                                    "SELECT orderkey, comment, orderstatus " +
                                    "FROM tpch.tiny.orders " +
                                    "WHERE orderkey %% 3 = %d",
                            i),
                    format("SELECT count(*) FROM orders WHERE orderkey %% 3 = %d", i));
        }

        // verify the partitions
        List<?> partitions = getPartitions(tableName);
        assertEquals(partitions.size(), 3);

        assertQuery(
                session,
                "SELECT * FROM " + tableName,
                "SELECT orderkey, comment, orderstatus FROM orders");

        assertUpdate(session, "DROP TABLE " + tableName);

        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testInsertPartitionedTableOverwriteExistingPartition()
    {
        testInsertPartitionedTableOverwriteExistingPartition(
                Session.builder(getSession())
                        .setCatalogSessionProperty(catalog, "insert_existing_partitions_behavior", "OVERWRITE")
                        .build(),
                HiveStorageFormat.ORC);
    }

    private void testInsertPartitionedTableOverwriteExistingPartition(Session session, HiveStorageFormat storageFormat)
    {
        String tableName = "test_insert_partitioned_table_overwrite_existing_partition";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(" +
                "  order_key BIGINT," +
                "  comment VARCHAR," +
                "  order_status VARCHAR" +
                ") " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'order_status' ]" +
                ") ";

        assertUpdate(session, createTable);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("order_status"));

        for (int i = 0; i < 3; i++) {
            assertUpdate(
                    session,
                    format(
                            "INSERT INTO " + tableName + " " +
                                    "SELECT orderkey, comment, orderstatus " +
                                    "FROM tpch.tiny.orders " +
                                    "WHERE orderkey %% 3 = %d",
                            i),
                    format("SELECT count(*) FROM orders WHERE orderkey %% 3 = %d", i));

            // verify the partitions
            List<?> partitions = getPartitions(tableName);
            assertEquals(partitions.size(), 3);

            assertQuery(
                    session,
                    "SELECT * FROM " + tableName,
                    format("SELECT orderkey, comment, orderstatus FROM orders WHERE orderkey %% 3 = %d", i));
        }

        assertUpdate(session, "DROP TABLE " + tableName);

        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testNullPartitionValues()
    {
        assertUpdate("" +
                "CREATE TABLE test_null_partition (test VARCHAR, part VARCHAR)\n" +
                "WITH (partitioned_by = ARRAY['part'])");
        assertUpdate("INSERT INTO test_null_partition VALUES ('hello', 'test'), ('world', null)", 2);

        assertQuery(
                "SELECT * FROM test_null_partition",
                "VALUES ('hello', 'test'), ('world', null)");
        assertQuery(
                "SELECT * FROM \"test_null_partition$partitions\"",
                "VALUES 'test', null");

        assertUpdate("DROP TABLE test_null_partition");
    }

    @Test
    @Override
    public void testInsertUnicode()
    {
        testWithAllStorageFormats(this::testInsertUnicode);
    }

    private void testInsertUnicode(Session session, HiveStorageFormat storageFormat)
    {
        assertUpdate(session, "DROP TABLE IF EXISTS test_insert_unicode");
        assertUpdate(session, "CREATE TABLE test_insert_unicode(test varchar) WITH (format = '" + storageFormat + "')");

        assertUpdate("INSERT INTO test_insert_unicode(test) VALUES 'Hello', U&'hello\\6d4B\\8Bd5\\+10FFFFworld\\7F16\\7801' ", 2);
        assertThat(computeActual("SELECT test FROM test_insert_unicode").getOnlyColumnAsSet())
                .containsExactlyInAnyOrder("Hello", "hello测试􏿿world编码");

        assertUpdate(session, "DELETE FROM test_insert_unicode");

        assertUpdate(session, "INSERT INTO test_insert_unicode(test) VALUES 'Hello', U&'hello\\6d4B\\8Bd5\\+10FFFFworld\\7F16\\7801' ", 2);
        assertThat(computeActual(session, "SELECT test FROM test_insert_unicode").getOnlyColumnAsSet())
                .containsExactlyInAnyOrder("Hello", "hello测试􏿿world编码");

        assertUpdate(session, "DELETE FROM test_insert_unicode");

        assertUpdate(session, "INSERT INTO test_insert_unicode(test) VALUES 'aa', 'bé'", 2);
        assertQuery(session, "SELECT test FROM test_insert_unicode", "VALUES 'aa', 'bé'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test = 'aa'", "VALUES 'aa'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test > 'ba'", "VALUES 'bé'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test < 'ba'", "VALUES 'aa'");
        assertQueryReturnsEmptyResult(session, "SELECT test FROM test_insert_unicode WHERE test = 'ba'");

        assertUpdate(session, "DELETE FROM test_insert_unicode");

        assertUpdate(session, "INSERT INTO test_insert_unicode(test) VALUES 'a', 'é'", 2);
        assertQuery(session, "SELECT test FROM test_insert_unicode", "VALUES 'a', 'é'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test = 'a'", "VALUES 'a'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test > 'b'", "VALUES 'é'");
        assertQuery(session, "SELECT test FROM test_insert_unicode WHERE test < 'b'", "VALUES 'a'");
        assertQueryReturnsEmptyResult(session, "SELECT test FROM test_insert_unicode WHERE test = 'b'");

        assertUpdate(session, "DROP TABLE test_insert_unicode");
    }

    @Test
    public void testPartitionPerScanLimit()
    {
        TestingHiveStorageFormat storageFormat = new TestingHiveStorageFormat(getSession(), HiveStorageFormat.ORC);
        testWithStorageFormat(storageFormat, this::testPartitionPerScanLimit);
    }

    private void testPartitionPerScanLimit(Session session, HiveStorageFormat storageFormat)
    {
        String tableName = "test_partition_per_scan_limit";
        String partitionsTable = "\"" + tableName + "$partitions\"";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(" +
                "  foo VARCHAR," +
                "  part BIGINT" +
                ") " +
                "WITH (" +
                "format = '" + storageFormat + "', " +
                "partitioned_by = ARRAY[ 'part' ]" +
                ") ";

        assertUpdate(session, createTable);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);
        assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("part"));

        // insert 1200 partitions
        for (int i = 0; i < 12; i++) {
            int partStart = i * 100;
            int partEnd = (i + 1) * 100 - 1;

            @Language("SQL") String insertPartitions = "" +
                    "INSERT INTO " + tableName + " " +
                    "SELECT 'bar' foo, part " +
                    "FROM UNNEST(SEQUENCE(" + partStart + ", " + partEnd + ")) AS TMP(part)";

            assertUpdate(session, insertPartitions, 100);
        }

        // we are not constrained by hive.max-partitions-per-scan when listing partitions
        assertQuery(
                session,
                "SELECT * FROM " + partitionsTable + " WHERE part > 490 AND part <= 500",
                "VALUES 491, 492, 493, 494, 495, 496, 497, 498, 499, 500");

        assertQuery(
                session,
                "SELECT * FROM " + partitionsTable + " WHERE part < 0",
                "SELECT null WHERE false");

        assertQuery(
                session,
                "SELECT * FROM " + partitionsTable,
                "VALUES " + LongStream.range(0, 1200)
                        .mapToObj(String::valueOf)
                        .collect(joining(",")));

        // verify we can query 1000 partitions
        assertQuery(
                session,
                "SELECT count(foo) FROM " + tableName + " WHERE part < 1000",
                "SELECT 1000");

        // verify the remaining 200 partitions were successfully inserted
        assertQuery(
                session,
                "SELECT count(foo) FROM " + tableName + " WHERE part >= 1000 AND part < 1200",
                "SELECT 200");

        // verify we cannot query more than 1000 partitions
        assertQueryFails(
                session,
                "SELECT * FROM " + tableName + " WHERE part < 1001",
                format("Query over table 'tpch.%s' can potentially read more than 1000 partitions", tableName));

        // verify we cannot query all partitions
        assertQueryFails(
                session,
                "SELECT * FROM " + tableName,
                format("Query over table 'tpch.%s' can potentially read more than 1000 partitions", tableName));

        assertUpdate(session, "DROP TABLE " + tableName);

        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testShowColumnsFromPartitions()
    {
        String tableName = "test_show_columns_from_partitions";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(" +
                "  foo VARCHAR," +
                "  part1 BIGINT," +
                "  part2 VARCHAR" +
                ") " +
                "WITH (" +
                "partitioned_by = ARRAY[ 'part1', 'part2' ]" +
                ") ";

        assertUpdate(getSession(), createTable);

        assertQuery(
                getSession(),
                "SHOW COLUMNS FROM \"" + tableName + "$partitions\"",
                "VALUES ('part1', 'bigint', '', ''), ('part2', 'varchar', '', '')");

        assertQueryFails(
                getSession(),
                "SHOW COLUMNS FROM \"$partitions\"",
                ".*Table '.*\\.tpch\\.\\$partitions' does not exist");

        assertQueryFails(
                getSession(),
                "SHOW COLUMNS FROM \"orders$partitions\"",
                ".*Table '.*\\.tpch\\.orders\\$partitions' does not exist");

        assertQueryFails(
                getSession(),
                "SHOW COLUMNS FROM \"blah$partitions\"",
                ".*Table '.*\\.tpch\\.blah\\$partitions' does not exist");
    }

    @Test
    public void testPartitionsTableInvalidAccess()
    {
        @Language("SQL") String createTable = "" +
                "CREATE TABLE test_partitions_invalid " +
                "(" +
                "  foo VARCHAR," +
                "  part1 BIGINT," +
                "  part2 VARCHAR" +
                ") " +
                "WITH (" +
                "partitioned_by = ARRAY[ 'part1', 'part2' ]" +
                ") ";

        assertUpdate(getSession(), createTable);

        assertQueryFails(
                getSession(),
                "SELECT * FROM \"test_partitions_invalid$partitions$partitions\"",
                ".*Table '.*\\.tpch\\.test_partitions_invalid\\$partitions\\$partitions' does not exist");

        assertQueryFails(
                getSession(),
                "SELECT * FROM \"non_existent$partitions\"",
                ".*Table '.*\\.tpch\\.non_existent\\$partitions' does not exist");
    }

    @Test
    public void testInsertUnpartitionedTable()
    {
        testWithAllStorageFormats(this::testInsertUnpartitionedTable);
    }

    private void testInsertUnpartitionedTable(Session session, HiveStorageFormat storageFormat)
    {
        String tableName = "test_insert_unpartitioned_table";

        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + " " +
                "(" +
                "  order_key BIGINT," +
                "  comment VARCHAR," +
                "  order_status VARCHAR" +
                ") " +
                "WITH (" +
                "format = '" + storageFormat + "'" +
                ") ";

        assertUpdate(session, createTable);

        TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, tableName);
        assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat);

        for (int i = 0; i < 3; i++) {
            assertUpdate(
                    session,
                    format(
                            "INSERT INTO " + tableName + " " +
                                    "SELECT orderkey, comment, orderstatus " +
                                    "FROM tpch.tiny.orders " +
                                    "WHERE orderkey %% 3 = %d",
                            i),
                    format("SELECT count(*) FROM orders WHERE orderkey %% 3 = %d", i));
        }

        assertQuery(
                session,
                "SELECT * FROM " + tableName,
                "SELECT orderkey, comment, orderstatus FROM orders");

        assertUpdate(session, "DROP TABLE " + tableName);

        assertFalse(getQueryRunner().tableExists(session, tableName));
    }

    @Test
    public void testDeleteFromUnpartitionedTable()
    {
        assertUpdate("CREATE TABLE test_delete_unpartitioned AS SELECT orderstatus FROM tpch.tiny.orders", "SELECT count(*) FROM orders");

        assertUpdate("DELETE FROM test_delete_unpartitioned");

        MaterializedResult result = computeActual("SELECT * FROM test_delete_unpartitioned");
        assertEquals(result.getRowCount(), 0);

        assertUpdate("DROP TABLE test_delete_unpartitioned");

        assertFalse(getQueryRunner().tableExists(getSession(), "test_delete_unpartitioned"));
    }

    @Test
    public void testMetadataDelete()
    {
        @Language("SQL") String createTable = "" +
                "CREATE TABLE test_metadata_delete " +
                "(" +
                "  ORDER_KEY BIGINT," +
                "  LINE_NUMBER INTEGER," +
                "  LINE_STATUS VARCHAR" +
                ") " +
                "WITH (" +
                PARTITIONED_BY_PROPERTY + " = ARRAY[ 'LINE_NUMBER', 'LINE_STATUS' ]" +
                ") ";

        assertUpdate(createTable);

        assertUpdate("" +
                        "INSERT INTO test_metadata_delete " +
                        "SELECT orderkey, linenumber, linestatus " +
                        "FROM tpch.tiny.lineitem",
                "SELECT count(*) FROM lineitem");

        // Delete returns the number of rows deleted, or null if obtaining the number is hard or impossible.
        // Currently, the Hive implementation always returns null.
        assertUpdate("DELETE FROM test_metadata_delete WHERE LINE_STATUS='F' AND LINE_NUMBER=CAST(3 AS INTEGER)");

        assertQuery("SELECT * FROM test_metadata_delete", "SELECT orderkey, linenumber, linestatus FROM lineitem WHERE linestatus<>'F' or linenumber<>3");

        assertUpdate("DELETE FROM test_metadata_delete WHERE LINE_STATUS='O'");

        assertQuery("SELECT * FROM test_metadata_delete", "SELECT orderkey, linenumber, linestatus FROM lineitem WHERE linestatus<>'O' AND linenumber<>3");

        assertThatThrownBy(() -> getQueryRunner().execute("DELETE FROM test_metadata_delete WHERE ORDER_KEY=1"))
                .isInstanceOf(RuntimeException.class)
                .hasMessage("Deletes must match whole partitions for non-transactional tables");

        assertQuery("SELECT * FROM test_metadata_delete", "SELECT orderkey, linenumber, linestatus FROM lineitem WHERE linestatus<>'O' AND linenumber<>3");

        assertUpdate("DROP TABLE test_metadata_delete");

        assertFalse(getQueryRunner().tableExists(getSession(), "test_metadata_delete"));
    }

    private TableMetadata getTableMetadata(String catalog, String schema, String tableName)
    {
        Session session = getSession();
        Metadata metadata = getDistributedQueryRunner().getCoordinator().getMetadata();

        return transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl())
                .readOnly()
                .execute(session, transactionSession -> {
                    Optional<TableHandle> tableHandle = metadata.getTableHandle(transactionSession, new QualifiedObjectName(catalog, schema, tableName));
                    assertTrue(tableHandle.isPresent());
                    return metadata.getTableMetadata(transactionSession, tableHandle.get());
                });
    }

    private Object getHiveTableProperty(String tableName, Function<HiveTableHandle, Object> propertyGetter)
    {
        Session session = getSession();
        Metadata metadata = getDistributedQueryRunner().getCoordinator().getMetadata();

        return transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl())
                .readOnly()
                .execute(session, transactionSession -> {
                    QualifiedObjectName name = new QualifiedObjectName(catalog, TPCH_SCHEMA, tableName);
                    TableHandle table = metadata.getTableHandle(transactionSession, name)
                            .orElseThrow(() -> new AssertionError("table not found: " + name));
                    table = metadata.applyFilter(transactionSession, table, Constraint.alwaysTrue())
                            .orElseThrow(() -> new AssertionError("applyFilter did not return a result"))
                            .getHandle();
                    return propertyGetter.apply((HiveTableHandle) table.getConnectorHandle());
                });
    }

    private List<?> getPartitions(String tableName)
    {
        return (List<?>) getHiveTableProperty(tableName, handle -> handle.getPartitions().get());
    }

    private int getBucketCount(String tableName)
    {
        return (int) getHiveTableProperty(tableName, table -> table.getBucketHandle().get().getTableBucketCount());
    }

    @Test
    public void testShowColumnsPartitionKey()
    {
        assertUpdate("" +
                "CREATE TABLE test_show_columns_partition_key\n" +
                "(grape bigint, orange bigint, pear varchar(65535), mango integer, lychee smallint, kiwi tinyint, apple varchar, pineapple varchar(65535))\n" +
                "WITH (partitioned_by = ARRAY['apple', 'pineapple'])");

        MaterializedResult actual = computeActual("SHOW COLUMNS FROM test_show_columns_partition_key");
        Type unboundedVarchar = canonicalizeType(VARCHAR);
        MaterializedResult expected = resultBuilder(getSession(), unboundedVarchar, unboundedVarchar, unboundedVarchar, unboundedVarchar)
                .row("grape", canonicalizeType(BIGINT).toString(), "", "")
                .row("orange", canonicalizeType(BIGINT).toString(), "", "")
                .row("pear", canonicalizeType(createVarcharType(65535)).toString(), "", "")
                .row("mango", canonicalizeType(INTEGER).toString(), "", "")
                .row("lychee", canonicalizeType(SMALLINT).toString(), "", "")
                .row("kiwi", canonicalizeType(TINYINT).toString(), "", "")
                .row("apple", canonicalizeType(VARCHAR).toString(), "partition key", "")
                .row("pineapple", canonicalizeType(createVarcharType(65535)).toString(), "partition key", "")
                .build();
        assertEquals(actual, expected);
    }

    // TODO: These should be moved to another class, when more connectors support arrays
    @Test
    public void testArrays()
    {
        assertUpdate("CREATE TABLE tmp_array1 AS SELECT ARRAY[1, 2, NULL] AS col", 1);
        assertQuery("SELECT col[2] FROM tmp_array1", "SELECT 2");
        assertQuery("SELECT col[3] FROM tmp_array1", "SELECT NULL");

        assertUpdate("CREATE TABLE tmp_array2 AS SELECT ARRAY[1.0E0, 2.5E0, 3.5E0] AS col", 1);
        assertQuery("SELECT col[2] FROM tmp_array2", "SELECT 2.5");

        assertUpdate("CREATE TABLE tmp_array3 AS SELECT ARRAY['puppies', 'kittens', NULL] AS col", 1);
        assertQuery("SELECT col[2] FROM tmp_array3", "SELECT 'kittens'");
        assertQuery("SELECT col[3] FROM tmp_array3", "SELECT NULL");

        assertUpdate("CREATE TABLE tmp_array4 AS SELECT ARRAY[TRUE, NULL] AS col", 1);
        assertQuery("SELECT col[1] FROM tmp_array4", "SELECT TRUE");
        assertQuery("SELECT col[2] FROM tmp_array4", "SELECT NULL");

        assertUpdate("CREATE TABLE tmp_array5 AS SELECT ARRAY[ARRAY[1, 2], NULL, ARRAY[3, 4]] AS col", 1);
        assertQuery("SELECT col[1][2] FROM tmp_array5", "SELECT 2");

        assertUpdate("CREATE TABLE tmp_array6 AS SELECT ARRAY[ARRAY['\"hi\"'], NULL, ARRAY['puppies']] AS col", 1);
        assertQuery("SELECT col[1][1] FROM tmp_array6", "SELECT '\"hi\"'");
        assertQuery("SELECT col[3][1] FROM tmp_array6", "SELECT 'puppies'");

        assertUpdate("CREATE TABLE tmp_array7 AS SELECT ARRAY[ARRAY[INTEGER'1', INTEGER'2'], NULL, ARRAY[INTEGER'3', INTEGER'4']] AS col", 1);
        assertQuery("SELECT col[1][2] FROM tmp_array7", "SELECT 2");

        assertUpdate("CREATE TABLE tmp_array8 AS SELECT ARRAY[ARRAY[SMALLINT'1', SMALLINT'2'], NULL, ARRAY[SMALLINT'3', SMALLINT'4']] AS col", 1);
        assertQuery("SELECT col[1][2] FROM tmp_array8", "SELECT 2");

        assertUpdate("CREATE TABLE tmp_array9 AS SELECT ARRAY[ARRAY[TINYINT'1', TINYINT'2'], NULL, ARRAY[TINYINT'3', TINYINT'4']] AS col", 1);
        assertQuery("SELECT col[1][2] FROM tmp_array9", "SELECT 2");

        assertUpdate("CREATE TABLE tmp_array10 AS SELECT ARRAY[ARRAY[DECIMAL '3.14']] AS col1, ARRAY[ARRAY[DECIMAL '12345678901234567890.0123456789']] AS col2", 1);
        assertQuery("SELECT col1[1][1] FROM tmp_array10", "SELECT 3.14");
        assertQuery("SELECT col2[1][1] FROM tmp_array10", "SELECT 12345678901234567890.0123456789");

        assertUpdate("CREATE TABLE tmp_array13 AS SELECT ARRAY[ARRAY[REAL'1.234', REAL'2.345'], NULL, ARRAY[REAL'3.456', REAL'4.567']] AS col", 1);
        assertQuery("SELECT col[1][2] FROM tmp_array13", "SELECT 2.345");
    }

    @Test(dataProvider = "timestampPrecision")
    public void testTemporalArrays(HiveTimestampPrecision timestampPrecision)
    {
        Session session = withTimestampPrecision(getSession(), timestampPrecision);
        assertUpdate("DROP TABLE IF EXISTS tmp_array11");
        assertUpdate("CREATE TABLE tmp_array11 AS SELECT ARRAY[DATE '2014-09-30'] AS col", 1);
        assertOneNotNullResult("SELECT col[1] FROM tmp_array11");
        assertUpdate("DROP TABLE IF EXISTS tmp_array12");
        assertUpdate("CREATE TABLE tmp_array12 AS SELECT ARRAY[TIMESTAMP '2001-08-22 03:04:05.321'] AS col", 1);
        assertOneNotNullResult(session, "SELECT col[1] FROM tmp_array12");
    }

    @Test(dataProvider = "timestampPrecision")
    public void testMaps(HiveTimestampPrecision timestampPrecision)
    {
        Session session = withTimestampPrecision(getSession(), timestampPrecision);
        assertUpdate("DROP TABLE IF EXISTS tmp_map1");
        assertUpdate("CREATE TABLE tmp_map1 AS SELECT MAP(ARRAY[0,1], ARRAY[2,NULL]) AS col", 1);
        assertQuery("SELECT col[0] FROM tmp_map1", "SELECT 2");
        assertQuery("SELECT col[1] FROM tmp_map1", "SELECT NULL");
        assertUpdate("DROP TABLE IF EXISTS tmp_map2");
        assertUpdate("CREATE TABLE tmp_map2 AS SELECT MAP(ARRAY[INTEGER'1'], ARRAY[INTEGER'2']) AS col", 1);
        assertQuery("SELECT col[INTEGER'1'] FROM tmp_map2", "SELECT 2");
        assertUpdate("DROP TABLE IF EXISTS tmp_map3");
        assertUpdate("CREATE TABLE tmp_map3 AS SELECT MAP(ARRAY[SMALLINT'1'], ARRAY[SMALLINT'2']) AS col", 1);
        assertQuery("SELECT col[SMALLINT'1'] FROM tmp_map3", "SELECT 2");
        assertUpdate("DROP TABLE IF EXISTS tmp_map4");
        assertUpdate("CREATE TABLE tmp_map4 AS SELECT MAP(ARRAY[TINYINT'1'], ARRAY[TINYINT'2']) AS col", 1);
        assertQuery("SELECT col[TINYINT'1'] FROM tmp_map4", "SELECT 2");
        assertUpdate("DROP TABLE IF EXISTS tmp_map5");
        assertUpdate("CREATE TABLE tmp_map5 AS SELECT MAP(ARRAY[1.0], ARRAY[2.5]) AS col", 1);
        assertQuery("SELECT col[1.0] FROM tmp_map5", "SELECT 2.5");
        assertUpdate("DROP TABLE IF EXISTS tmp_map6");
        assertUpdate("CREATE TABLE tmp_map6 AS SELECT MAP(ARRAY['puppies'], ARRAY['kittens']) AS col", 1);
        assertQuery("SELECT col['puppies'] FROM tmp_map6", "SELECT 'kittens'");
        assertUpdate("DROP TABLE IF EXISTS tmp_map7");
        assertUpdate("CREATE TABLE tmp_map7 AS SELECT MAP(ARRAY[TRUE], ARRAY[FALSE]) AS col", 1);
        assertQuery("SELECT col[TRUE] FROM tmp_map7", "SELECT FALSE");
        assertUpdate("DROP TABLE IF EXISTS tmp_map8");
        assertUpdate("CREATE TABLE tmp_map8 AS SELECT MAP(ARRAY[DATE '2014-09-30'], ARRAY[DATE '2014-09-29']) AS col", 1);
        assertOneNotNullResult("SELECT col[DATE '2014-09-30'] FROM tmp_map8");
        assertUpdate("DROP TABLE IF EXISTS tmp_map9");
        assertUpdate("CREATE TABLE tmp_map9 AS SELECT MAP(ARRAY[TIMESTAMP '2001-08-22 03:04:05.321'], ARRAY[TIMESTAMP '2001-08-22 03:04:05.321']) AS col", 1);
        assertOneNotNullResult(session, "SELECT col[TIMESTAMP '2001-08-22 03:04:05.321'] FROM tmp_map9");
        assertUpdate("DROP TABLE IF EXISTS tmp_map10");
        assertUpdate("CREATE TABLE tmp_map10 AS SELECT MAP(ARRAY[DECIMAL '3.14', DECIMAL '12345678901234567890.0123456789'], " +
                "ARRAY[DECIMAL '12345678901234567890.0123456789', DECIMAL '3.0123456789']) AS col", 1);
        assertQuery("SELECT col[DECIMAL '3.14'], col[DECIMAL '12345678901234567890.0123456789'] FROM tmp_map10", "SELECT 12345678901234567890.0123456789, 3.0123456789");
        assertUpdate("DROP TABLE IF EXISTS tmp_map11");
        assertUpdate("CREATE TABLE tmp_map11 AS SELECT MAP(ARRAY[REAL'1.234'], ARRAY[REAL'2.345']) AS col", 1);
        assertQuery("SELECT col[REAL'1.234'] FROM tmp_map11", "SELECT 2.345");
        assertUpdate("DROP TABLE IF EXISTS tmp_map12");
        assertUpdate("CREATE TABLE tmp_map12 AS SELECT MAP(ARRAY[1.0E0], ARRAY[ARRAY[1, 2]]) AS col", 1);
        assertQuery("SELECT col[1.0][2] FROM tmp_map12", "SELECT 2");
    }

    @Test
    public void testRowsWithAllFormats()
    {
        testWithAllStorageFormats(this::testRows);
    }

    private void testRows(Session session, HiveStorageFormat format)
    {
        String tableName = "test_dereferences";
        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName +
                " WITH (" +
                "format = '" + format + "'" +
                ") " +
                "AS SELECT " +
                "CAST(row(CAST(1 as BIGINT), CAST(NULL as BIGINT)) AS row(col0 bigint, col1 bigint)) AS a, " +
                "CAST(row(row(VARCHAR 'abc', CAST(5 as BIGINT)), CAST(3.0 AS DOUBLE)) AS row(field0 row(col0 varchar, col1 bigint), field1 double)) AS b";

        assertUpdate(session, createTable, 1);

        assertQuery(
                session,
                "SELECT a.col0, a.col1, b.field0.col0, b.field0.col1, b.field1 FROM " + tableName,
                "SELECT 1, cast(null as bigint), CAST('abc' AS varchar), CAST(5 as BIGINT), CAST(3.0 AS DOUBLE)");

        assertUpdate(session, "DROP TABLE " + tableName);
    }

    @Test
    public void testRowsWithNulls()
    {
        testRowsWithNulls(getSession(), HiveStorageFormat.ORC);
        testRowsWithNulls(getSession(), HiveStorageFormat.PARQUET);
    }

    private void testRowsWithNulls(Session session, HiveStorageFormat format)
    {
        String tableName = "test_dereferences_with_nulls";
        @Language("SQL") String createTable = "" +
                "CREATE TABLE " + tableName + "\n" +
                "(col0 BIGINT, col1 row(f0 BIGINT, f1 BIGINT), col2 row(f0 BIGINT, f1 ROW(f0 BIGINT, f1 BIGINT)))\n" +
                "WITH (format = '" + format + "')";

        assertUpdate(session, createTable);

        @Language("SQL") String insertTable = "" +
                "INSERT INTO " + tableName + " VALUES \n" +
                "row(1, row(2, 3), row(4, row(5, 6))),\n" +
                "row(7, row(8, 9), row(10, row(11, NULL))),\n" +
                "row(NULL, NULL, row(12, NULL)),\n" +
                "row(13, row(NULL, 14), NULL),\n" +
                "row(15, row(16, NULL), row(NULL, row(17, 18)))";

        assertUpdate(session, insertTable, 5);

        assertQuery(
                session,
                format("SELECT col0, col1.f0, col2.f1.f1 FROM %s", tableName),
                "SELECT * FROM \n" +
                        " (SELECT 1, 2, 6) UNION\n" +
                        " (SELECT 7, 8, NULL) UNION\n" +
                        " (SELECT NULL, NULL, NULL) UNION\n" +
                        " 
(SELECT 13, NULL, NULL) UNION\n" + " (SELECT 15, 16, 18)"); assertQuery(session, format("SELECT col0 FROM %s WHERE col2.f1.f1 IS NOT NULL", tableName), "SELECT * FROM UNNEST(array[1, 15])"); assertQuery(session, format("SELECT col0, col1.f0, col1.f1 FROM %s WHERE col2.f1.f1 = 18", tableName), "SELECT 15, 16, NULL"); assertUpdate(session, "DROP TABLE " + tableName); } @Test public void testComplex() { assertUpdate("CREATE TABLE tmp_complex1 AS SELECT " + "ARRAY [MAP(ARRAY['a', 'b'], ARRAY[2.0E0, 4.0E0]), MAP(ARRAY['c', 'd'], ARRAY[12.0E0, 14.0E0])] AS a", 1); assertQuery( "SELECT a[1]['a'], a[2]['d'] FROM tmp_complex1", "SELECT 2.0, 14.0"); } @Test public void testBucketedCatalog() { String bucketedCatalog = bucketedSession.getCatalog().get(); String bucketedSchema = bucketedSession.getSchema().get(); TableMetadata ordersTableMetadata = getTableMetadata(bucketedCatalog, bucketedSchema, "orders"); assertEquals(ordersTableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("custkey")); assertEquals(ordersTableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11); TableMetadata customerTableMetadata = getTableMetadata(bucketedCatalog, bucketedSchema, "customer"); assertEquals(customerTableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("custkey")); assertEquals(customerTableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 11); } @Test public void testBucketedExecution() { assertQuery(bucketedSession, "SELECT count(*) a FROM orders t1 JOIN orders t2 on t1.custkey=t2.custkey"); assertQuery(bucketedSession, "SELECT count(*) a FROM orders t1 JOIN customer t2 on t1.custkey=t2.custkey", "SELECT count(*) FROM orders"); assertQuery(bucketedSession, "SELECT count(distinct custkey) FROM orders"); assertQuery( Session.builder(bucketedSession).setSystemProperty("task_writer_count", "1").build(), "SELECT custkey, COUNT(*) FROM orders GROUP BY custkey"); assertQuery( 
Session.builder(bucketedSession).setSystemProperty("task_writer_count", "4").build(), "SELECT custkey, COUNT(*) FROM orders GROUP BY custkey"); } @Test public void testScaleWriters() { testWithAllStorageFormats(this::testSingleWriter); testWithAllStorageFormats(this::testMultipleWriters); } private void testSingleWriter(Session session, HiveStorageFormat storageFormat) { try { // small table that will only have one writer @Language("SQL") String createTableSql = format("" + "CREATE TABLE scale_writers_small WITH (format = '%s') AS " + "SELECT * FROM tpch.tiny.orders", storageFormat); assertUpdate( Session.builder(session) .setSystemProperty("scale_writers", "true") .setSystemProperty("writer_min_size", "32MB") .build(), createTableSql, (long) computeActual("SELECT count(*) FROM tpch.tiny.orders").getOnlyValue()); assertEquals(computeActual("SELECT count(DISTINCT \"$path\") FROM scale_writers_small").getOnlyValue(), 1L); } finally { assertUpdate("DROP TABLE IF EXISTS scale_writers_small"); } } private void testMultipleWriters(Session session, HiveStorageFormat storageFormat) { try { // large table that will scale writers to multiple machines @Language("SQL") String createTableSql = format("" + "CREATE TABLE scale_writers_large WITH (format = '%s') AS " + "SELECT * FROM tpch.sf1.orders", storageFormat); assertUpdate( Session.builder(session) .setSystemProperty("scale_writers", "true") .setSystemProperty("writer_min_size", "1MB") .setCatalogSessionProperty(catalog, "parquet_writer_block_size", "4MB") .build(), createTableSql, (long) computeActual("SELECT count(*) FROM tpch.sf1.orders").getOnlyValue()); long files = (long) computeScalar("SELECT count(DISTINCT \"$path\") FROM scale_writers_large"); long workers = (long) computeScalar("SELECT count(*) FROM system.runtime.nodes"); assertThat(files).isBetween(2L, workers); } finally { assertUpdate("DROP TABLE IF EXISTS scale_writers_large"); } } @Test public void testTableCommentsTable() { assertUpdate("CREATE TABLE 
test_comment (c1 bigint) COMMENT 'foo'"); String selectTableComment = format("" + "SELECT comment FROM system.metadata.table_comments " + "WHERE catalog_name = '%s' AND schema_name = '%s' AND table_name = 'test_comment'", getSession().getCatalog().get(), getSession().getSchema().get()); assertQuery(selectTableComment, "SELECT 'foo'"); assertUpdate("DROP TABLE IF EXISTS test_comment"); } @Test @Override public void testShowCreateTable() { assertThat(computeActual("SHOW CREATE TABLE orders").getOnlyValue()) .isEqualTo("CREATE TABLE hive.tpch.orders (\n" + " orderkey bigint,\n" + " custkey bigint,\n" + " orderstatus varchar(1),\n" + " totalprice double,\n" + " orderdate date,\n" + " orderpriority varchar(15),\n" + " clerk varchar(15),\n" + " shippriority integer,\n" + " comment varchar(79)\n" + ")\n" + "WITH (\n" + " format = 'ORC'\n" + ")"); String createTableSql = format("" + "CREATE TABLE %s.%s.%s (\n" + " c1 bigint,\n" + " c2 double,\n" + " \"c 3\" varchar,\n" + " \"c'4\" array(bigint),\n" + " c5 map(bigint, varchar)\n" + ")\n" + "WITH (\n" + " format = 'RCBINARY'\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get(), "test_show_create_table"); assertUpdate(createTableSql); MaterializedResult actualResult = computeActual("SHOW CREATE TABLE test_show_create_table"); assertEquals(getOnlyElement(actualResult.getOnlyColumnAsSet()), createTableSql); createTableSql = format("" + "CREATE TABLE %s.%s.%s (\n" + " c1 bigint,\n" + " \"c 2\" varchar,\n" + " \"c'3\" array(bigint),\n" + " c4 map(bigint, varchar) COMMENT 'comment test4',\n" + " c5 double COMMENT ''\n)\n" + "COMMENT 'test'\n" + "WITH (\n" + " bucket_count = 5,\n" + " bucketed_by = ARRAY['c1','c 2'],\n" + " bucketing_version = 1,\n" + " format = 'ORC',\n" + " orc_bloom_filter_columns = ARRAY['c1','c2'],\n" + " orc_bloom_filter_fpp = 7E-1,\n" + " partitioned_by = ARRAY['c5'],\n" + " sorted_by = ARRAY['c1','c 2 DESC'],\n" + " transactional = true\n" + ")", getSession().getCatalog().get(), 
getSession().getSchema().get(), "\"test_show_create_table'2\""); assertUpdate(createTableSql); actualResult = computeActual("SHOW CREATE TABLE \"test_show_create_table'2\""); assertEquals(getOnlyElement(actualResult.getOnlyColumnAsSet()), createTableSql); createTableSql = format("" + "CREATE TABLE %s.%s.%s (\n" + " c1 ROW(\"$a\" bigint, \"$b\" varchar)\n)\n" + "WITH (\n" + " format = 'ORC'\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get(), "test_show_create_table_with_special_characters"); assertUpdate(createTableSql); actualResult = computeActual("SHOW CREATE TABLE test_show_create_table_with_special_characters"); assertEquals(getOnlyElement(actualResult.getOnlyColumnAsSet()), createTableSql); } private void testCreateExternalTable( String tableName, String fileContents, String expectedResults, List<String> tableProperties) throws Exception { File tempDir = createTempDir(); File dataFile = new File(tempDir, "test.txt"); Files.asCharSink(dataFile, UTF_8).write(fileContents); // Table properties StringJoiner propertiesSql = new StringJoiner(",\n "); propertiesSql.add( format("external_location = '%s'", new Path(tempDir.toURI().toASCIIString()))); propertiesSql.add("format = 'TEXTFILE'"); tableProperties.forEach(propertiesSql::add); @Language("SQL") String createTableSql = format("" + "CREATE TABLE %s.%s.%s (\n" + " col1 varchar,\n" + " col2 varchar\n" + ")\n" + "WITH (\n" + " %s\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get(), tableName, propertiesSql); assertUpdate(createTableSql); MaterializedResult actual = computeActual(format("SHOW CREATE TABLE %s", tableName)); assertEquals(actual.getOnlyValue(), createTableSql); assertQuery(format("SELECT col1, col2 from %s", tableName), expectedResults); assertUpdate(format("DROP TABLE %s", tableName)); assertFile(dataFile); // file should still exist after drop deleteRecursively(tempDir.toPath(), ALLOW_INSECURE); } @Test public void testCreateExternalTable() throws Exception 
{ testCreateExternalTable( "test_create_external", "hello\u0001world\nbye\u0001world", "VALUES ('hello', 'world'), ('bye', 'world')", ImmutableList.of()); } @Test public void testCreateExternalTableWithFieldSeparator() throws Exception { testCreateExternalTable( "test_create_external_with_field_separator", "helloXworld\nbyeXworld", "VALUES ('hello', 'world'), ('bye', 'world')", ImmutableList.of("textfile_field_separator = 'X'")); } @Test public void testCreateExternalTableWithFieldSeparatorEscape() throws Exception { testCreateExternalTable( "test_create_external_text_file_with_field_separator_and_escape", "HelloEFFWorld\nByeEFFWorld", "VALUES ('HelloF', 'World'), ('ByeF', 'World')", ImmutableList.of( "textfile_field_separator = 'F'", "textfile_field_separator_escape = 'E'")); } @Test public void testCreateExternalTableWithNullFormat() throws Exception { testCreateExternalTable( "test_create_external_textfile_with_null_format", "hello\u0001NULL_VALUE\nNULL_VALUE\u0001123\n\\N\u0001456", "VALUES ('hello', NULL), (NULL, 123), ('\\N', 456)", ImmutableList.of("null_format = 'NULL_VALUE'")); } @Test public void testCreateExternalTableWithDataNotAllowed() throws IOException { File tempDir = createTempDir(); @Language("SQL") String createTableSql = format("" + "CREATE TABLE test_create_external_with_data_not_allowed " + "WITH (external_location = '%s') AS " + "SELECT * FROM tpch.tiny.nation", tempDir.toURI().toASCIIString()); assertQueryFails(createTableSql, "Writes to non-managed Hive tables is disabled"); deleteRecursively(tempDir.toPath(), ALLOW_INSECURE); } private void testCreateTableWithHeaderAndFooter(String format) { String name = format.toLowerCase(ENGLISH); String catalog = getSession().getCatalog().get(); String schema = getSession().getSchema().get(); @Language("SQL") String createTableSql = format("" + "CREATE TABLE %s.%s.%s_table_skip_header (\n" + " name varchar\n" + ")\n" + "WITH (\n" + " format = '%s',\n" + " skip_header_line_count = 1\n" + ")", catalog, 
schema, name, format); assertUpdate(createTableSql); MaterializedResult actual = computeActual(format("SHOW CREATE TABLE %s_table_skip_header", format)); assertEquals(actual.getOnlyValue(), createTableSql); assertUpdate(format("DROP TABLE %s_table_skip_header", format)); createTableSql = format("" + "CREATE TABLE %s.%s.%s_table_skip_footer (\n" + " name varchar\n" + ")\n" + "WITH (\n" + " format = '%s',\n" + " skip_footer_line_count = 1\n" + ")", catalog, schema, name, format); assertUpdate(createTableSql); actual = computeActual(format("SHOW CREATE TABLE %s_table_skip_footer", format)); assertEquals(actual.getOnlyValue(), createTableSql); assertUpdate(format("DROP TABLE %s_table_skip_footer", format)); createTableSql = format("" + "CREATE TABLE %s.%s.%s_table_skip_header_footer (\n" + " name varchar\n" + ")\n" + "WITH (\n" + " format = '%s',\n" + " skip_footer_line_count = 1,\n" + " skip_header_line_count = 1\n" + ")", catalog, schema, name, format); assertUpdate(createTableSql); actual = computeActual(format("SHOW CREATE TABLE %s_table_skip_header_footer", format)); assertEquals(actual.getOnlyValue(), createTableSql); assertUpdate(format("DROP TABLE %s_table_skip_header_footer", format)); createTableSql = format("" + "CREATE TABLE %s.%s.%s_table_skip_header " + "WITH (\n" + " format = '%s',\n" + " skip_header_line_count = 1\n" + ") AS SELECT CAST(1 AS VARCHAR) AS col_name1, CAST(2 AS VARCHAR) as col_name2", catalog, schema, name, format); assertUpdate(createTableSql, 1); assertUpdate(format("INSERT INTO %s.%s.%s_table_skip_header VALUES('3', '4')", catalog, schema, name), 1); MaterializedResult materializedRows = computeActual(format("SELECT * FROM %s_table_skip_header", name)); assertEqualsIgnoreOrder(materializedRows, resultBuilder(getSession(), VARCHAR, VARCHAR) .row("1", "2") .row("3", "4") .build() .getMaterializedRows()); assertUpdate(format("DROP TABLE %s_table_skip_header", format)); } @Test public void testCreateTableWithHeaderAndFooterForTextFile() { 
testCreateTableWithHeaderAndFooter("TEXTFILE"); } @Test public void testCreateTableWithHeaderAndFooterForCsv() { testCreateTableWithHeaderAndFooter("CSV"); } @Test public void testInsertTableWithHeaderAndFooterForCsv() { @Language("SQL") String createTableSql = format("" + "CREATE TABLE %s.%s.csv_table_skip_header (\n" + " name VARCHAR\n" + ")\n" + "WITH (\n" + " format = 'CSV',\n" + " skip_header_line_count = 2\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get()); assertUpdate(createTableSql); assertThatThrownBy(() -> assertUpdate( format("INSERT INTO %s.%s.csv_table_skip_header VALUES ('name')", getSession().getCatalog().get(), getSession().getSchema().get()))) .hasMessageMatching("Inserting into Hive table with value of skip.header.line.count property greater than 1 is not supported"); assertUpdate("DROP TABLE csv_table_skip_header"); createTableSql = format("" + "CREATE TABLE %s.%s.csv_table_skip_footer (\n" + " name VARCHAR\n" + ")\n" + "WITH (\n" + " format = 'CSV',\n" + " skip_footer_line_count = 1\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get()); assertUpdate(createTableSql); assertThatThrownBy(() -> assertUpdate( format("INSERT INTO %s.%s.csv_table_skip_footer VALUES ('name')", getSession().getCatalog().get(), getSession().getSchema().get()))) .hasMessageMatching("Inserting into Hive table with skip.footer.line.count property not supported"); createTableSql = format("" + "CREATE TABLE %s.%s.csv_table_skip_header_footer (\n" + " name VARCHAR\n" + ")\n" + "WITH (\n" + " format = 'CSV',\n" + " skip_footer_line_count = 1,\n" + " skip_header_line_count = 1\n" + ")", getSession().getCatalog().get(), getSession().getSchema().get()); assertUpdate(createTableSql); assertThatThrownBy(() -> assertUpdate( format("INSERT INTO %s.%s.csv_table_skip_header_footer VALUES ('name')", getSession().getCatalog().get(), getSession().getSchema().get()))) .hasMessageMatching("Inserting into Hive table with skip.footer.line.count 
property not supported"); assertUpdate("DROP TABLE csv_table_skip_header_footer"); } @Test public void testCreateTableWithInvalidProperties() { // ORC assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 bigint) WITH (format = 'TEXTFILE', orc_bloom_filter_columns = ARRAY['col1'])")) .hasMessageMatching("Cannot specify orc_bloom_filter_columns table property for storage format: TEXTFILE"); // TEXTFILE assertThatThrownBy(() -> assertUpdate("CREATE TABLE test_orc_skip_header (col1 bigint) WITH (format = 'ORC', skip_header_line_count = 1)")) .hasMessageMatching("Cannot specify skip_header_line_count table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE test_orc_skip_footer (col1 bigint) WITH (format = 'ORC', skip_footer_line_count = 1)")) .hasMessageMatching("Cannot specify skip_footer_line_count table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE test_orc_skip_footer (col1 bigint) WITH (format = 'ORC', null_format = 'ERROR')")) .hasMessageMatching("Cannot specify null_format table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE test_invalid_skip_header (col1 bigint) WITH (format = 'TEXTFILE', skip_header_line_count = -1)")) .hasMessageMatching("Invalid value for skip_header_line_count property: -1"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE test_invalid_skip_footer (col1 bigint) WITH (format = 'TEXTFILE', skip_footer_line_count = -1)")) .hasMessageMatching("Invalid value for skip_footer_line_count property: -1"); // CSV assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 bigint) WITH (format = 'ORC', csv_separator = 'S')")) .hasMessageMatching("Cannot specify csv_separator table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 varchar) WITH (format = 'CSV', csv_separator = 'SS')")) .hasMessageMatching("csv_separator must be a single 
character string, but was: 'SS'"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 bigint) WITH (format = 'ORC', csv_quote = 'Q')")) .hasMessageMatching("Cannot specify csv_quote table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 varchar) WITH (format = 'CSV', csv_quote = 'QQ')")) .hasMessageMatching("csv_quote must be a single character string, but was: 'QQ'"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 varchar) WITH (format = 'ORC', csv_escape = 'E')")) .hasMessageMatching("Cannot specify csv_escape table property for storage format: ORC"); assertThatThrownBy(() -> assertUpdate("CREATE TABLE invalid_table (col1 varchar) WITH (format = 'CSV', csv_escape = 'EE')")) .hasMessageMatching("csv_escape must be a single character string, but was: 'EE'"); } @Test public void testPathHiddenColumn() { testWithAllStorageFormats(this::testPathHiddenColumn); } private void testPathHiddenColumn(Session session, HiveStorageFormat storageFormat) { @Language("SQL") String createTable = "CREATE TABLE test_path " + "WITH (" + "format = '" + storageFormat + "'," + "partitioned_by = ARRAY['col1']" + ") AS " + "SELECT * FROM (VALUES " + "(0, 0), (3, 0), (6, 0), " + "(1, 1), (4, 1), (7, 1), " + "(2, 2), (5, 2) " + " ) t(col0, col1) "; assertUpdate(session, createTable, 8); assertTrue(getQueryRunner().tableExists(getSession(), "test_path")); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_path"); assertEquals(tableMetadata.getMetadata().getProperties().get(STORAGE_FORMAT_PROPERTY), storageFormat); List<String> columnNames = ImmutableList.of("col0", "col1", PATH_COLUMN_NAME, FILE_SIZE_COLUMN_NAME, FILE_MODIFIED_TIME_COLUMN_NAME, PARTITION_COLUMN_NAME); List<ColumnMetadata> columnMetadatas = tableMetadata.getColumns(); assertEquals(columnMetadatas.size(), columnNames.size()); for (int i = 0; i < columnMetadatas.size(); i++) { ColumnMetadata 
columnMetadata = columnMetadatas.get(i); assertEquals(columnMetadata.getName(), columnNames.get(i)); if (columnMetadata.getName().equals(PATH_COLUMN_NAME)) { // $path should be hidden column assertTrue(columnMetadata.isHidden()); } } assertEquals(getPartitions("test_path").size(), 3); MaterializedResult results = computeActual(session, format("SELECT *, \"%s\" FROM test_path", PATH_COLUMN_NAME)); Map<Integer, String> partitionPathMap = new HashMap<>(); for (int i = 0; i < results.getRowCount(); i++) { MaterializedRow row = results.getMaterializedRows().get(i); int col0 = (int) row.getField(0); int col1 = (int) row.getField(1); String pathName = (String) row.getField(2); String parentDirectory = new Path(pathName).getParent().toString(); assertTrue(pathName.length() > 0); assertEquals(col0 % 3, col1); if (partitionPathMap.containsKey(col1)) { // the rows in the same partition should be in the same partition directory assertEquals(partitionPathMap.get(col1), parentDirectory); } else { partitionPathMap.put(col1, parentDirectory); } } assertEquals(partitionPathMap.size(), 3); assertUpdate(session, "DROP TABLE test_path"); assertFalse(getQueryRunner().tableExists(session, "test_path")); } @Test public void testBucketHiddenColumn() { @Language("SQL") String createTable = "CREATE TABLE test_bucket_hidden_column " + "WITH (" + "bucketed_by = ARRAY['col0']," + "bucket_count = 2" + ") AS " + "SELECT * FROM (VALUES " + "(0, 11), (1, 12), (2, 13), " + "(3, 14), (4, 15), (5, 16), " + "(6, 17), (7, 18), (8, 19)" + " ) t (col0, col1) "; assertUpdate(createTable, 9); assertTrue(getQueryRunner().tableExists(getSession(), "test_bucket_hidden_column")); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_bucket_hidden_column"); assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKETED_BY_PROPERTY), ImmutableList.of("col0")); assertEquals(tableMetadata.getMetadata().getProperties().get(BUCKET_COUNT_PROPERTY), 2); List<String> columnNames = 
ImmutableList.of("col0", "col1", PATH_COLUMN_NAME, BUCKET_COLUMN_NAME, FILE_SIZE_COLUMN_NAME, FILE_MODIFIED_TIME_COLUMN_NAME); List<ColumnMetadata> columnMetadatas = tableMetadata.getColumns(); assertEquals(columnMetadatas.size(), columnNames.size()); for (int i = 0; i < columnMetadatas.size(); i++) { ColumnMetadata columnMetadata = columnMetadatas.get(i); assertEquals(columnMetadata.getName(), columnNames.get(i)); if (columnMetadata.getName().equals(BUCKET_COLUMN_NAME)) { // $bucket_number should be hidden column assertTrue(columnMetadata.isHidden()); } } assertEquals(getBucketCount("test_bucket_hidden_column"), 2); MaterializedResult results = computeActual(format("SELECT *, \"%1$s\" FROM test_bucket_hidden_column WHERE \"%1$s\" = 1", BUCKET_COLUMN_NAME)); for (int i = 0; i < results.getRowCount(); i++) { MaterializedRow row = results.getMaterializedRows().get(i); int col0 = (int) row.getField(0); int col1 = (int) row.getField(1); int bucket = (int) row.getField(2); assertEquals(col1, col0 + 11); assertTrue(col1 % 2 == 0); // Because Hive's hash function for integer n is h(n) = n. 
assertEquals(bucket, col0 % 2); } assertEquals(results.getRowCount(), 4); assertUpdate("DROP TABLE test_bucket_hidden_column"); assertFalse(getQueryRunner().tableExists(getSession(), "test_bucket_hidden_column")); } @Test public void testFileSizeHiddenColumn() { @Language("SQL") String createTable = "CREATE TABLE test_file_size " + "WITH (" + "partitioned_by = ARRAY['col1']" + ") AS " + "SELECT * FROM (VALUES " + "(0, 0), (3, 0), (6, 0), " + "(1, 1), (4, 1), (7, 1), " + "(2, 2), (5, 2) " + " ) t(col0, col1) "; assertUpdate(createTable, 8); assertTrue(getQueryRunner().tableExists(getSession(), "test_file_size")); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_file_size"); List<String> columnNames = ImmutableList.of("col0", "col1", PATH_COLUMN_NAME, FILE_SIZE_COLUMN_NAME, FILE_MODIFIED_TIME_COLUMN_NAME, PARTITION_COLUMN_NAME); List<ColumnMetadata> columnMetadatas = tableMetadata.getColumns(); assertEquals(columnMetadatas.size(), columnNames.size()); for (int i = 0; i < columnMetadatas.size(); i++) { ColumnMetadata columnMetadata = columnMetadatas.get(i); assertEquals(columnMetadata.getName(), columnNames.get(i)); if (columnMetadata.getName().equals(FILE_SIZE_COLUMN_NAME)) { assertTrue(columnMetadata.isHidden()); } } assertEquals(getPartitions("test_file_size").size(), 3); MaterializedResult results = computeActual(format("SELECT *, \"%s\" FROM test_file_size", FILE_SIZE_COLUMN_NAME)); Map<Integer, Long> fileSizeMap = new HashMap<>(); for (int i = 0; i < results.getRowCount(); i++) { MaterializedRow row = results.getMaterializedRows().get(i); int col0 = (int) row.getField(0); int col1 = (int) row.getField(1); long fileSize = (Long) row.getField(2); assertTrue(fileSize > 0); assertEquals(col0 % 3, col1); if (fileSizeMap.containsKey(col1)) { assertEquals(fileSizeMap.get(col1).longValue(), fileSize); } else { fileSizeMap.put(col1, fileSize); } } assertEquals(fileSizeMap.size(), 3); assertUpdate("DROP TABLE test_file_size"); } 
@Test(dataProvider = "timestampPrecision") public void testFileModifiedTimeHiddenColumn(HiveTimestampPrecision precision) { long testStartTime = Instant.now().toEpochMilli(); @Language("SQL") String createTable = "CREATE TABLE test_file_modified_time " + "WITH (" + "partitioned_by = ARRAY['col1']" + ") AS " + "SELECT * FROM (VALUES " + "(0, 0), (3, 0), (6, 0), " + "(1, 1), (4, 1), (7, 1), " + "(2, 2), (5, 2) " + " ) t(col0, col1) "; assertUpdate(createTable, 8); assertTrue(getQueryRunner().tableExists(getSession(), "test_file_modified_time")); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_file_modified_time"); List<String> columnNames = ImmutableList.of("col0", "col1", PATH_COLUMN_NAME, FILE_SIZE_COLUMN_NAME, FILE_MODIFIED_TIME_COLUMN_NAME, PARTITION_COLUMN_NAME); List<ColumnMetadata> columnMetadatas = tableMetadata.getColumns(); assertEquals(columnMetadatas.size(), columnNames.size()); for (int i = 0; i < columnMetadatas.size(); i++) { ColumnMetadata columnMetadata = columnMetadatas.get(i); assertEquals(columnMetadata.getName(), columnNames.get(i)); if (columnMetadata.getName().equals(FILE_MODIFIED_TIME_COLUMN_NAME)) { assertTrue(columnMetadata.isHidden()); } } assertEquals(getPartitions("test_file_modified_time").size(), 3); Session sessionWithTimestampPrecision = withTimestampPrecision(getSession(), precision); MaterializedResult results = computeActual( sessionWithTimestampPrecision, format("SELECT *, \"%s\" FROM test_file_modified_time", FILE_MODIFIED_TIME_COLUMN_NAME)); Map<Integer, Instant> fileModifiedTimeMap = new HashMap<>(); for (int i = 0; i < results.getRowCount(); i++) { MaterializedRow row = results.getMaterializedRows().get(i); int col0 = (int) row.getField(0); int col1 = (int) row.getField(1); Instant fileModifiedTime = ((ZonedDateTime) row.getField(2)).toInstant(); assertThat(fileModifiedTime.toEpochMilli()).isCloseTo(testStartTime, offset(2000L)); assertEquals(col0 % 3, col1); if 
(fileModifiedTimeMap.containsKey(col1)) { assertEquals(fileModifiedTimeMap.get(col1), fileModifiedTime); } else { fileModifiedTimeMap.put(col1, fileModifiedTime); } } assertEquals(fileModifiedTimeMap.size(), 3); assertUpdate("DROP TABLE test_file_modified_time"); } @Test public void testPartitionHiddenColumn() { @Language("SQL") String createTable = "CREATE TABLE test_partition_hidden_column " + "WITH (" + "partitioned_by = ARRAY['col1', 'col2']" + ") AS " + "SELECT * FROM (VALUES " + "(0, 11, 21), (1, 12, 22), (2, 13, 23), " + "(3, 14, 24), (4, 15, 25), (5, 16, 26), " + "(6, 17, 27), (7, 18, 28), (8, 19, 29)" + " ) t (col0, col1, col2) "; assertUpdate(createTable, 9); assertTrue(getQueryRunner().tableExists(getSession(), "test_partition_hidden_column")); TableMetadata tableMetadata = getTableMetadata(catalog, TPCH_SCHEMA, "test_partition_hidden_column"); assertEquals(tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY), ImmutableList.of("col1", "col2")); List<String> columnNames = ImmutableList.of("col0", "col1", "col2", PATH_COLUMN_NAME, FILE_SIZE_COLUMN_NAME, FILE_MODIFIED_TIME_COLUMN_NAME, PARTITION_COLUMN_NAME); List<ColumnMetadata> columnMetadatas = tableMetadata.getColumns(); assertEquals(columnMetadatas.size(), columnNames.size()); for (int i = 0; i < columnMetadatas.size(); i++) { ColumnMetadata columnMetadata = columnMetadatas.get(i); assertEquals(columnMetadata.getName(), columnNames.get(i)); if (columnMetadata.getName().equals(PARTITION_COLUMN_NAME)) { assertTrue(columnMetadata.isHidden()); } } assertEquals(getPartitions("test_partition_hidden_column").size(), 9); MaterializedResult results = computeActual(format("SELECT *, \"%s\" FROM test_partition_hidden_column", PARTITION_COLUMN_NAME)); for (MaterializedRow row : results.getMaterializedRows()) { String actualPartition = (String) row.getField(3); String expectedPartition = format("col1=%s/col2=%s", row.getField(1), row.getField(2)); assertEquals(actualPartition, 
expectedPartition); } assertEquals(results.getRowCount(), 9); assertUpdate("DROP TABLE test_partition_hidden_column"); } @Test public void testDeleteAndInsert() { Session session = getSession(); // Partition 1 is untouched // Partition 2 is altered (dropped and then added back) // Partition 3 is added // Partition 4 is dropped assertUpdate( session, "CREATE TABLE tmp_delete_insert WITH (partitioned_by=array ['z']) AS " + "SELECT * FROM (VALUES (CAST (101 AS BIGINT), CAST (1 AS BIGINT)), (201, 2), (202, 2), (401, 4), (402, 4), (403, 4)) t(a, z)", 6); List<MaterializedRow> expectedBefore = resultBuilder(session, BIGINT, BIGINT) .row(101L, 1L) .row(201L, 2L) .row(202L, 2L) .row(401L, 4L) .row(402L, 4L) .row(403L, 4L) .build() .getMaterializedRows(); List<MaterializedRow> expectedAfter = resultBuilder(session, BIGINT, BIGINT) .row(101L, 1L) .row(203L, 2L) .row(204L, 2L) .row(205L, 2L) .row(301L, 2L) .row(302L, 3L) .build() .getMaterializedRows(); try { transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl()) .execute(session, transactionSession -> { assertUpdate(transactionSession, "DELETE FROM tmp_delete_insert WHERE z >= 2"); assertUpdate(transactionSession, "INSERT INTO tmp_delete_insert VALUES (203, 2), (204, 2), (205, 2), (301, 2), (302, 3)", 5); MaterializedResult actualFromAnotherTransaction = computeActual(session, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualFromAnotherTransaction, expectedBefore); MaterializedResult actualFromCurrentTransaction = computeActual(transactionSession, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualFromCurrentTransaction, expectedAfter); rollback(); }); } catch (RollbackException e) { // ignore } MaterializedResult actualAfterRollback = computeActual(session, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualAfterRollback, expectedBefore); transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl()) 
.execute(session, transactionSession -> { assertUpdate(transactionSession, "DELETE FROM tmp_delete_insert WHERE z >= 2"); assertUpdate(transactionSession, "INSERT INTO tmp_delete_insert VALUES (203, 2), (204, 2), (205, 2), (301, 2), (302, 3)", 5); MaterializedResult actualOutOfTransaction = computeActual(session, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualOutOfTransaction, expectedBefore); MaterializedResult actualInTransaction = computeActual(transactionSession, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualInTransaction, expectedAfter); }); MaterializedResult actualAfterTransaction = computeActual(session, "SELECT * FROM tmp_delete_insert"); assertEqualsIgnoreOrder(actualAfterTransaction, expectedAfter); } @Test public void testCreateAndInsert() { Session session = getSession(); List<MaterializedRow> expected = resultBuilder(session, BIGINT, BIGINT) .row(101L, 1L) .row(201L, 2L) .row(202L, 2L) .row(301L, 3L) .row(302L, 3L) .build() .getMaterializedRows(); transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl()) .execute(session, transactionSession -> { assertUpdate( transactionSession, "CREATE TABLE tmp_create_insert WITH (partitioned_by=array ['z']) AS " + "SELECT * FROM (VALUES (CAST (101 AS BIGINT), CAST (1 AS BIGINT)), (201, 2), (202, 2)) t(a, z)", 3); assertUpdate(transactionSession, "INSERT INTO tmp_create_insert VALUES (301, 3), (302, 3)", 2); MaterializedResult actualFromCurrentTransaction = computeActual(transactionSession, "SELECT * FROM tmp_create_insert"); assertEqualsIgnoreOrder(actualFromCurrentTransaction, expected); }); MaterializedResult actualAfterTransaction = computeActual(session, "SELECT * FROM tmp_create_insert"); assertEqualsIgnoreOrder(actualAfterTransaction, expected); } @Test public void testRenameView() { assertUpdate("CREATE VIEW rename_view_original AS SELECT COUNT(*) as count FROM orders"); assertQuery("SELECT * FROM rename_view_original", "SELECT COUNT(*) 
FROM orders"); assertUpdate("CREATE SCHEMA view_rename"); assertUpdate("ALTER VIEW rename_view_original RENAME TO view_rename.rename_view_new"); assertQuery("SELECT * FROM view_rename.rename_view_new", "SELECT COUNT(*) FROM orders"); assertQueryFails("SELECT * FROM rename_view_original", ".*rename_view_original' does not exist"); assertUpdate("DROP VIEW view_rename.rename_view_new"); } @Test @Override public void testRenameColumn() { super.testRenameColumn(); // Additional tests for hive partition columns invariants @Language("SQL") String createTable = "" + "CREATE TABLE test_rename_column\n" + "WITH (\n" + " partitioned_by = ARRAY ['orderstatus']\n" + ")\n" + "AS\n" + "SELECT orderkey, orderstatus FROM orders"; assertUpdate(createTable, "SELECT count(*) FROM orders"); assertUpdate("ALTER TABLE test_rename_column RENAME COLUMN orderkey TO new_orderkey"); assertQuery("SELECT new_orderkey, orderstatus FROM test_rename_column", "SELECT orderkey, orderstatus FROM orders"); assertQueryFails("ALTER TABLE test_rename_column RENAME COLUMN \"$path\" TO test", ".* Cannot rename hidden column"); assertQueryFails("ALTER TABLE test_rename_column RENAME COLUMN orderstatus TO new_orderstatus", "Renaming partition columns is not supported"); assertQuery("SELECT new_orderkey, orderstatus FROM test_rename_column", "SELECT orderkey, orderstatus FROM orders"); assertUpdate("DROP TABLE test_rename_column"); } @Test @Override public void testDropColumn() { super.testDropColumn(); // Additional tests for hive partition columns invariants @Language("SQL") String createTable = "" + "CREATE TABLE test_drop_column\n" + "WITH (\n" + " partitioned_by = ARRAY ['orderstatus']\n" + ")\n" + "AS\n" + "SELECT custkey, orderkey, orderstatus FROM orders"; assertUpdate(createTable, "SELECT count(*) FROM orders"); assertQuery("SELECT orderkey, orderstatus FROM test_drop_column", "SELECT orderkey, orderstatus FROM orders"); assertQueryFails("ALTER TABLE test_drop_column DROP COLUMN \"$path\"", ".* 
Cannot drop hidden column"); assertQueryFails("ALTER TABLE test_drop_column DROP COLUMN orderstatus", "Cannot drop partition columns"); assertUpdate("ALTER TABLE test_drop_column DROP COLUMN orderkey"); assertQueryFails("ALTER TABLE test_drop_column DROP COLUMN custkey", "Cannot drop the only non-partition column in a table"); assertQuery("SELECT * FROM test_drop_column", "SELECT custkey, orderstatus FROM orders"); assertUpdate("DROP TABLE test_drop_column"); } @Test public void testAvroTypeValidation() { assertQueryFails("CREATE TABLE test_avro_types (x map(bigint, bigint)) WITH (format = 'AVRO')", "Column 'x' has a non-varchar map key, which is not supported by Avro"); assertQueryFails("CREATE TABLE test_avro_types (x tinyint) WITH (format = 'AVRO')", "Column 'x' is tinyint, which is not supported by Avro. Use integer instead."); assertQueryFails("CREATE TABLE test_avro_types (x smallint) WITH (format = 'AVRO')", "Column 'x' is smallint, which is not supported by Avro. Use integer instead."); assertQueryFails("CREATE TABLE test_avro_types WITH (format = 'AVRO') AS SELECT cast(42 AS smallint) z", "Column 'z' is smallint, which is not supported by Avro. 
Use integer instead."); } @Test public void testOrderByChar() { assertUpdate("CREATE TABLE char_order_by (c_char char(2))"); assertUpdate("INSERT INTO char_order_by (c_char) VALUES" + "(CAST('a' as CHAR(2)))," + "(CAST('a\0' as CHAR(2)))," + "(CAST('a ' as CHAR(2)))", 3); MaterializedResult actual = computeActual(getSession(), "SELECT * FROM char_order_by ORDER BY c_char ASC"); assertUpdate("DROP TABLE char_order_by"); MaterializedResult expected = resultBuilder(getSession(), createCharType(2)) .row("a\0") .row("a ") .row("a ") .build(); assertEquals(actual, expected); } /** * Tests correctness of comparison of char(x) and varchar pushed down to a table scan as a TupleDomain */ @Test public void testPredicatePushDownToTableScan() { // Test not specific to Hive, but needs a connector supporting table creation assertUpdate("CREATE TABLE test_table_with_char (a char(20))"); try { assertUpdate("INSERT INTO test_table_with_char (a) VALUES" + "(cast('aaa' as char(20)))," + "(cast('bbb' as char(20)))," + "(cast('bbc' as char(20)))," + "(cast('bbd' as char(20)))", 4); assertQuery( "SELECT a, a <= 'bbc' FROM test_table_with_char", "VALUES (cast('aaa' as char(20)), true), " + "(cast('bbb' as char(20)), true), " + "(cast('bbc' as char(20)), true), " + "(cast('bbd' as char(20)), false)"); assertQuery( "SELECT a FROM test_table_with_char WHERE a <= 'bbc'", "VALUES cast('aaa' as char(20)), " + "cast('bbb' as char(20)), " + "cast('bbc' as char(20))"); } finally { assertUpdate("DROP TABLE test_table_with_char"); } } @DataProvider public Object[][] timestampPrecisionAndValues() { return new Object[][] { {HiveTimestampPrecision.MILLISECONDS, LocalDateTime.parse("2012-10-31T01:00:08.123")}, {HiveTimestampPrecision.MICROSECONDS, LocalDateTime.parse("2012-10-31T01:00:08.123456")}, {HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("2012-10-31T01:00:08.123000000")}, {HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("2012-10-31T01:00:08.123000001")}, 
{HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("2012-10-31T01:00:08.123456789")}, {HiveTimestampPrecision.MILLISECONDS, LocalDateTime.parse("1965-10-31T01:00:08.123")}, {HiveTimestampPrecision.MICROSECONDS, LocalDateTime.parse("1965-10-31T01:00:08.123456")}, {HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("1965-10-31T01:00:08.123000000")}, {HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("1965-10-31T01:00:08.123000001")}, {HiveTimestampPrecision.NANOSECONDS, LocalDateTime.parse("1965-10-31T01:00:08.123456789")}}; } @Test(dataProvider = "timestampPrecisionAndValues") public void testParquetTimestampPredicatePushdown(HiveTimestampPrecision timestampPrecision, LocalDateTime value) { Session session = withTimestampPrecision(getSession(), timestampPrecision); assertUpdate("DROP TABLE IF EXISTS test_parquet_timestamp_predicate_pushdown"); assertUpdate("CREATE TABLE test_parquet_timestamp_predicate_pushdown (t TIMESTAMP) WITH (format = 'PARQUET')"); assertUpdate(session, format("INSERT INTO test_parquet_timestamp_predicate_pushdown VALUES (%s)", formatTimestamp(value)), 1); assertQuery(session, "SELECT * FROM test_parquet_timestamp_predicate_pushdown", format("VALUES (%s)", formatTimestamp(value))); DistributedQueryRunner queryRunner = (DistributedQueryRunner) getQueryRunner(); ResultWithQueryId<MaterializedResult> queryResult = queryRunner.executeWithQueryId( session, format("SELECT * FROM test_parquet_timestamp_predicate_pushdown WHERE t < %s", formatTimestamp(value))); assertEquals(getQueryInfo(queryRunner, queryResult).getQueryStats().getProcessedInputDataSize().toBytes(), 0); queryResult = queryRunner.executeWithQueryId( session, format("SELECT * FROM test_parquet_timestamp_predicate_pushdown WHERE t > %s", formatTimestamp(value))); assertEquals(getQueryInfo(queryRunner, queryResult).getQueryStats().getProcessedInputDataSize().toBytes(), 0); // TODO: replace this with a simple query stats check once we find a way to wait until all pending 
updates to query stats have been applied // (might be fixed by https://github.com/trinodb/trino/issues/5172) ExponentialSleeper sleeper = new ExponentialSleeper(); assertQueryStats( session, format("SELECT * FROM test_parquet_timestamp_predicate_pushdown WHERE t = %s", formatTimestamp(value)), queryStats -> { sleeper.sleep(); assertThat(queryStats.getProcessedInputDataSize().toBytes()).isGreaterThan(0); }, results -> {}, new Duration(30, SECONDS)); } @Test(dataProvider = "timestampPrecisionAndValues") public void testOrcTimestampPredicatePushdown(HiveTimestampPrecision timestampPrecision, LocalDateTime value) { Session session = withTimestampPrecision(getSession(), timestampPrecision); assertUpdate("DROP TABLE IF EXISTS test_orc_timestamp_predicate_pushdown"); assertUpdate("CREATE TABLE test_orc_timestamp_predicate_pushdown (t TIMESTAMP) WITH (format = 'ORC')"); assertUpdate(session, format("INSERT INTO test_orc_timestamp_predicate_pushdown VALUES (%s)", formatTimestamp(value)), 1); assertQuery(session, "SELECT * FROM test_orc_timestamp_predicate_pushdown", format("VALUES (%s)", formatTimestamp(value))); // to account for the fact that ORC stats are stored at millisecond precision and Presto rounds timestamps, // we filter by timestamps that differ from the actual value by at least 1ms, to observe pruning DistributedQueryRunner queryRunner = getDistributedQueryRunner(); ResultWithQueryId<MaterializedResult> queryResult = queryRunner.executeWithQueryId( session, format("SELECT * FROM test_orc_timestamp_predicate_pushdown WHERE t < %s", formatTimestamp(value.minusNanos(MILLISECONDS.toNanos(1))))); assertEquals(getQueryInfo(queryRunner, queryResult).getQueryStats().getProcessedInputDataSize().toBytes(), 0); queryResult = queryRunner.executeWithQueryId( session, format("SELECT * FROM test_orc_timestamp_predicate_pushdown WHERE t > %s", formatTimestamp(value.plusNanos(MILLISECONDS.toNanos(1))))); assertEquals(getQueryInfo(queryRunner, 
queryResult).getQueryStats().getProcessedInputDataSize().toBytes(), 0); assertQuery(session, "SELECT * FROM test_orc_timestamp_predicate_pushdown WHERE t < " + formatTimestamp(value.plusNanos(1)), format("VALUES (%s)", formatTimestamp(value))); // TODO: replace this with a simple query stats check once we find a way to wait until all pending updates to query stats have been applied // (might be fixed by https://github.com/trinodb/trino/issues/5172) ExponentialSleeper sleeper = new ExponentialSleeper(); assertQueryStats( session, format("SELECT * FROM test_orc_timestamp_predicate_pushdown WHERE t = %s", formatTimestamp(value)), queryStats -> { sleeper.sleep(); assertThat(queryStats.getProcessedInputDataSize().toBytes()).isGreaterThan(0); }, results -> {}, new Duration(30, SECONDS)); } private static String formatTimestamp(LocalDateTime timestamp) { return format("TIMESTAMP '%s'", TIMESTAMP_FORMATTER.format(timestamp)); } @Test public void testParquetShortDecimalPredicatePushdown() { assertUpdate("DROP TABLE IF EXISTS test_parquet_decimal_predicate_pushdown"); assertUpdate("CREATE TABLE test_parquet_decimal_predicate_pushdown (decimal_t DECIMAL(5, 3)) WITH (format = 'PARQUET')"); assertUpdate("INSERT INTO test_parquet_decimal_predicate_pushdown VALUES DECIMAL '12.345'", 1); assertQuery("SELECT * FROM test_parquet_decimal_predicate_pushdown", "VALUES 12.345"); assertQuery("SELECT count(*) FROM test_parquet_decimal_predicate_pushdown WHERE decimal_t = DECIMAL '12.345'", "VALUES 1"); assertNoDataRead("SELECT * FROM test_parquet_decimal_predicate_pushdown WHERE decimal_t < DECIMAL '12.345'"); assertNoDataRead("SELECT * FROM test_parquet_decimal_predicate_pushdown WHERE decimal_t > DECIMAL '12.345'"); assertNoDataRead("SELECT * FROM test_parquet_decimal_predicate_pushdown WHERE decimal_t != DECIMAL '12.345'"); } @Test public void testParquetLongDecimalPredicatePushdown() { assertUpdate("DROP TABLE IF EXISTS test_parquet_long_decimal_predicate_pushdown"); 
assertUpdate("CREATE TABLE test_parquet_long_decimal_predicate_pushdown (decimal_t DECIMAL(20, 3)) WITH (format = 'PARQUET')"); assertUpdate("INSERT INTO test_parquet_long_decimal_predicate_pushdown VALUES DECIMAL '12345678900000000.345'", 1); assertQuery("SELECT * FROM test_parquet_long_decimal_predicate_pushdown", "VALUES 12345678900000000.345"); assertQuery("SELECT count(*) FROM test_parquet_long_decimal_predicate_pushdown WHERE decimal_t = DECIMAL '12345678900000000.345'", "VALUES 1"); assertNoDataRead("SELECT * FROM test_parquet_long_decimal_predicate_pushdown WHERE decimal_t < DECIMAL '12345678900000000.345'"); assertNoDataRead("SELECT * FROM test_parquet_long_decimal_predicate_pushdown WHERE decimal_t > DECIMAL '12345678900000000.345'"); assertNoDataRead("SELECT * FROM test_parquet_long_decimal_predicate_pushdown WHERE decimal_t != DECIMAL '12345678900000000.345'"); } private void assertNoDataRead(@Language("SQL") String sql) { assertQueryStats( getSession(), sql, queryStats -> assertThat(queryStats.getProcessedInputDataSize().toBytes()).isEqualTo(0), results -> assertThat(results.getRowCount()).isEqualTo(0), new Duration(5, SECONDS)); } private QueryInfo getQueryInfo(DistributedQueryRunner queryRunner, ResultWithQueryId<MaterializedResult> queryResult) { return queryRunner.getCoordinator().getQueryManager().getFullQueryInfo(queryResult.getQueryId()); } @Test public void testPartitionPruning() { assertUpdate("CREATE TABLE test_partition_pruning (v bigint, k varchar) WITH (partitioned_by = array['k'])"); assertUpdate("INSERT INTO test_partition_pruning (v, k) VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'e')", 4); try { String query = "SELECT * FROM test_partition_pruning WHERE k = 'a'"; assertQuery(query, "VALUES (1, 'a')"); assertConstraints( query, ImmutableSet.of( new ColumnConstraint( "k", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("a"), EXACTLY), new FormattedMarker(Optional.of("a"), 
EXACTLY))))))); query = "SELECT * FROM test_partition_pruning WHERE k IN ('a', 'b')"; assertQuery(query, "VALUES (1, 'a'), (2, 'b')"); assertConstraints( query, ImmutableSet.of( new ColumnConstraint( "k", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("a"), EXACTLY), new FormattedMarker(Optional.of("a"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("b"), EXACTLY), new FormattedMarker(Optional.of("b"), EXACTLY))))))); query = "SELECT * FROM test_partition_pruning WHERE k >= 'b'"; assertQuery(query, "VALUES (2, 'b'), (3, 'c'), (4, 'e')"); assertConstraints( query, ImmutableSet.of( new ColumnConstraint( "k", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("b"), EXACTLY), new FormattedMarker(Optional.of("b"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("c"), EXACTLY), new FormattedMarker(Optional.of("c"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("e"), EXACTLY), new FormattedMarker(Optional.of("e"), EXACTLY))))))); query = "SELECT * FROM (" + " SELECT * " + " FROM test_partition_pruning " + " WHERE v IN (1, 2, 4) " + ") t " + "WHERE t.k >= 'b'"; assertQuery(query, "VALUES (2, 'b'), (4, 'e')"); assertConstraints( query, ImmutableSet.of( new ColumnConstraint( "k", VARCHAR, new FormattedDomain( false, ImmutableSet.of( new FormattedRange( new FormattedMarker(Optional.of("b"), EXACTLY), new FormattedMarker(Optional.of("b"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("c"), EXACTLY), new FormattedMarker(Optional.of("c"), EXACTLY)), new FormattedRange( new FormattedMarker(Optional.of("e"), EXACTLY), new FormattedMarker(Optional.of("e"), EXACTLY))))))); } finally { assertUpdate("DROP TABLE test_partition_pruning"); } } @Test public void testBucketFilteringByInPredicate() { @Language("SQL") String createTable = "" + "CREATE TABLE test_bucket_filtering " + "(bucket_key_1 BIGINT, 
bucket_key_2 VARCHAR, col3 BOOLEAN) " + "WITH (" + "bucketed_by = ARRAY[ 'bucket_key_1', 'bucket_key_2' ], " + "bucket_count = 11" + ") "; assertUpdate(createTable); assertUpdate( "INSERT INTO test_bucket_filtering (bucket_key_1, bucket_key_2, col3) VALUES " + "(1, 'd', true), " + "(2, 'c', null), " + "(3, 'b', false), " + "(4, null, true), " + "(null, 'a', true)", 5); try { assertQuery( "SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (1, 2) AND bucket_key_2 IN ('b', 'd')", "VALUES (1, 'd', true)"); assertQuery( "SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (1, 2, 5, 6) AND bucket_key_2 IN ('b', 'd', 'x')", "VALUES (1, 'd', true)"); assertQuery( "SELECT * FROM test_bucket_filtering WHERE (bucket_key_1 IN (1, 2) OR bucket_key_1 IS NULL) AND (bucket_key_2 IN ('a', 'd') OR bucket_key_2 IS NULL)", "VALUES (1, 'd', true), (null, 'a', true)"); assertQueryReturnsEmptyResult("SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (5, 6) AND bucket_key_2 IN ('x', 'y')"); assertQuery( "SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (1, 2, 3) AND bucket_key_2 IN ('b', 'c', 'd') AND col3 = true", "VALUES (1, 'd', true)"); assertQuery( "SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (1, 2) AND bucket_key_2 IN ('c', 'd') AND col3 IS NULL", "VALUES (2, 'c', null)"); assertQuery( "SELECT * FROM test_bucket_filtering WHERE bucket_key_1 IN (1, 2) AND bucket_key_2 IN ('b', 'c') OR col3 = false", "VALUES (2, 'c', null), (3, 'b', false)"); } finally { assertUpdate("DROP TABLE test_bucket_filtering"); } } @Test public void schemaMismatchesWithDereferenceProjections() { for (TestingHiveStorageFormat format : getAllTestingHiveStorageFormat()) { schemaMismatchesWithDereferenceProjections(format.getFormat()); } } private void schemaMismatchesWithDereferenceProjections(HiveStorageFormat format) { // Verify reordering of subfields between a partition column and a table column is not supported // eg. 
table column: a row(c varchar, b bigint), partition column: a row(b bigint, c varchar) try { assertUpdate("CREATE TABLE evolve_test (dummy bigint, a row(b bigint, c varchar), d bigint) with (format = '" + format + "', partitioned_by=array['d'])"); assertUpdate("INSERT INTO evolve_test values (1, row(1, 'abc'), 1)", 1); assertUpdate("ALTER TABLE evolve_test DROP COLUMN a"); assertUpdate("ALTER TABLE evolve_test ADD COLUMN a row(c varchar, b bigint)"); assertUpdate("INSERT INTO evolve_test values (2, row('def', 2), 2)", 1); assertQueryFails("SELECT a.b FROM evolve_test where d = 1", ".*There is a mismatch between the table and partition schemas.*"); } finally { assertUpdate("DROP TABLE IF EXISTS evolve_test"); } // Subfield absent in partition schema is reported as null // i.e. "a.c" produces null for rows that were inserted before type of "a" was changed try { assertUpdate("CREATE TABLE evolve_test (dummy bigint, a row(b bigint), d bigint) with (format = '" + format + "', partitioned_by=array['d'])"); assertUpdate("INSERT INTO evolve_test values (1, row(1), 1)", 1); assertUpdate("ALTER TABLE evolve_test DROP COLUMN a"); assertUpdate("ALTER TABLE evolve_test ADD COLUMN a row(b bigint, c varchar)"); assertUpdate("INSERT INTO evolve_test values (2, row(2, 'def'), 2)", 1); assertQuery("SELECT a.c FROM evolve_test", "SELECT 'def' UNION SELECT null"); } finally { assertUpdate("DROP TABLE IF EXISTS evolve_test"); } // Verify field access when the row evolves without changes to field type try { assertUpdate("CREATE TABLE evolve_test (dummy bigint, a row(b bigint, c varchar), d bigint) with (format = '" + format + "', partitioned_by=array['d'])"); assertUpdate("INSERT INTO evolve_test values (1, row(1, 'abc'), 1)", 1); assertUpdate("ALTER TABLE evolve_test DROP COLUMN a"); assertUpdate("ALTER TABLE evolve_test ADD COLUMN a row(b bigint, c varchar, e int)"); assertUpdate("INSERT INTO evolve_test values (2, row(2, 'def', 2), 2)", 1); assertQuery("SELECT a.b FROM evolve_test", 
"VALUES 1, 2"); } finally { assertUpdate("DROP TABLE IF EXISTS evolve_test"); } } @Test public void testSubfieldReordering() { // Validate for formats for which subfield access is name based List<HiveStorageFormat> formats = ImmutableList.of(HiveStorageFormat.ORC, HiveStorageFormat.PARQUET); for (HiveStorageFormat format : formats) { // Subfields reordered in the file are read correctly. e.g. if partition column type is row(b bigint, c varchar) but the file // column type is row(c varchar, b bigint), "a.b" should read the correct field from the file. try { assertUpdate("CREATE TABLE evolve_test (dummy bigint, a row(b bigint, c varchar)) with (format = '" + format + "')"); assertUpdate("INSERT INTO evolve_test values (1, row(1, 'abc'))", 1); assertUpdate("ALTER TABLE evolve_test DROP COLUMN a"); assertUpdate("ALTER TABLE evolve_test ADD COLUMN a row(c varchar, b bigint)"); assertQuery("SELECT a.b FROM evolve_test", "VALUES 1"); } finally { assertUpdate("DROP TABLE IF EXISTS evolve_test"); } // Assert that reordered subfields are read correctly for a two-level nesting. 
This is useful for asserting correct adaptation // of residue projections in HivePageSourceProvider try { assertUpdate("CREATE TABLE evolve_test (dummy bigint, a row(b bigint, c row(x bigint, y varchar))) with (format = '" + format + "')"); assertUpdate("INSERT INTO evolve_test values (1, row(1, row(3, 'abc')))", 1); assertUpdate("ALTER TABLE evolve_test DROP COLUMN a"); assertUpdate("ALTER TABLE evolve_test ADD COLUMN a row(c row(y varchar, x bigint), b bigint)"); // TODO: replace the following assertion with assertQuery once h2QueryRunner starts supporting row types assertQuerySucceeds("SELECT a.c.y, a.c FROM evolve_test"); } finally { assertUpdate("DROP TABLE IF EXISTS evolve_test"); } } } @Test public void testParquetColumnNameMappings() { Session sessionUsingColumnIndex = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "false") .build(); Session sessionUsingColumnName = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "true") .build(); String tableName = "test_parquet_by_column_index"; assertUpdate(sessionUsingColumnIndex, format( "CREATE TABLE %s(" + " a varchar, " + " b varchar) " + "WITH (format='PARQUET')", tableName)); assertUpdate(sessionUsingColumnIndex, "INSERT INTO " + tableName + " VALUES ('a', 'b')", 1); assertQuery( sessionUsingColumnIndex, "SELECT a, b FROM " + tableName, "VALUES ('a', 'b')"); assertQuery( sessionUsingColumnIndex, "SELECT a FROM " + tableName + " WHERE b = 'b'", "VALUES ('a')"); String tableLocation = (String) computeActual("SELECT DISTINCT regexp_replace(\"$path\", '/[^/]*$', '') FROM " + tableName).getOnlyValue(); // Reverse the table so that the Hive column ordering does not match the Parquet column ordering String reversedTableName = "test_parquet_by_column_index_reversed"; assertUpdate(sessionUsingColumnIndex, format( "CREATE TABLE %s(" + " b varchar, " + " a varchar) " + "WITH (format='PARQUET', external_location='%s')", 
reversedTableName, tableLocation)); assertQuery( sessionUsingColumnIndex, "SELECT a, b FROM " + reversedTableName, "VALUES ('b', 'a')"); assertQuery( sessionUsingColumnIndex, "SELECT a FROM " + reversedTableName + " WHERE b = 'a'", "VALUES ('b')"); assertQuery( sessionUsingColumnName, "SELECT a, b FROM " + reversedTableName, "VALUES ('a', 'b')"); assertQuery( sessionUsingColumnName, "SELECT a FROM " + reversedTableName + " WHERE b = 'b'", "VALUES ('a')"); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + reversedTableName); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + tableName); } @Test public void testParquetWithMissingColumns() { Session sessionUsingColumnIndex = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "false") .build(); Session sessionUsingColumnName = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "true") .build(); String singleColumnTableName = "test_parquet_with_missing_columns_one"; assertUpdate(format( "CREATE TABLE %s(" + " a varchar) " + "WITH (format='PARQUET')", singleColumnTableName)); assertUpdate(sessionUsingColumnIndex, "INSERT INTO " + singleColumnTableName + " VALUES ('a')", 1); String tableLocation = (String) computeActual("SELECT DISTINCT regexp_replace(\"$path\", '/[^/]*$', '') FROM " + singleColumnTableName).getOnlyValue(); String multiColumnTableName = "test_parquet_missing_columns_two"; assertUpdate(sessionUsingColumnIndex, format( "CREATE TABLE %s(" + " b varchar, " + " a varchar) " + "WITH (format='PARQUET', external_location='%s')", multiColumnTableName, tableLocation)); assertQuery( sessionUsingColumnName, "SELECT a FROM " + multiColumnTableName + " WHERE b IS NULL", "VALUES ('a')"); assertQuery( sessionUsingColumnName, "SELECT a FROM " + multiColumnTableName + " WHERE a = 'a'", "VALUES ('a')"); assertQuery( sessionUsingColumnIndex, "SELECT b FROM " + multiColumnTableName + " WHERE b = 'a'", "VALUES ('a')"); assertQuery( 
sessionUsingColumnIndex, "SELECT b FROM " + multiColumnTableName + " WHERE a IS NULL", "VALUES ('a')"); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + singleColumnTableName); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + multiColumnTableName); } @Test public void testParquetWithMissingNestedColumns() { Session sessionUsingColumnIndex = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "false") .build(); Session sessionUsingColumnName = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "parquet_use_column_names", "true") .build(); String missingNestedFieldsTableName = "test_parquet_missing_nested_fields"; assertUpdate(format( "CREATE TABLE %s(" + " an_array ARRAY(ROW(a2 int))) " + "WITH (format='PARQUET')", missingNestedFieldsTableName)); assertUpdate(sessionUsingColumnIndex, "INSERT INTO " + missingNestedFieldsTableName + " VALUES (ARRAY[ROW(2)])", 1); String tableLocation = (String) computeActual("SELECT DISTINCT regexp_replace(\"$path\", '/[^/]*$', '') FROM " + missingNestedFieldsTableName).getOnlyValue(); String missingNestedArrayTableName = "test_parquet_missing_nested_array"; assertUpdate(sessionUsingColumnIndex, format( "CREATE TABLE %s(" + " an_array ARRAY(ROW(nested_array ARRAY(varchar), a2 int))) " + "WITH (format='PARQUET', external_location='%s')", missingNestedArrayTableName, tableLocation)); /* * Expected behavior is to read a null collection when a nested array is not defined in the Parquet footer. * This query should neither fail nor return an empty collection. 
*/ assertQuery( sessionUsingColumnIndex, "SELECT an_array[1].nested_array FROM " + missingNestedArrayTableName, "VALUES (null)"); assertQuery( sessionUsingColumnName, "SELECT an_array[1].nested_array FROM " + missingNestedArrayTableName, "VALUES (null)"); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + missingNestedFieldsTableName); assertUpdate(sessionUsingColumnIndex, "DROP TABLE " + missingNestedArrayTableName); } @Test public void testNestedColumnWithDuplicateName() { String tableName = "test_nested_column_with_duplicate_name"; assertUpdate(format( "CREATE TABLE %s(" + " foo varchar, " + " root ROW (foo varchar)) " + "WITH (format='PARQUET')", tableName)); assertUpdate("INSERT INTO " + tableName + " VALUES ('a', ROW('b'))", 1); assertQuery("SELECT root.foo FROM " + tableName + " WHERE foo = 'a'", "VALUES ('b')"); assertQuery("SELECT root.foo FROM " + tableName + " WHERE root.foo = 'b'", "VALUES ('b')"); assertQuery("SELECT root.foo FROM " + tableName + " WHERE foo = 'a' AND root.foo = 'b'", "VALUES ('b')"); assertQuery("SELECT foo FROM " + tableName + " WHERE foo = 'a'", "VALUES ('a')"); assertQuery("SELECT foo FROM " + tableName + " WHERE root.foo = 'b'", "VALUES ('a')"); assertQuery("SELECT foo FROM " + tableName + " WHERE foo = 'a' AND root.foo = 'b'", "VALUES ('a')"); assertTrue(computeActual("SELECT foo FROM " + tableName + " WHERE foo = 'a' AND root.foo = 'a'").getMaterializedRows().isEmpty()); assertTrue(computeActual("SELECT foo FROM " + tableName + " WHERE foo = 'b' AND root.foo = 'b'").getMaterializedRows().isEmpty()); assertUpdate("DROP TABLE " + tableName); } @Test public void testParquetNaNStatistics() { String tableName = "test_parquet_nan_statistics"; assertUpdate("CREATE TABLE " + tableName + " (c_double DOUBLE, c_real REAL, c_string VARCHAR) WITH (format = 'PARQUET')"); assertUpdate("INSERT INTO " + tableName + " VALUES (nan(), cast(nan() as REAL), 'all nan')", 1); assertUpdate("INSERT INTO " + tableName + " VALUES (nan(), null, 'null 
real'), (null, nan(), 'null double')", 2); assertUpdate("INSERT INTO " + tableName + " VALUES (nan(), 4.2, '4.2 real'), (4.2, nan(), '4.2 double')", 2); assertUpdate("INSERT INTO " + tableName + " VALUES (0.1, 0.1, 'both 0.1')", 1); // These assertions are intended to make sure we are handling NaN values in Parquet statistics, // however Parquet file stats created in Presto don't include such values; the test is here mainly to prevent // regressions, should a new writer start recording such stats assertQuery("SELECT c_string FROM " + tableName + " WHERE c_double > 4", "VALUES ('4.2 double')"); assertQuery("SELECT c_string FROM " + tableName + " WHERE c_real > 4", "VALUES ('4.2 real')"); } @Test public void testMismatchedBucketing() { try { assertUpdate( "CREATE TABLE test_mismatch_bucketing16\n" + "WITH (bucket_count = 16, bucketed_by = ARRAY['key16']) AS\n" + "SELECT orderkey key16, comment value16 FROM orders", 15000); assertUpdate( "CREATE TABLE test_mismatch_bucketing32\n" + "WITH (bucket_count = 32, bucketed_by = ARRAY['key32']) AS\n" + "SELECT orderkey key32, comment value32 FROM orders", 15000); assertUpdate( "CREATE TABLE test_mismatch_bucketingN AS\n" + "SELECT orderkey keyN, comment valueN FROM orders", 15000); Session withMismatchOptimization = Session.builder(getSession()) .setSystemProperty(COLOCATED_JOIN, "true") .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false") .setCatalogSessionProperty(catalog, "optimize_mismatched_bucket_count", "true") .build(); Session withoutMismatchOptimization = Session.builder(getSession()) .setSystemProperty(COLOCATED_JOIN, "true") .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false") .setCatalogSessionProperty(catalog, "optimize_mismatched_bucket_count", "false") .build(); @Language("SQL") String writeToTableWithMoreBuckets = "CREATE TABLE test_mismatch_bucketing_out32\n" + "WITH (bucket_count = 32, bucketed_by = ARRAY['key16'])\n" + "AS\n" + "SELECT key16, value16, key32, value32, keyN, valueN\n" + "FROM\n" + " 
test_mismatch_bucketing16\n" + "JOIN\n" + " test_mismatch_bucketing32\n" + "ON key16=key32\n" + "JOIN\n" + " test_mismatch_bucketingN\n" + "ON key16=keyN"; @Language("SQL") String writeToTableWithFewerBuckets = "CREATE TABLE test_mismatch_bucketing_out8\n" + "WITH (bucket_count = 8, bucketed_by = ARRAY['key16'])\n" + "AS\n" + "SELECT key16, value16, key32, value32, keyN, valueN\n" + "FROM\n" + " test_mismatch_bucketing16\n" + "JOIN\n" + " test_mismatch_bucketing32\n" + "ON key16=key32\n" + "JOIN\n" + " test_mismatch_bucketingN\n" + "ON key16=keyN"; assertUpdate(withoutMismatchOptimization, writeToTableWithMoreBuckets, 15000, assertRemoteExchangesCount(3)); assertQuery("SELECT * FROM test_mismatch_bucketing_out32", "SELECT orderkey, comment, orderkey, comment, orderkey, comment FROM orders"); assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketing_out32"); assertUpdate(withMismatchOptimization, writeToTableWithMoreBuckets, 15000, assertRemoteExchangesCount(2)); assertQuery("SELECT * FROM test_mismatch_bucketing_out32", "SELECT orderkey, comment, orderkey, comment, orderkey, comment FROM orders"); assertUpdate(withMismatchOptimization, writeToTableWithFewerBuckets, 15000, assertRemoteExchangesCount(2)); assertQuery("SELECT * FROM test_mismatch_bucketing_out8", "SELECT orderkey, comment, orderkey, comment, orderkey, comment FROM orders"); } finally { assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketing16"); assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketing32"); assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketingN"); assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketing_out32"); assertUpdate("DROP TABLE IF EXISTS test_mismatch_bucketing_out8"); } } @Test public void testBucketedSelect() { try { assertUpdate( "CREATE TABLE test_bucketed_select\n" + "WITH (bucket_count = 13, bucketed_by = ARRAY['key1']) AS\n" + "SELECT orderkey key1, comment value1 FROM orders", 15000); Session planWithTableNodePartitioning = Session.builder(getSession()) 
                    .setSystemProperty(USE_TABLE_SCAN_NODE_PARTITIONING, "true")
                    .build();
            Session planWithoutTableNodePartitioning = Session.builder(getSession())
                    .setSystemProperty(USE_TABLE_SCAN_NODE_PARTITIONING, "false")
                    .build();

            @Language("SQL") String query = "SELECT count(value1) FROM test_bucketed_select GROUP BY key1";
            @Language("SQL") String expectedQuery = "SELECT count(comment) FROM orders GROUP BY orderkey";
            assertQuery(planWithTableNodePartitioning, query, expectedQuery, assertRemoteExchangesCount(1));
            assertQuery(planWithoutTableNodePartitioning, query, expectedQuery, assertRemoteExchangesCount(2));
        }
        finally {
            assertUpdate("DROP TABLE IF EXISTS test_bucketed_select");
        }
    }

    @Test
    public void testGroupedExecution()
    {
        try {
            assertUpdate(
                    "CREATE TABLE test_grouped_join1\n" +
                            "WITH (bucket_count = 13, bucketed_by = ARRAY['key1']) AS\n" +
                            "SELECT orderkey key1, comment value1 FROM orders",
                    15000);
            assertUpdate(
                    "CREATE TABLE test_grouped_join2\n" +
                            "WITH (bucket_count = 13, bucketed_by = ARRAY['key2']) AS\n" +
                            "SELECT orderkey key2, comment value2 FROM orders",
                    15000);
            assertUpdate(
                    "CREATE TABLE test_grouped_join3\n" +
                            "WITH (bucket_count = 13, bucketed_by = ARRAY['key3']) AS\n" +
                            "SELECT orderkey key3, comment value3 FROM orders",
                    15000);
            assertUpdate(
                    "CREATE TABLE test_grouped_join4\n" +
                            "WITH (bucket_count = 13, bucketed_by = ARRAY['key4_bucket']) AS\n" +
                            "SELECT orderkey key4_bucket, orderkey key4_non_bucket, comment value4 FROM orders",
                    15000);
            assertUpdate(
                    "CREATE TABLE test_grouped_joinN AS\n" +
                            "SELECT orderkey keyN, comment valueN FROM orders",
                    15000);
            assertUpdate(
                    "CREATE TABLE test_grouped_joinDual\n" +
                            "WITH (bucket_count = 13, bucketed_by = ARRAY['keyD']) AS\n" +
                            "SELECT orderkey keyD, comment valueD FROM orders CROSS JOIN UNNEST(repeat(NULL, 2))",
                    30000);
            assertUpdate(
                    "CREATE TABLE test_grouped_window\n" +
                            "WITH (bucket_count = 5, bucketed_by = ARRAY['key']) AS\n" +
                            "SELECT custkey key, orderkey value FROM orders WHERE custkey <= 5 ORDER BY orderkey LIMIT 10",
                    10);

            // NOT grouped execution; default
            Session notColocated = Session.builder(getSession())
                    .setSystemProperty(COLOCATED_JOIN, "false")
                    .setSystemProperty(GROUPED_EXECUTION, "false")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Co-located JOIN with all groups at once, fixed schedule
            Session colocatedAllGroupsAtOnce = Session.builder(getSession())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "0")
                    .setSystemProperty(DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION, "false")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Co-located JOIN, 1 group per worker at a time, fixed schedule
            Session colocatedOneGroupAtATime = Session.builder(getSession())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "1")
                    .setSystemProperty(DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION, "false")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Co-located JOIN with all groups at once, dynamic schedule
            Session colocatedAllGroupsAtOnceDynamic = Session.builder(getSession())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "0")
                    .setSystemProperty(DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION, "true")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Co-located JOIN, 1 group per worker at a time, dynamic schedule
            Session colocatedOneGroupAtATimeDynamic = Session.builder(getSession())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "1")
                    .setSystemProperty(DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION, "true")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Broadcast JOIN, 1 group per worker at a time
            Session broadcastOneGroupAtATime = Session.builder(getSession())
                    .setSystemProperty(JOIN_DISTRIBUTION_TYPE, BROADCAST.name())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "1")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();
            // Broadcast JOIN, 1 group per worker at a time, dynamic schedule
            Session broadcastOneGroupAtATimeDynamic = Session.builder(getSession())
                    .setSystemProperty(JOIN_DISTRIBUTION_TYPE, BROADCAST.name())
                    .setSystemProperty(COLOCATED_JOIN, "true")
                    .setSystemProperty(GROUPED_EXECUTION, "true")
                    .setSystemProperty(CONCURRENT_LIFESPANS_PER_NODE, "1")
                    .setSystemProperty(DYNAMIC_SCHEDULE_FOR_GROUPED_EXECUTION, "true")
                    .setSystemProperty(ENABLE_DYNAMIC_FILTERING, "false")
                    .build();

            //
            // HASH JOIN
            // =========

            @Language("SQL") String joinThreeBucketedTable = "SELECT key1, value1, key2, value2, key3, value3\n" +
                    "FROM test_grouped_join1\n" +
                    "JOIN test_grouped_join2\n" +
                    "ON key1 = key2\n" +
                    "JOIN test_grouped_join3\n" +
                    "ON key2 = key3";
            @Language("SQL") String joinThreeMixedTable = "SELECT key1, value1, key2, value2, keyN, valueN\n" +
                    "FROM test_grouped_join1\n" +
                    "JOIN test_grouped_join2\n" +
                    "ON key1 = key2\n" +
                    "JOIN test_grouped_joinN\n" +
                    "ON key2 = keyN";
            @Language("SQL") String expectedJoinQuery = "SELECT orderkey, comment, orderkey, comment, orderkey, comment FROM orders";
            @Language("SQL") String leftJoinBucketedTable = "SELECT key1, value1, key2, value2\n" +
                    "FROM test_grouped_join1\n" +
                    "LEFT JOIN (SELECT * FROM test_grouped_join2 WHERE key2 % 2 = 0)\n" +
                    "ON key1 = key2";
            @Language("SQL") String rightJoinBucketedTable = "SELECT key1, value1, key2, value2\n" +
                    "FROM (SELECT * FROM test_grouped_join2 WHERE key2 % 2 = 0)\n" +
                    "RIGHT JOIN test_grouped_join1\n" +
                    "ON key1 = key2";
            @Language("SQL") String expectedOuterJoinQuery = "SELECT orderkey, comment, CASE mod(orderkey, 2) WHEN 0 THEN orderkey END, CASE mod(orderkey, 2) WHEN 0 THEN comment END FROM orders";

            assertQuery(notColocated,
                    joinThreeBucketedTable,
                    expectedJoinQuery);
            assertQuery(notColocated, leftJoinBucketedTable, expectedOuterJoinQuery);
            assertQuery(notColocated, rightJoinBucketedTable, expectedOuterJoinQuery);

            assertQuery(colocatedAllGroupsAtOnce, joinThreeBucketedTable, expectedJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnce, joinThreeMixedTable, expectedJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, joinThreeBucketedTable, expectedJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, joinThreeMixedTable, expectedJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedAllGroupsAtOnceDynamic, joinThreeBucketedTable, expectedJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnceDynamic, joinThreeMixedTable, expectedJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic, joinThreeBucketedTable, expectedJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, joinThreeMixedTable, expectedJoinQuery, assertRemoteExchangesCount(2));

            assertQuery(colocatedAllGroupsAtOnce, leftJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnce, rightJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, leftJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, rightJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnceDynamic, leftJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnceDynamic, rightJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, leftJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, rightJoinBucketedTable, expectedOuterJoinQuery, assertRemoteExchangesCount(1));

            //
            // CROSS JOIN and HASH JOIN mixed
            // ==============================

            @Language("SQL") String crossJoin = "SELECT key1, value1, key2, value2, key3, value3\n" +
                    "FROM test_grouped_join1\n" +
                    "JOIN test_grouped_join2\n" +
                    "ON key1 = key2\n" +
                    "CROSS JOIN (SELECT * FROM test_grouped_join3 WHERE key3 <= 3)";
            @Language("SQL") String expectedCrossJoinQuery = "SELECT key1, value1, key1, value1, key3, value3\n" +
                    "FROM\n" +
                    " (SELECT orderkey key1, comment value1 FROM orders)\n" +
                    "CROSS JOIN\n" +
                    " (SELECT orderkey key3, comment value3 FROM orders WHERE orderkey <= 3)";
            assertQuery(notColocated, crossJoin, expectedCrossJoinQuery);
            assertQuery(colocatedAllGroupsAtOnce, crossJoin, expectedCrossJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, crossJoin, expectedCrossJoinQuery, assertRemoteExchangesCount(2));

            //
            // Bucketed and unbucketed HASH JOIN mixed
            // =======================================

            @Language("SQL") String bucketedAndUnbucketedJoin = "SELECT key1, value1, keyN, valueN, key2, value2, key3, value3\n" +
                    "FROM\n" +
                    " test_grouped_join1\n" +
                    "JOIN (\n" +
                    " SELECT *\n" +
                    " FROM test_grouped_joinN\n" +
                    " JOIN test_grouped_join2\n" +
                    " ON keyN = key2\n" +
                    ")\n" +
                    "ON key1 = keyN\n" +
                    "JOIN test_grouped_join3\n" +
                    "ON key1 = key3";
            @Language("SQL") String expectedBucketedAndUnbucketedJoinQuery = "SELECT orderkey, comment, orderkey, comment, orderkey, comment, orderkey, comment FROM orders";
            assertQuery(notColocated, bucketedAndUnbucketedJoin, expectedBucketedAndUnbucketedJoinQuery);
            assertQuery(colocatedAllGroupsAtOnce, bucketedAndUnbucketedJoin, expectedBucketedAndUnbucketedJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, bucketedAndUnbucketedJoin, expectedBucketedAndUnbucketedJoinQuery, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic,
                    bucketedAndUnbucketedJoin,
                    expectedBucketedAndUnbucketedJoinQuery,
                    assertRemoteExchangesCount(2));

            //
            // UNION ALL / GROUP BY
            // ====================

            @Language("SQL") String groupBySingleBucketed = "SELECT\n" +
                    " keyD,\n" +
                    " count(valueD)\n" +
                    "FROM\n" +
                    " test_grouped_joinDual\n" +
                    "GROUP BY keyD";
            @Language("SQL") String expectedSingleGroupByQuery = "SELECT orderkey, 2 FROM orders";
            @Language("SQL") String groupByOfUnionBucketed = "SELECT\n" +
                    " key\n" +
                    ", arbitrary(value1)\n" +
                    ", arbitrary(value2)\n" +
                    ", arbitrary(value3)\n" +
                    "FROM (\n" +
                    " SELECT key1 key, value1, NULL value2, NULL value3\n" +
                    " FROM test_grouped_join1\n" +
                    "UNION ALL\n" +
                    " SELECT key2 key, NULL value1, value2, NULL value3\n" +
                    " FROM test_grouped_join2\n" +
                    " WHERE key2 % 2 = 0\n" +
                    "UNION ALL\n" +
                    " SELECT key3 key, NULL value1, NULL value2, value3\n" +
                    " FROM test_grouped_join3\n" +
                    " WHERE key3 % 3 = 0\n" +
                    ")\n" +
                    "GROUP BY key";
            @Language("SQL") String groupByOfUnionMixed = "SELECT\n" +
                    " key\n" +
                    ", arbitrary(value1)\n" +
                    ", arbitrary(value2)\n" +
                    ", arbitrary(valueN)\n" +
                    "FROM (\n" +
                    " SELECT key1 key, value1, NULL value2, NULL valueN\n" +
                    " FROM test_grouped_join1\n" +
                    "UNION ALL\n" +
                    " SELECT key2 key, NULL value1, value2, NULL valueN\n" +
                    " FROM test_grouped_join2\n" +
                    " WHERE key2 % 2 = 0\n" +
                    "UNION ALL\n" +
                    " SELECT keyN key, NULL value1, NULL value2, valueN\n" +
                    " FROM test_grouped_joinN\n" +
                    " WHERE keyN % 3 = 0\n" +
                    ")\n" +
                    "GROUP BY key";
            @Language("SQL") String expectedGroupByOfUnion = "SELECT orderkey, comment, CASE mod(orderkey, 2) WHEN 0 THEN comment END, CASE mod(orderkey, 3) WHEN 0 THEN comment END FROM orders";
            // In this case:
            // * left side can take advantage of bucketed execution
            // * right side does not have the necessary organization to allow its parent to take advantage of bucketed execution
            // In this scenario, we give up bucketed execution altogether. This can potentially be improved.
            //
            //       AGG(key)
            //           |
            //       UNION ALL
            //        /      \
            //  AGG(key)   Scan (not bucketed)
            //      |
            //  Scan (bucketed on key)
            @Language("SQL") String groupByOfUnionOfGroupByMixed = "SELECT\n" +
                    " key, sum(cnt) cnt\n" +
                    "FROM (\n" +
                    " SELECT keyD key, count(valueD) cnt\n" +
                    " FROM test_grouped_joinDual\n" +
                    " GROUP BY keyD\n" +
                    "UNION ALL\n" +
                    " SELECT keyN key, 1 cnt\n" +
                    " FROM test_grouped_joinN\n" +
                    ")\n" +
                    "group by key";
            @Language("SQL") String expectedGroupByOfUnionOfGroupBy = "SELECT orderkey, 3 FROM orders";

            // Eligible GROUP BYs run in the same fragment regardless of colocated_join flag
            assertQuery(colocatedAllGroupsAtOnce, groupBySingleBucketed, expectedSingleGroupByQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, groupBySingleBucketed, expectedSingleGroupByQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, groupBySingleBucketed, expectedSingleGroupByQuery, assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnce, groupByOfUnionBucketed, expectedGroupByOfUnion, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, groupByOfUnionBucketed, expectedGroupByOfUnion, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, groupByOfUnionBucketed, expectedGroupByOfUnion, assertRemoteExchangesCount(1));

            // cannot be executed in a grouped manner but should still produce correct result
            assertQuery(colocatedOneGroupAtATime, groupByOfUnionMixed, expectedGroupByOfUnion, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, groupByOfUnionOfGroupByMixed, expectedGroupByOfUnionOfGroupBy, assertRemoteExchangesCount(2));

            //
            // GROUP BY and JOIN mixed
            // ========================

            @Language("SQL") String joinGroupedWithGrouped = "SELECT key1, count1, count2\n" +
                    "FROM (\n" +
                    " SELECT keyD key1, count(valueD) count1\n" +
                    " FROM test_grouped_joinDual\n" +
                    " GROUP BY keyD\n" +
                    ") JOIN (\n" +
                    " SELECT keyD key2, count(valueD) count2\n" +
                    " FROM test_grouped_joinDual\n" +
                    " GROUP BY keyD\n" +
                    ")\n" +
                    "ON key1 = key2";
            @Language("SQL") String expectedJoinGroupedWithGrouped = "SELECT orderkey, 2, 2 FROM orders";
            @Language("SQL") String joinGroupedWithUngrouped = "SELECT keyD, countD, valueN\n" +
                    "FROM (\n" +
                    " SELECT keyD, count(valueD) countD\n" +
                    " FROM test_grouped_joinDual\n" +
                    " GROUP BY keyD\n" +
                    ") JOIN (\n" +
                    " SELECT keyN, valueN\n" +
                    " FROM test_grouped_joinN\n" +
                    ")\n" +
                    "ON keyD = keyN";
            @Language("SQL") String expectedJoinGroupedWithUngrouped = "SELECT orderkey, 2, comment FROM orders";
            @Language("SQL") String joinUngroupedWithGrouped = "SELECT keyN, valueN, countD\n" +
                    "FROM (\n" +
                    " SELECT keyN, valueN\n" +
                    " FROM test_grouped_joinN\n" +
                    ") JOIN (\n" +
                    " SELECT keyD, count(valueD) countD\n" +
                    " FROM test_grouped_joinDual\n" +
                    " GROUP BY keyD\n" +
                    ")\n" +
                    "ON keyN = keyD";
            @Language("SQL") String expectedJoinUngroupedWithGrouped = "SELECT orderkey, comment, 2 FROM orders";
            @Language("SQL") String groupOnJoinResult = "SELECT keyD, count(valueD), count(valueN)\n" +
                    "FROM\n" +
                    " test_grouped_joinDual\n" +
                    "JOIN\n" +
                    " test_grouped_joinN\n" +
                    "ON keyD=keyN\n" +
                    "GROUP BY keyD";
            @Language("SQL") String expectedGroupOnJoinResult = "SELECT orderkey, 2, 2 FROM orders";
            @Language("SQL") String groupOnUngroupedJoinResult = "SELECT key4_bucket, count(value4), count(valueN)\n" +
                    "FROM\n" +
                    " test_grouped_join4\n" +
                    "JOIN\n" +
                    " test_grouped_joinN\n" +
                    "ON key4_non_bucket=keyN\n" +
                    "GROUP BY key4_bucket";
            @Language("SQL") String expectedGroupOnUngroupedJoinResult = "SELECT orderkey, count(*), count(*) FROM orders group by orderkey";

            // Eligible GROUP BYs run in the same fragment regardless of colocated_join flag
            assertQuery(colocatedAllGroupsAtOnce, joinGroupedWithGrouped, expectedJoinGroupedWithGrouped, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, joinGroupedWithGrouped, expectedJoinGroupedWithGrouped, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, joinGroupedWithGrouped,
                    expectedJoinGroupedWithGrouped,
                    assertRemoteExchangesCount(1));
            assertQuery(colocatedAllGroupsAtOnce, joinGroupedWithUngrouped, expectedJoinGroupedWithUngrouped, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, joinGroupedWithUngrouped, expectedJoinGroupedWithUngrouped, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic, joinGroupedWithUngrouped, expectedJoinGroupedWithUngrouped, assertRemoteExchangesCount(2));
            assertQuery(colocatedAllGroupsAtOnce, groupOnJoinResult, expectedGroupOnJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, groupOnJoinResult, expectedGroupOnJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic, groupOnJoinResult, expectedGroupOnJoinResult, assertRemoteExchangesCount(2));
            assertQuery(broadcastOneGroupAtATime, groupOnJoinResult, expectedGroupOnJoinResult, assertRemoteExchangesCount(2));
            assertQuery(broadcastOneGroupAtATime, groupOnUngroupedJoinResult, expectedGroupOnUngroupedJoinResult, assertRemoteExchangesCount(2));
            assertQuery(broadcastOneGroupAtATimeDynamic, groupOnUngroupedJoinResult, expectedGroupOnUngroupedJoinResult, assertRemoteExchangesCount(2));

            // cannot be executed in a grouped manner but should still produce correct result
            assertQuery(colocatedOneGroupAtATime, joinUngroupedWithGrouped, expectedJoinUngroupedWithGrouped, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, groupOnUngroupedJoinResult, expectedGroupOnUngroupedJoinResult, assertRemoteExchangesCount(4));

            //
            // Outer JOIN (that involves LookupOuterOperator)
            // ==============================================

            // Chain on the probe side to test duplicating OperatorFactory
            @Language("SQL") String chainedOuterJoin = "SELECT key1, value1, key2, value2, key3, value3\n" +
                    "FROM\n" +
                    " (SELECT * FROM test_grouped_join1 WHERE mod(key1, 2) = 0)\n" +
                    "RIGHT JOIN\n" +
                    " (SELECT * FROM test_grouped_join2 WHERE mod(key2, 3) = 0)\n" +
                    "ON key1 = key2\n" +
"FULL JOIN\n" + " (SELECT * FROM test_grouped_join3 WHERE mod(key3, 5) = 0)\n" + "ON key2 = key3"; // Probe is grouped execution, but build is not @Language("SQL") String sharedBuildOuterJoin = "SELECT key1, value1, keyN, valueN\n" + "FROM\n" + " (SELECT key1, arbitrary(value1) value1 FROM test_grouped_join1 WHERE mod(key1, 2) = 0 group by key1)\n" + "RIGHT JOIN\n" + " (SELECT * FROM test_grouped_joinN WHERE mod(keyN, 3) = 0)\n" + "ON key1 = keyN"; // The preceding test case, which then feeds into another join @Language("SQL") String chainedSharedBuildOuterJoin = "SELECT key1, value1, keyN, valueN, key3, value3\n" + "FROM\n" + " (SELECT key1, arbitrary(value1) value1 FROM test_grouped_join1 WHERE mod(key1, 2) = 0 group by key1)\n" + "RIGHT JOIN\n" + " (SELECT * FROM test_grouped_joinN WHERE mod(keyN, 3) = 0)\n" + "ON key1 = keyN\n" + "FULL JOIN\n" + " (SELECT * FROM test_grouped_join3 WHERE mod(key3, 5) = 0)\n" + "ON keyN = key3"; @Language("SQL") String expectedChainedOuterJoinResult = "SELECT\n" + " CASE WHEN mod(orderkey, 2 * 3) = 0 THEN orderkey END,\n" + " CASE WHEN mod(orderkey, 2 * 3) = 0 THEN comment END,\n" + " CASE WHEN mod(orderkey, 3) = 0 THEN orderkey END,\n" + " CASE WHEN mod(orderkey, 3) = 0 THEN comment END,\n" + " CASE WHEN mod(orderkey, 5) = 0 THEN orderkey END,\n" + " CASE WHEN mod(orderkey, 5) = 0 THEN comment END\n" + "FROM ORDERS\n" + "WHERE mod(orderkey, 3) = 0 OR mod(orderkey, 5) = 0"; @Language("SQL") String expectedSharedBuildOuterJoinResult = "SELECT\n" + " CASE WHEN mod(orderkey, 2) = 0 THEN orderkey END,\n" + " CASE WHEN mod(orderkey, 2) = 0 THEN comment END,\n" + " orderkey,\n" + " comment\n" + "FROM ORDERS\n" + "WHERE mod(orderkey, 3) = 0"; assertQuery(notColocated, chainedOuterJoin, expectedChainedOuterJoinResult); assertQuery(colocatedAllGroupsAtOnce, chainedOuterJoin, expectedChainedOuterJoinResult, assertRemoteExchangesCount(1)); assertQuery(colocatedOneGroupAtATime, chainedOuterJoin, expectedChainedOuterJoinResult, 
                    assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATimeDynamic, chainedOuterJoin, expectedChainedOuterJoinResult, assertRemoteExchangesCount(1));
            assertQuery(notColocated, sharedBuildOuterJoin, expectedSharedBuildOuterJoinResult);
            assertQuery(colocatedAllGroupsAtOnce, sharedBuildOuterJoin, expectedSharedBuildOuterJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, sharedBuildOuterJoin, expectedSharedBuildOuterJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic, sharedBuildOuterJoin, expectedSharedBuildOuterJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATime, chainedSharedBuildOuterJoin, expectedChainedOuterJoinResult, assertRemoteExchangesCount(2));
            assertQuery(colocatedOneGroupAtATimeDynamic, chainedSharedBuildOuterJoin, expectedChainedOuterJoinResult, assertRemoteExchangesCount(2));

            //
            // Window function
            // ===============

            assertQuery(
                    colocatedOneGroupAtATime,
                    "SELECT key, count(*) OVER (PARTITION BY key ORDER BY value) FROM test_grouped_window",
                    "VALUES\n" +
                            "(1, 1),\n" +
                            "(2, 1),\n" +
                            "(2, 2),\n" +
                            "(4, 1),\n" +
                            "(4, 2),\n" +
                            "(4, 3),\n" +
                            "(4, 4),\n" +
                            "(4, 5),\n" +
                            "(5, 1),\n" +
                            "(5, 2)",
                    assertRemoteExchangesCount(1));
            assertQuery(
                    colocatedOneGroupAtATime,
                    "SELECT key, row_number() OVER (PARTITION BY key ORDER BY value) FROM test_grouped_window",
                    "VALUES\n" +
                            "(1, 1),\n" +
                            "(2, 1),\n" +
                            "(2, 2),\n" +
                            "(4, 1),\n" +
                            "(4, 2),\n" +
                            "(4, 3),\n" +
                            "(4, 4),\n" +
                            "(4, 5),\n" +
                            "(5, 1),\n" +
                            "(5, 2)",
                    assertRemoteExchangesCount(1));
            assertQuery(
                    colocatedOneGroupAtATime,
                    "SELECT key, n FROM (SELECT key, row_number() OVER (PARTITION BY key ORDER BY value) AS n FROM test_grouped_window) WHERE n <= 2",
                    "VALUES\n" +
                            "(1, 1),\n" +
                            "(2, 1),\n" +
                            "(2, 2),\n" +
                            "(4, 1),\n" +
                            "(4, 2),\n" +
                            "(5, 1),\n" +
                            "(5, 2)",
                    assertRemoteExchangesCount(1));

            //
            // Filter out all or majority of splits
            // ====================================

            @Language("SQL") String noSplits = "SELECT key1, arbitrary(value1)\n" +
                    "FROM test_grouped_join1\n" +
                    "WHERE \"$bucket\" < 0\n" +
                    "GROUP BY key1";
            @Language("SQL") String joinMismatchedBuckets = "SELECT key1, value1, key2, value2\n" +
                    "FROM (\n" +
                    " SELECT *\n" +
                    " FROM test_grouped_join1\n" +
                    " WHERE \"$bucket\"=1\n" +
                    ")\n" +
                    "FULL OUTER JOIN (\n" +
                    " SELECT *\n" +
                    " FROM test_grouped_join2\n" +
                    " WHERE \"$bucket\"=11\n" +
                    ")\n" +
                    "ON key1=key2";
            @Language("SQL") String expectedNoSplits = "SELECT 1, 'a' WHERE FALSE";
            @Language("SQL") String expectedJoinMismatchedBuckets = "SELECT\n" +
                    " CASE WHEN mod(orderkey, 13) = 1 THEN orderkey END,\n" +
                    " CASE WHEN mod(orderkey, 13) = 1 THEN comment END,\n" +
                    " CASE WHEN mod(orderkey, 13) = 11 THEN orderkey END,\n" +
                    " CASE WHEN mod(orderkey, 13) = 11 THEN comment END\n" +
                    "FROM ORDERS\n" +
                    "WHERE mod(orderkey, 13) IN (1, 11)";

            assertQuery(notColocated, noSplits, expectedNoSplits);
            assertQuery(colocatedAllGroupsAtOnce, noSplits, expectedNoSplits, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, noSplits, expectedNoSplits, assertRemoteExchangesCount(1));
            assertQuery(notColocated, joinMismatchedBuckets, expectedJoinMismatchedBuckets);
            assertQuery(colocatedAllGroupsAtOnce, joinMismatchedBuckets, expectedJoinMismatchedBuckets, assertRemoteExchangesCount(1));
            assertQuery(colocatedOneGroupAtATime, joinMismatchedBuckets, expectedJoinMismatchedBuckets, assertRemoteExchangesCount(1));
        }
        finally {
            assertUpdate("DROP TABLE IF EXISTS test_grouped_join1");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_join2");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_join3");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_join4");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_joinN");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_joinDual");
            assertUpdate("DROP TABLE IF EXISTS test_grouped_window");
        }
    }

    private Consumer<Plan> assertRemoteExchangesCount(int expectedRemoteExchangesCount)
    {
        return assertRemoteExchangesCount(getSession(),
                expectedRemoteExchangesCount);
    }

    private Consumer<Plan> assertRemoteExchangesCount(Session session, int expectedRemoteExchangesCount)
    {
        return plan -> {
            int actualRemoteExchangesCount = searchFrom(plan.getRoot())
                    .where(node -> node instanceof ExchangeNode && ((ExchangeNode) node).getScope() == ExchangeNode.Scope.REMOTE)
                    .findAll()
                    .size();
            if (actualRemoteExchangesCount != expectedRemoteExchangesCount) {
                Metadata metadata = getDistributedQueryRunner().getCoordinator().getMetadata();
                String formattedPlan = textLogicalPlan(plan.getRoot(), plan.getTypes(), metadata, StatsAndCosts.empty(), session, 0, false);
                throw new AssertionError(format(
                        "Expected [\n%s\n] remote exchanges but found [\n%s\n] remote exchanges. Actual plan is [\n\n%s\n]",
                        expectedRemoteExchangesCount,
                        actualRemoteExchangesCount,
                        formattedPlan));
            }
        };
    }

    private Consumer<Plan> assertLocalRepartitionedExchangesCount(int expectedLocalExchangesCount)
    {
        return plan -> {
            int actualLocalExchangesCount = searchFrom(plan.getRoot())
                    .where(node -> {
                        if (!(node instanceof ExchangeNode)) {
                            return false;
                        }
                        ExchangeNode exchangeNode = (ExchangeNode) node;
                        return exchangeNode.getScope() == ExchangeNode.Scope.LOCAL && exchangeNode.getType() == ExchangeNode.Type.REPARTITION;
                    })
                    .findAll()
                    .size();
            if (actualLocalExchangesCount != expectedLocalExchangesCount) {
                Session session = getSession();
                Metadata metadata = getDistributedQueryRunner().getCoordinator().getMetadata();
                String formattedPlan = textLogicalPlan(plan.getRoot(), plan.getTypes(), metadata, StatsAndCosts.empty(), session, 0, false);
                throw new AssertionError(format(
                        "Expected [\n%s\n] local repartitioned exchanges but found [\n%s\n] local repartitioned exchanges. Actual plan is [\n\n%s\n]",
                        expectedLocalExchangesCount,
                        actualLocalExchangesCount,
                        formattedPlan));
            }
        };
    }

    @Test
    public void testRcTextCharDecoding()
    {
        assertUpdate("CREATE TABLE test_table_with_char_rc WITH (format = 'RCTEXT') AS SELECT CAST('khaki' AS CHAR(7)) char_column", 1);
        try {
            assertQuery(
                    "SELECT * FROM test_table_with_char_rc WHERE char_column = 'khaki '",
                    "VALUES (CAST('khaki' AS CHAR(7)))");
        }
        finally {
            assertUpdate("DROP TABLE test_table_with_char_rc");
        }
    }

    @Test
    public void testInvalidPartitionValue()
    {
        assertUpdate("CREATE TABLE invalid_partition_value (a int, b varchar) WITH (partitioned_by = ARRAY['b'])");
        assertQueryFails(
                "INSERT INTO invalid_partition_value VALUES (4, 'test' || chr(13))",
                "\\QHive partition keys can only contain printable ASCII characters (0x20 - 0x7E). Invalid value: 74 65 73 74 0D\\E");
        assertUpdate("DROP TABLE invalid_partition_value");

        assertQueryFails(
                "CREATE TABLE invalid_partition_value (a, b) WITH (partitioned_by = ARRAY['b']) AS SELECT 4, chr(9731)",
                "\\QHive partition keys can only contain printable ASCII characters (0x20 - 0x7E). Invalid value: E2 98 83\\E");
    }

    @Test
    public void testShowColumnMetadata()
    {
        String tableName = "test_show_column_table";

        @Language("SQL") String createTable = "CREATE TABLE " + tableName + " (a bigint, b varchar, c double)";

        Session testSession = testSessionBuilder()
                .setIdentity(ofUser("test_access_owner"))
                .setCatalog(getSession().getCatalog())
                .setSchema(getSession().getSchema())
                .build();

        assertUpdate(createTable);

        // verify showing columns over a table requires SELECT privileges for the table
        assertAccessAllowed("SHOW COLUMNS FROM " + tableName);
        assertAccessDenied(testSession,
                "SHOW COLUMNS FROM " + tableName,
                "Cannot show columns of table .*."
                        + tableName + ".*",
                privilege(tableName, SHOW_COLUMNS));

        @Language("SQL") String getColumnsSql = "" +
                "SELECT lower(column_name) " +
                "FROM information_schema.columns " +
                "WHERE table_name = '" + tableName + "'";
        assertEquals(computeActual(getColumnsSql).getOnlyColumnAsSet(), ImmutableSet.of("a", "b", "c"));

        // verify with no SELECT privileges on table, querying information_schema will return empty columns
        executeExclusively(() -> {
            try {
                getQueryRunner().getAccessControl().deny(privilege(tableName, SELECT_COLUMN));
                assertQueryReturnsEmptyResult(testSession, getColumnsSql);
            }
            finally {
                getQueryRunner().getAccessControl().reset();
            }
        });

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testRoleAuthorizationDescriptors()
    {
        Session user = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setIdentity(Identity.forUser("user").withPrincipal(getSession().getIdentity().getPrincipal()).build())
                .build();

        assertUpdate("CREATE ROLE test_r_a_d1 IN hive");
        assertUpdate("CREATE ROLE test_r_a_d2 IN hive");
        assertUpdate("CREATE ROLE test_r_a_d3 IN hive");

        // nothing showing because no roles have been granted
        assertQueryReturnsEmptyResult("SELECT * FROM information_schema.role_authorization_descriptors");

        // role_authorization_descriptors is not accessible for a non-admin user, even when it's empty
        assertQueryFails(user, "SELECT * FROM information_schema.role_authorization_descriptors",
                "Access Denied: Cannot select from table information_schema.role_authorization_descriptors");

        assertUpdate("GRANT test_r_a_d1 TO USER user IN hive");
        // user with same name as a role
        assertUpdate("GRANT test_r_a_d2 TO USER test_r_a_d1 IN hive");
        assertUpdate("GRANT test_r_a_d2 TO USER user1 WITH ADMIN OPTION IN hive");
        assertUpdate("GRANT test_r_a_d2 TO USER user2 IN hive");
        assertUpdate("GRANT test_r_a_d2 TO ROLE test_r_a_d1 IN hive");

        // role_authorization_descriptors is not accessible for a non-admin user
        assertQueryFails(user, "SELECT * FROM information_schema.role_authorization_descriptors",
                "Access Denied: Cannot select from table information_schema.role_authorization_descriptors");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors",
                "VALUES " +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')," +
                        "('test_r_a_d2', null, null, 'user2', 'USER', 'NO')," +
                        "('test_r_a_d2', null, null, 'user1', 'USER', 'YES')," +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')," +
                        "('test_r_a_d1', null, null, 'user', 'USER', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors LIMIT 1000000000",
                "VALUES " +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')," +
                        "('test_r_a_d2', null, null, 'user2', 'USER', 'NO')," +
                        "('test_r_a_d2', null, null, 'user1', 'USER', 'YES')," +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')," +
                        "('test_r_a_d1', null, null, 'user', 'USER', 'NO')");

        assertQuery(
                "SELECT COUNT(*) FROM (SELECT * FROM information_schema.role_authorization_descriptors LIMIT 2)",
                "VALUES (2)");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE role_name = 'test_r_a_d2'",
                "VALUES " +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')," +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')," +
                        "('test_r_a_d2', null, null, 'user1', 'USER', 'YES')," +
                        "('test_r_a_d2', null, null, 'user2', 'USER', 'NO')");

        assertQuery(
                "SELECT COUNT(*) FROM (SELECT * FROM information_schema.role_authorization_descriptors WHERE role_name = 'test_r_a_d2' LIMIT 1)",
                "VALUES 1");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee = 'user'",
                "VALUES ('test_r_a_d1', null, null, 'user', 'USER', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee like 'user%'",
                "VALUES " +
                        "('test_r_a_d1', null, null, 'user', 'USER', 'NO')," +
                        "('test_r_a_d2', null, null, 'user2', 'USER', 'NO')," +
                        "('test_r_a_d2', null, null, 'user1', 'USER', 'YES')");

        assertQuery(
                "SELECT COUNT(*) FROM (SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee like 'user%' LIMIT 2)",
                "VALUES 2");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee = 'test_r_a_d1'",
                "VALUES " +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')," +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee = 'test_r_a_d1' LIMIT 1",
                "VALUES " +
                        "('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee = 'test_r_a_d1' AND grantee_type = 'USER'",
                "VALUES ('test_r_a_d2', null, null, 'test_r_a_d1', 'USER', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee = 'test_r_a_d1' AND grantee_type = 'ROLE'",
                "VALUES ('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')");

        assertQuery(
                "SELECT * FROM information_schema.role_authorization_descriptors WHERE grantee_type = 'ROLE'",
                "VALUES ('test_r_a_d2', null, null, 'test_r_a_d1', 'ROLE', 'NO')");

        assertUpdate("DROP ROLE test_r_a_d1 IN hive");
        assertUpdate("DROP ROLE test_r_a_d2 IN hive");
        assertUpdate("DROP ROLE test_r_a_d3 IN hive");
    }

    @Test
    public void testShowViews()
    {
        String viewName = "test_show_views";

        Session testSession = testSessionBuilder()
                .setIdentity(ofUser("test_view_access_owner"))
                .setCatalog(getSession().getCatalog())
                .setSchema(getSession().getSchema())
                .build();

        assertUpdate("CREATE VIEW " + viewName + " AS SELECT abs(1) as whatever");

        String showViews = format("SELECT * FROM information_schema.views WHERE table_name = '%s'", viewName);
        assertQuery(
                format("SELECT table_name FROM information_schema.views WHERE table_name = '%s'", viewName),
                format("VALUES '%s'", viewName));

        executeExclusively(() -> {
            try {
                getQueryRunner().getAccessControl().denyTables(table ->
                        false);
                assertQueryReturnsEmptyResult(testSession, showViews);
            }
            finally {
                getQueryRunner().getAccessControl().reset();
            }
        });

        assertUpdate("DROP VIEW " + viewName);
    }

    @Test
    public void testShowTablePrivileges()
    {
        try {
            assertUpdate("CREATE SCHEMA bar");
            assertUpdate("CREATE TABLE bar.one(t integer)");
            assertUpdate("CREATE TABLE bar.two(t integer)");
            assertUpdate("CREATE VIEW bar.three AS SELECT t FROM bar.one");
            assertUpdate("CREATE SCHEMA foo");
            // `foo.two` does not exist. Make sure this doesn't incorrectly show up in listing.

            computeActual("SELECT * FROM information_schema.table_privileges"); // must not fail
            assertQuery(
                    "SELECT * FROM information_schema.table_privileges WHERE table_schema = 'bar'",
                    "VALUES " +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'one', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'one', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'one', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'one', 'UPDATE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'UPDATE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'UPDATE', 'YES', null)");
            assertQuery(
                    "SELECT * FROM information_schema.table_privileges WHERE table_schema = 'bar' AND table_name = 'two'",
                    "VALUES " +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'UPDATE', 'YES', null)");
            assertQuery(
                    "SELECT * FROM information_schema.table_privileges WHERE table_name = 'two'",
                    "VALUES " +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'two', 'UPDATE', 'YES', null)");
            assertQuery(
                    "SELECT * FROM information_schema.table_privileges WHERE table_name = 'three'",
                    "VALUES " +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'SELECT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'DELETE', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'INSERT', 'YES', null)," +
                            "('admin', 'USER', 'hive', 'USER', 'hive', 'bar', 'three', 'UPDATE', 'YES', null)");
        }
        finally {
            computeActual("DROP SCHEMA IF EXISTS foo");
            computeActual("DROP VIEW IF EXISTS bar.three");
            computeActual("DROP TABLE IF EXISTS bar.two");
            computeActual("DROP TABLE IF EXISTS bar.one");
            computeActual("DROP SCHEMA IF EXISTS bar");
        }
    }

    @Test
    public void testCurrentUserInView()
    {
        checkState(getSession().getCatalog().isPresent(), "catalog is not set");
        checkState(getSession().getSchema().isPresent(), "schema is not set");
        String testAccountsUnqualifiedName = "test_accounts";
        String testAccountsViewUnqualifiedName = "test_accounts_view";
        String testAccountsViewFullyQualifiedName = format("%s.%s.%s", getSession().getCatalog().get(), getSession().getSchema().get(), testAccountsViewUnqualifiedName);
        assertUpdate(format("CREATE TABLE %s AS SELECT user_name, account_name" +
                " FROM (VALUES ('user1', 'account1'), ('user2', 'account2'))" +
                " t (user_name, account_name)", testAccountsUnqualifiedName), 2);
        assertUpdate(format("CREATE VIEW %s AS SELECT account_name FROM test_accounts WHERE user_name = CURRENT_USER", testAccountsViewUnqualifiedName));
        assertUpdate(format("GRANT SELECT ON %s TO user1", testAccountsViewFullyQualifiedName));
        assertUpdate(format("GRANT SELECT ON %s TO user2", testAccountsViewFullyQualifiedName));

        Session user1 = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema(getSession().getSchema())
                .setIdentity(Identity.forUser("user1").withPrincipal(getSession().getIdentity().getPrincipal()).build())
                .build();

        Session user2 = testSessionBuilder()
                .setCatalog(getSession().getCatalog())
                .setSchema(getSession().getSchema())
                .setIdentity(Identity.forUser("user2").withPrincipal(getSession().getIdentity().getPrincipal()).build())
                .build();

        assertQuery(user1, "SELECT account_name FROM test_accounts_view", "VALUES 'account1'");
        assertQuery(user2, "SELECT account_name FROM test_accounts_view", "VALUES 'account2'");
        assertUpdate("DROP VIEW test_accounts_view");
        assertUpdate("DROP TABLE test_accounts");
    }

    @Test
    public void testCollectColumnStatisticsOnCreateTable()
    {
        String tableName = "test_collect_column_statistics_on_create_table";
        assertUpdate(format("" +
                "CREATE TABLE %s " +
                "WITH ( " +
                " partitioned_by = ARRAY['p_varchar'] " +
                ") " +
                "AS " +
                "SELECT c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar " +
                "FROM ( " +
                " VALUES " +
                " (null, null, null, null, null, null, 'p1'), " +
                " (null, null, null, null, null, null, 'p1'), " +
                " (true, BIGINT '1', DOUBLE '2.2', TIMESTAMP '2012-08-08 01:00:00.000', VARCHAR 'abc1', CAST('bcd1' AS VARBINARY), 'p1')," +
                " (false, BIGINT '0', DOUBLE '1.2', TIMESTAMP '2012-08-08 00:00:00.000', VARCHAR 'abc2', CAST('bcd2' AS VARBINARY), 'p1')," +
                " (null, null, null, null, null, null, 'p2'), " +
                " (null, null, null, null, null, null, 'p2'), " +
                " (true, BIGINT '2', DOUBLE '3.3', TIMESTAMP '2012-09-09 01:00:00.000', VARCHAR 'cba1', CAST('dcb1' AS 
VARBINARY), 'p2'), " + " (false, BIGINT '1', DOUBLE '2.3', TIMESTAMP '2012-09-09 00:00:00.000', VARCHAR 'cba2', CAST('dcb2' AS VARBINARY), 'p2') " + ") AS x (c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar)", tableName), 8); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1')", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0E0, 0.5E0, null, null, null), " + "('c_bigint', null, 2.0E0, 0.5E0, null, '0', '1'), " + "('c_double', null, 2.0E0, 0.5E0, null, '1.2', '2.2'), " + "('c_timestamp', null, 2.0E0, 0.5E0, null, null, null), " + "('c_varchar', 8.0E0, 2.0E0, 0.5E0, null, null, null), " + "('c_varbinary', 8.0E0, null, 0.5E0, null, null, null), " + "('p_varchar', 8.0E0, 1.0E0, 0.0E0, null, null, null), " + "(null, null, null, null, 4.0E0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2')", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0E0, 0.5E0, null, null, null), " + "('c_bigint', null, 2.0E0, 0.5E0, null, '1', '2'), " + "('c_double', null, 2.0E0, 0.5E0, null, '2.3', '3.3'), " + "('c_timestamp', null, 2.0E0, 0.5E0, null, null, null), " + "('c_varchar', 8.0E0, 2.0E0, 0.5E0, null, null, null), " + "('c_varbinary', 8.0E0, null, 0.5E0, null, null, null), " + "('p_varchar', 8.0E0, 1.0E0, 0.0E0, null, null, null), " + "(null, null, null, null, 4.0E0, null, null)"); // non-existent partition assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3')", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate("DROP TABLE " + 
tableName); } @Test public void testCollectStatisticsOnCreateTableTimestampWithPrecision() { Session nanosecondsTimestamp = withTimestampPrecision(getSession(), HiveTimestampPrecision.NANOSECONDS); String tableName = "test_stats_on_create_timestamp_with_precision"; try { assertUpdate(nanosecondsTimestamp, "CREATE TABLE " + tableName + "(c_timestamp) AS VALUES " + "TIMESTAMP '1988-04-08 02:03:04.111', " + "TIMESTAMP '1988-04-08 02:03:04.115', " + "TIMESTAMP '1988-04-08 02:03:04.115', " + "TIMESTAMP '1988-04-08 02:03:04.119', " + "TIMESTAMP '1988-04-08 02:03:04.111111', " + "TIMESTAMP '1988-04-08 02:03:04.111115', " + "TIMESTAMP '1988-04-08 02:03:04.111115', " + "TIMESTAMP '1988-04-08 02:03:04.111999', " + "TIMESTAMP '1988-04-08 02:03:04.111111111', " + "TIMESTAMP '1988-04-08 02:03:04.111111115', " + "TIMESTAMP '1988-04-08 02:03:04.111111115', " + "TIMESTAMP '1988-04-08 02:03:04.111111999' ", 12); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_timestamp', null, 9.0, 0.0, null, null, null), " + "(null, null, null, null, 12.0, null, null)"); } finally { assertUpdate("DROP TABLE IF EXISTS " + tableName); } } @Test public void testCollectColumnStatisticsOnInsert() { String tableName = "test_collect_column_statistics_on_insert"; assertUpdate(format("" + "CREATE TABLE %s ( " + " c_boolean BOOLEAN, " + " c_bigint BIGINT, " + " c_double DOUBLE, " + " c_timestamp TIMESTAMP, " + " c_varchar VARCHAR, " + " c_varbinary VARBINARY, " + " p_varchar VARCHAR " + ") " + "WITH ( " + " partitioned_by = ARRAY['p_varchar'] " + ")", tableName)); assertUpdate(format("" + "INSERT INTO %s " + "SELECT c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar " + "FROM ( " + " VALUES " + " (null, null, null, null, null, null, 'p1'), " + " (null, null, null, null, null, null, 'p1'), " + " (true, BIGINT '1', DOUBLE '2.2', TIMESTAMP '2012-08-08 01:00', VARCHAR 'abc1', CAST('bcd1' AS VARBINARY), 'p1')," + " (false, BIGINT '0', DOUBLE '1.2', TIMESTAMP 
'2012-08-08 00:00', VARCHAR 'abc2', CAST('bcd2' AS VARBINARY), 'p1')," + " (null, null, null, null, null, null, 'p2'), " + " (null, null, null, null, null, null, 'p2'), " + " (true, BIGINT '2', DOUBLE '3.3', TIMESTAMP '2012-09-09 01:00', VARCHAR 'cba1', CAST('dcb1' AS VARBINARY), 'p2'), " + " (false, BIGINT '1', DOUBLE '2.3', TIMESTAMP '2012-09-09 00:00', VARCHAR 'cba2', CAST('dcb2' AS VARBINARY), 'p2') " + ") AS x (c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar)", tableName), 8); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1')", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0E0, 0.5E0, null, null, null), " + "('c_bigint', null, 2.0E0, 0.5E0, null, '0', '1'), " + "('c_double', null, 2.0E0, 0.5E0, null, '1.2', '2.2'), " + "('c_timestamp', null, 2.0E0, 0.5E0, null, null, null), " + "('c_varchar', 8.0E0, 2.0E0, 0.5E0, null, null, null), " + "('c_varbinary', 8.0E0, null, 0.5E0, null, null, null), " + "('p_varchar', 8.0E0, 1.0E0, 0.0E0, null, null, null), " + "(null, null, null, null, 4.0E0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2')", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0E0, 0.5E0, null, null, null), " + "('c_bigint', null, 2.0E0, 0.5E0, null, '1', '2'), " + "('c_double', null, 2.0E0, 0.5E0, null, '2.3', '3.3'), " + "('c_timestamp', null, 2.0E0, 0.5E0, null, null, null), " + "('c_varchar', 8.0E0, 2.0E0, 0.5E0, null, null, null), " + "('c_varbinary', 8.0E0, null, 0.5E0, null, null, null), " + "('p_varchar', 8.0E0, 1.0E0, 0.0E0, null, null, null), " + "(null, null, null, null, 4.0E0, null, null)"); // non-existent partition assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3')", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, 
null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate("DROP TABLE " + tableName); } @Test public void testCollectColumnStatisticsOnInsertToEmptyTable() { String tableName = "test_collect_column_statistics_empty_table"; assertUpdate(format("CREATE TABLE %s (col INT)", tableName)); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('col', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate(format("INSERT INTO %s (col) VALUES 50, 100, 1, 200, 2", tableName), 5); assertQuery(format("SHOW STATS FOR %s", tableName), "SELECT * FROM VALUES " + "('col', null, 5.0, 0.0, null, 1, 200), " + "(null, null, null, null, 5.0, null, null)"); assertUpdate("DROP TABLE " + tableName); } @Test public void testCollectColumnStatisticsOnInsertToPartiallyAnalyzedTable() { String tableName = "test_collect_column_statistics_partially_analyzed_table"; assertUpdate(format("CREATE TABLE %s (col INT, col2 INT)", tableName)); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('col', 0.0, 0.0, 1.0, null, null, null), " + "('col2', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate(format("ANALYZE %s WITH (columns = ARRAY['col2'])", tableName), 0); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('col', 0.0, 0.0, 1.0, null, null, null), " + "('col2', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate(format("INSERT INTO %s (col, col2) VALUES (50, 49), (100, 99), (1, 0), (200, 199), (2, 1)", tableName), 5); assertQuery(format("SHOW STATS FOR %s", tableName), "SELECT * FROM VALUES " + "('col', null, 5.0, 0.0, null, 1, 200), " + "('col2', null, 5.0, 0.0, null, 0, 199), " + "(null, null, null, null, 5.0, null, null)"); 
assertUpdate("DROP TABLE " + tableName); } @Test public void testAnalyzePropertiesSystemTable() { assertQuery( "SELECT * FROM system.metadata.analyze_properties WHERE catalog_name = 'hive'", "SELECT * FROM VALUES " + "('hive', 'partitions', '', 'array(array(varchar))', 'Partitions to be analyzed'), " + "('hive', 'columns', '', 'array(varchar)', 'Columns to be analyzed')"); } @Test public void testAnalyzeEmptyTable() { String tableName = "test_analyze_empty_table"; assertUpdate(format("CREATE TABLE %s (c_bigint BIGINT, c_varchar VARCHAR(2))", tableName)); assertUpdate("ANALYZE " + tableName, 0); } @Test public void testInvalidAnalyzePartitionedTable() { String tableName = "test_invalid_analyze_partitioned_table"; // Test table does not exist assertQueryFails("ANALYZE " + tableName, format(".*Table 'hive.tpch.%s' does not exist.*", tableName)); createPartitionedTableForAnalyzeTest(tableName); // Test invalid property assertQueryFails(format("ANALYZE %s WITH (error = 1)", tableName), ".*'hive' does not support analyze property 'error'.*"); assertQueryFails(format("ANALYZE %s WITH (partitions = 1)", tableName), "\\QInvalid value for analyze property 'partitions': Cannot convert [1] to array(array(varchar))\\E"); assertQueryFails(format("ANALYZE %s WITH (partitions = NULL)", tableName), "\\QInvalid value for analyze property 'partitions': Cannot convert [null] to array(array(varchar))\\E"); assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[NULL])", tableName), ".*Invalid null value in analyze partitions property.*"); // Test non-existent partition assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p4', '10']])", tableName), ".*Partition no longer exists.*"); // Test partition schema mismatch assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p4']])", tableName), "Partition value count does not match partition column count"); assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p4', '10', 'error']])", tableName), 
"Partition value count does not match partition column count"); // Drop the partitioned test table assertUpdate("DROP TABLE " + tableName); } @Test public void testInvalidAnalyzeUnpartitionedTable() { String tableName = "test_invalid_analyze_unpartitioned_table"; // Test table does not exist assertQueryFails("ANALYZE " + tableName, ".*Table.*does not exist.*"); createUnpartitionedTableForAnalyzeTest(tableName); // Test partition properties on unpartitioned table assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[])", tableName), "Partition list provided but table is not partitioned"); assertQueryFails(format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p1']])", tableName), "Partition list provided but table is not partitioned"); // Drop the unpartitioned test table assertUpdate("DROP TABLE " + tableName); } @Test public void testAnalyzePartitionedTable() { String tableName = "test_analyze_partitioned_table"; createPartitionedTableForAnalyzeTest(tableName); // No column stats before ANALYZE assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 24.0, 3.0, 0.25, null, null, null), " + "('p_bigint', null, 2.0, 0.25, null, '7', '8'), " + "(null, null, null, null, 16.0, null, null)"); // No column stats after running an empty analyze assertUpdate(format("ANALYZE %s WITH (partitions = ARRAY[])", tableName), 0); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + 
"('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 24.0, 3.0, 0.25, null, null, null), " + "('p_bigint', null, 2.0, 0.25, null, '7', '8'), " + "(null, null, null, null, 16.0, null, null)"); // Run analyze on 3 partitions including a null partition and a duplicate partition assertUpdate(format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p1', '7'], ARRAY['p2', '7'], ARRAY['p2', '7'], ARRAY[NULL, NULL]])", tableName), 12); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.5, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '0', '1'), " + "('c_double', null, 2.0, 0.5, null, '1.2', '2.2'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', 4.0, null, 0.5, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.5, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '1', '2'), " + "('c_double', null, 2.0, 0.5, null, '2.3', '3.3'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', 4.0, null, 0.5, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 1.0, 0.0, null, null, null), " + "('c_bigint', null, 4.0, 0.0, null, '4', '7'), " + "('c_double', null, 4.0, 0.0, null, 
'4.7', '7.7'), " + "('c_timestamp', null, 4.0, 0.0, null, null, null), " + "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " + "('c_varbinary', 8.0, null, 0.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 4.0, null, null)"); // Partition [p3, 8], [e1, 9], [e2, 9] have no column stats assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '8', '8'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, 
null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); // Run analyze on the whole table assertUpdate("ANALYZE " + tableName, 16); // All partitions except empty partitions have column stats assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.5, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '0', '1'), " + "('c_double', null, 2.0, 0.5, null, '1.2', '2.2'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', 4.0, null, 0.5, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.5, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '1', '2'), " + "('c_double', null, 2.0, 0.5, null, '2.3', '3.3'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', 4.0, null, 0.5, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 1.0, 0.0, null, null, null), " + "('c_bigint', null, 4.0, 0.0, null, '4', '7'), " + "('c_double', null, 4.0, 0.0, null, '4.7', '7.7'), " + "('c_timestamp', null, 4.0, 0.0, null, null, null), " + "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " + 
"('c_varbinary', 8.0, null, 0.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.5, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '2', '3'), " + "('c_double', null, 2.0, 0.5, null, '3.4', '4.4'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', 4.0, null, 0.5, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '8', '8'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, 
null, 0.0, null, null)"); // Drop the partitioned test table assertUpdate("DROP TABLE " + tableName); } @Test public void testAnalyzePartitionedTableWithColumnSubset() { String tableName = "test_analyze_columns_partitioned_table"; createPartitionedTableForAnalyzeTest(tableName); // No column stats before ANALYZE assertQuery( "SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 24.0, 3.0, 0.25, null, null, null), " + "('p_bigint', null, 2.0, 0.25, null, '7', '8'), " + "(null, null, null, null, 16.0, null, null)"); // Run analyze on 3 partitions including a null partition and a duplicate partition, // restricting to just 2 columns (one duplicate) assertUpdate( format("ANALYZE %s WITH (partitions = ARRAY[ARRAY['p1', '7'], ARRAY['p2', '7'], ARRAY['p2', '7'], ARRAY[NULL, NULL]], " + "columns = ARRAY['c_timestamp', 'c_varchar', 'c_timestamp'])", tableName), 12); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, 
null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, 4.0, 0.0, null, null, null), " + "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 4.0, null, null)"); // Partition [p3, 8], [e1, 9], [e2, 9] have no column stats assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '8', '8'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, 
null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); // Run analyze again, this time on 2 new columns (for all partitions); the previously computed stats // should be preserved assertUpdate( format("ANALYZE %s WITH (columns = ARRAY['c_bigint', 'c_double'])", tableName), 16); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '0', '1'), " + "('c_double', null, 2.0, 0.5, null, '1.2', '2.2'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, 
null), " + "('c_bigint', null, 2.0, 0.5, null, '1', '2'), " + "('c_double', null, 2.0, 0.5, null, '2.3', '3.3'), " + "('c_timestamp', null, 2.0, 0.5, null, null, null), " + "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, 4.0, 0.0, null, '4', '7'), " + "('c_double', null, 4.0, 0.0, null, '4.7', '7.7'), " + "('c_timestamp', null, 4.0, 0.0, null, null, null), " + "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName), "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, 2.0, 0.5, null, '2', '3'), " + "('c_double', null, 2.0, 0.5, null, '3.4', '4.4'), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " + "('p_bigint', null, 1.0, 0.0, null, '8', '8'), " + "(null, null, null, null, 4.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, 
null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertQuery( format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName), "SELECT * FROM VALUES " + "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " + "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " + "('c_double', 0.0, 0.0, 1.0, null, null, null), " + "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " + "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " + "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " + "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " + "(null, null, null, null, 0.0, null, null)"); assertUpdate("DROP TABLE " + tableName); } @Test public void testAnalyzeUnpartitionedTable() { String tableName = "test_analyze_unpartitioned_table"; createUnpartitionedTableForAnalyzeTest(tableName); // No column stats before ANALYZE assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_boolean', null, null, null, null, null, null), " + "('c_bigint', null, null, null, null, null, null), " + "('c_double', null, null, null, null, null, null), " + "('c_timestamp', null, null, null, null, null, null), " + "('c_varchar', null, null, null, null, null, null), " + "('c_varbinary', null, null, null, null, null, null), " + "('p_varchar', null, null, null, null, null, null), " + "('p_bigint', null, null, null, null, null, null), " + "(null, null, null, null, 16.0, null, null)"); // Run analyze on the whole table assertUpdate("ANALYZE " + tableName, 16); assertQuery("SHOW STATS FOR " + tableName, "SELECT * FROM VALUES " + "('c_boolean', null, 2.0, 0.375, null, null, null), " + "('c_bigint', null, 8.0, 0.375, null, '0', '7'), " + "('c_double', null, 10.0, 0.375, null, '1.2', '7.7'), " + 
                        "('c_timestamp', null, 10.0, 0.375, null, null, null), " +
                        "('c_varchar', 40.0, 10.0, 0.375, null, null, null), " +
                        "('c_varbinary', 20.0, null, 0.375, null, null, null), " +
                        "('p_varchar', 24.0, 3.0, 0.25, null, null, null), " +
                        "('p_bigint', null, 2.0, 0.25, null, '7', '8'), " +
                        "(null, null, null, null, 16.0, null, null)");

        // Drop the unpartitioned test table
        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testAnalyzeTableTimestampWithPrecision()
    {
        String catalog = getSession().getCatalog().get();
        Session nanosecondsTimestamp = Session.builder(withTimestampPrecision(getSession(), HiveTimestampPrecision.NANOSECONDS))
                // Disable column statistics collection when creating the table
                .setCatalogSessionProperty(catalog, "collect_column_statistics_on_write", "false")
                .build();
        Session microsecondsTimestamp = withTimestampPrecision(getSession(), HiveTimestampPrecision.MICROSECONDS);
        Session millisecondsTimestamp = withTimestampPrecision(getSession(), HiveTimestampPrecision.MILLISECONDS);

        String tableName = "test_analyze_timestamp_with_precision";

        try {
            assertUpdate(
                    nanosecondsTimestamp,
                    "CREATE TABLE " + tableName + "(c_timestamp) AS VALUES " +
                            "TIMESTAMP '1988-04-08 02:03:04.111', " +
                            "TIMESTAMP '1988-04-08 02:03:04.115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.119', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111111', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111999', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111111111', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111111115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111111115', " +
                            "TIMESTAMP '1988-04-08 02:03:04.111111999' ",
                    12);

            assertQuery("SHOW STATS FOR " + tableName,
                    "SELECT * FROM VALUES " +
                            "('c_timestamp', null, null, null, null, null, null), " +
                            "(null, null, null, null, 12.0, null, null)");

            assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));
            assertUpdate(nanosecondsTimestamp, "ANALYZE " + tableName, 12);
            assertQuery("SHOW STATS FOR " + tableName,
                    "SELECT * FROM VALUES " +
                            "('c_timestamp', null, 9.0, 0.0, null, null, null), " +
                            "(null, null, null, null, 12.0, null, null)");

            assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));
            assertUpdate(microsecondsTimestamp, "ANALYZE " + tableName, 12);
            assertQuery("SHOW STATS FOR " + tableName,
                    "SELECT * FROM VALUES " +
                            "('c_timestamp', null, 7.0, 0.0, null, null, null), " +
                            "(null, null, null, null, 12.0, null, null)");

            assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));
            assertUpdate(millisecondsTimestamp, "ANALYZE " + tableName, 12);
            assertQuery("SHOW STATS FOR " + tableName,
                    "SELECT * FROM VALUES " +
                            "('c_timestamp', null, 4.0, 0.0, null, null, null), " +
                            "(null, null, null, null, 12.0, null, null)");
        }
        finally {
            assertUpdate("DROP TABLE IF EXISTS " + tableName);
        }
    }

    @Test
    public void testInvalidColumnsAnalyzeTable()
    {
        String tableName = "test_invalid_analyze_table";
        createUnpartitionedTableForAnalyzeTest(tableName);

        // Specifying a null column name is not allowed
        assertQueryFails(
                "ANALYZE " + tableName + " WITH (columns = ARRAY[null])",
                ".*Invalid null value in analyze columns property.*");

        // Column names must refer to existing columns
        assertQueryFails(
                "ANALYZE " + tableName + " WITH (columns = ARRAY['invalid_name'])",
                ".*Invalid columns specified for analysis.*");

        // Column names must be strings
        assertQueryFails(
                "ANALYZE " + tableName + " WITH (columns = ARRAY[42])",
                "\\QInvalid value for analyze property 'columns': Cannot convert [ARRAY[42]] to array(varchar)\\E");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testAnalyzeUnpartitionedTableWithColumnSubset()
    {
        String tableName = "test_analyze_columns_unpartitioned_table";
        createUnpartitionedTableForAnalyzeTest(tableName);

        // No column stats before ANALYZE
        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, 16.0, null, null)");

        // Run analyze on a subset of columns
        assertUpdate("ANALYZE " + tableName + " WITH (columns = ARRAY['c_bigint', 'c_double'])", 16);

        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, 8.0, 0.375, null, '0', '7'), " +
                        "('c_double', null, 10.0, 0.375, null, '1.2', '7.7'), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, 16.0, null, null)");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testAnalyzeUnpartitionedTableWithEmptyColumnSubset()
    {
        String tableName = "test_analyze_columns_unpartitioned_table_with_empty_column_subset";
        createUnpartitionedTableForAnalyzeTest(tableName);

        // Clear table stats
        assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));

        // No stats before ANALYZE
        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");

        // Run analyze with an empty column list; only the row count is collected
        assertUpdate("ANALYZE " + tableName + " WITH (columns = ARRAY[])", 16);

        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, 16.0, null, null)");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testDropStatsPartitionedTable()
    {
        String tableName = "test_drop_stats_partitioned_table";
        createPartitionedTableForAnalyzeTest(tableName)

;
        // Run analyze on the entire table
        assertUpdate("ANALYZE " + tableName, 16);

        // All partitions except empty partitions have column stats
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 2.0, 0.5, null, null, null), " +
                        "('c_bigint', null, 2.0, 0.5, null, '0', '1'), " +
                        "('c_double', null, 2.0, 0.5, null, '1.2', '2.2'), " +
                        "('c_timestamp', null, 2.0, 0.5, null, null, null), " +
                        "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " +
                        "('c_varbinary', 4.0, null, 0.5, null, null, null), " +
                        "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " +
                        "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " +
                        "(null, null, null, null, 4.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 2.0, 0.5, null, null, null), " +
                        "('c_bigint', null, 2.0, 0.5, null, '1', '2'), " +
                        "('c_double', null, 2.0, 0.5, null, '2.3', '3.3'), " +
                        "('c_timestamp', null, 2.0, 0.5, null, null, null), " +
                        "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " +
                        "('c_varbinary', 4.0, null, 0.5, null, null, null), " +
                        "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " +
                        "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " +
                        "(null, null, null, null, 4.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 1.0, 0.0, null, null, null), " +
                        "('c_bigint', null, 4.0, 0.0, null, '4', '7'), " +
                        "('c_double', null, 4.0, 0.0, null, '4.7', '7.7'), " +
                        "('c_timestamp', null, 4.0, 0.0, null, null, null), " +
                        "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " +
                        "('c_varbinary', 8.0, null, 0.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 4.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 2.0, 0.5, null, null, null), " +
                        "('c_bigint', null, 2.0, 0.5, null, '2', '3'), " +
                        "('c_double', null, 2.0, 0.5, null, '3.4', '4.4'), " +
                        "('c_timestamp', null, 2.0, 0.5, null, null, null), " +
                        "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " +
                        "('c_varbinary', 4.0, null, 0.5, null, null, null), " +
                        "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " +
                        "('p_bigint', null, 1.0, 0.0, null, '8', '8'), " +
                        "(null, null, null, null, 4.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_double', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 0.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_double', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 0.0, null, null)");

        // Drop stats for 2 partitions
        assertUpdate(format("CALL system.drop_stats('%s', '%s', ARRAY[ARRAY['p2', '7'], ARRAY['p3', '8']])", TPCH_SCHEMA, tableName));

        // Only stats for the specified partitions should be removed
        // no change
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 2.0, 0.5, null, null, null), " +
                        "('c_bigint', null, 2.0, 0.5, null, '0', '1'), " +
                        "('c_double', null, 2.0, 0.5, null, '1.2', '2.2'), " +
                        "('c_timestamp', null, 2.0, 0.5, null, null, null), " +
                        "('c_varchar', 8.0, 2.0, 0.5, null, null, null), " +
                        "('c_varbinary', 4.0, null, 0.5, null, null, null), " +
                        "('p_varchar', 8.0, 1.0, 0.0, null, null, null), " +
                        "('p_bigint', null, 1.0, 0.0, null, '7', '7'), " +
                        "(null, null, null, null, 4.0, null, null)");
        // [p2, 7] had stats dropped
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        // no change
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 1.0, 0.0, null, null, null), " +
                        "('c_bigint', null, 4.0, 0.0, null, '4', '7'), " +
                        "('c_double', null, 4.0, 0.0, null, '4.7', '7.7'), " +
                        "('c_timestamp', null, 4.0, 0.0, null, null, null), " +
                        "('c_varchar', 16.0, 4.0, 0.0, null, null, null), " +
                        "('c_varbinary', 8.0, null, 0.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 4.0, null, null)");
        // [p3, 8] had stats dropped
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_double', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 0.0, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_double', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_timestamp', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('c_varbinary', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_varchar', 0.0, 0.0, 1.0, null, null, null), " +
                        "('p_bigint', 0.0, 0.0, 1.0, null, null, null), " +
                        "(null, null, null, null, 0.0, null, null)");

        // Drop stats for the entire table
        assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));

        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p1' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p2' AND p_bigint = 7)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar IS NULL AND p_bigint IS NULL)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'p3' AND p_bigint = 8)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e1' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");
        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar = 'e2' AND p_bigint = 9)", tableName),
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");

        // All table stats are gone
        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testDropStatsUnpartitionedTable()
    {
        String tableName = "test_drop_all_stats_unpartitioned_table";
        createUnpartitionedTableForAnalyzeTest(tableName);

        // Run analyze on the whole table
        assertUpdate("ANALYZE " + tableName, 16);

        assertQuery("SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, 2.0, 0.375, null, null, null), " +
                        "('c_bigint', null, 8.0, 0.375, null, '0', '7'), " +
                        "('c_double', null, 10.0, 0.375, null, '1.2', '7.7'), " +
                        "('c_timestamp', null, 10.0, 0.375, null, null, null), " +
                        "('c_varchar', 40.0, 10.0, 0.375, null, null, null), " +
                        "('c_varbinary', 20.0, null, 0.375, null, null, null), " +
                        "('p_varchar', 24.0, 3.0, 0.25, null, null, null), " +
                        "('p_bigint', null, 2.0, 0.25, null, '7', '8'), " +
                        "(null, null, null, null, 16.0, null, null)");

        // Drop stats for the entire table
        assertUpdate(format("CALL system.drop_stats('%s', '%s')", TPCH_SCHEMA, tableName));

        // All table stats are gone
        assertQuery(
                "SHOW STATS FOR " + tableName,
                "SELECT * FROM VALUES " +
                        "('c_boolean', null, null, null, null, null, null), " +
                        "('c_bigint', null, null, null, null, null, null), " +
                        "('c_double', null, null, null, null, null, null), " +
                        "('c_timestamp', null, null, null, null, null, null), " +
                        "('c_varchar', null, null, null, null, null, null), " +
                        "('c_varbinary', null, null, null, null, null, null), " +
                        "('p_varchar', null, null, null, null, null, null), " +
                        "('p_bigint', null, null, null, null, null, null), " +
                        "(null, null, null, null, null, null, null)");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testInvalidDropStats()
    {
        String unpartitionedTableName = "test_invalid_drop_all_stats_unpartitioned_table";
        createUnpartitionedTableForAnalyzeTest(unpartitionedTableName);
        String partitionedTableName = "test_invalid_drop_all_stats_partitioned_table";
        createPartitionedTableForAnalyzeTest(partitionedTableName);

        assertQueryFails(
                format("CALL system.drop_stats('%s', '%s', ARRAY[ARRAY['p2', '7']])", TPCH_SCHEMA, unpartitionedTableName),
                "Cannot specify partition values for an unpartitioned table");
        assertQueryFails(
                format("CALL system.drop_stats('%s', '%s', ARRAY[ARRAY['p2', '7'], NULL])", TPCH_SCHEMA, partitionedTableName),
                "Null partition value");
        assertQueryFails(
                format("CALL system.drop_stats('%s', '%s', ARRAY[])", TPCH_SCHEMA, partitionedTableName),
                "No partitions provided");
        assertQueryFails(
                format("CALL system.drop_stats('%s', '%s', ARRAY[ARRAY['p2', '7', 'dummy']])", TPCH_SCHEMA, partitionedTableName),
                ".*don't match the number of partition columns.*");
        assertQueryFails(
                format("CALL system.drop_stats('%s', '%s', ARRAY[ARRAY['WRONG', 'KEY']])", TPCH_SCHEMA, partitionedTableName),
                "Partition '.*' not found");
                                " WITH (partitioned_by = ARRAY['p_varchar', 'p_bigint'])\n" : " ") +
                        "AS " +
                        "SELECT c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar, p_bigint " +
                        "FROM ( " +
                        "  VALUES " +
                        // p_varchar = 'p1', p_bigint = BIGINT '7'
                        "    (null, null, null, null, null, null, 'p1', BIGINT '7'), " +
                        "    (null, null, null, null, null, null, 'p1', BIGINT '7'), " +
                        "    (true, BIGINT '1', DOUBLE '2.2', TIMESTAMP '2012-08-08 01:00:00.000', 'abc1', X'bcd1', 'p1', BIGINT '7'), " +
                        "    (false, BIGINT '0', DOUBLE '1.2', TIMESTAMP '2012-08-08 00:00:00.000', 'abc2', X'bcd2', 'p1', BIGINT '7'), " +
                        // p_varchar = 'p2', p_bigint = BIGINT '7'
                        "    (null, null, null, null, null, null, 'p2', BIGINT '7'), " +
                        "    (null, null, null, null, null, null, 'p2', BIGINT '7'), " +
                        "    (true, BIGINT '2', DOUBLE '3.3', TIMESTAMP '2012-09-09 01:00:00.000', 'cba1', X'dcb1', 'p2', BIGINT '7'), " +
                        "    (false, BIGINT '1', DOUBLE '2.3', TIMESTAMP '2012-09-09 00:00:00.000', 'cba2', X'dcb2', 'p2', BIGINT '7'), " +
                        // p_varchar = 'p3', p_bigint = BIGINT '8'
                        "    (null, null, null, null, null, null, 'p3', BIGINT '8'), " +
                        "    (null, null, null, null, null, null, 'p3', BIGINT '8'), " +
                        "    (true, BIGINT '3', DOUBLE '4.4', TIMESTAMP '2012-10-10 01:00:00.000', 'bca1', X'cdb1', 'p3', BIGINT '8'), " +
                        "    (false, BIGINT '2', DOUBLE '3.4', TIMESTAMP '2012-10-10 00:00:00.000', 'bca2', X'cdb2', 'p3', BIGINT '8'), " +
                        // p_varchar = NULL, p_bigint = NULL
                        "    (false, BIGINT '7', DOUBLE '7.7', TIMESTAMP '1977-07-07 07:07:00.000', 'efa1', X'efa1', NULL, NULL), " +
                        "    (false, BIGINT '6', DOUBLE '6.7', TIMESTAMP '1977-07-07 07:06:00.000', 'efa2', X'efa2', NULL, NULL), " +
                        "    (false, BIGINT '5', DOUBLE '5.7', TIMESTAMP '1977-07-07 07:05:00.000', 'efa3', X'efa3', NULL, NULL), " +
                        "    (false, BIGINT '4', DOUBLE '4.7', TIMESTAMP '1977-07-07 07:04:00.000', 'efa4', X'efa4', NULL, NULL) " +
                        ") AS x (c_boolean, c_bigint, c_double, c_timestamp, c_varchar, c_varbinary, p_varchar, p_bigint)",
                16);

        if (partitioned) {
            // Create empty partitions
            assertUpdate(disableColumnStatsSession, format("CALL system.create_empty_partition('%s', '%s', ARRAY['p_varchar', 'p_bigint'], ARRAY['%s', '%s'])", TPCH_SCHEMA, tableName, "e1", "9"));
            assertUpdate(disableColumnStatsSession, format("CALL system.create_empty_partition('%s', '%s', ARRAY['p_varchar', 'p_bigint'], ARRAY['%s', '%s'])", TPCH_SCHEMA, tableName, "e2", "9"));
        }
    }

    @Test
    public void testInsertMultipleColumnsFromSameChannel()
    {
        String tableName = "test_insert_multiple_columns_same_channel";
        assertUpdate(format("" +
                "CREATE TABLE %s ( " +
                "  c_bigint_1 BIGINT, " +
                "  c_bigint_2 BIGINT, " +
                "  p_varchar_1 VARCHAR, " +
                "  p_varchar_2 VARCHAR " +
                ") " +
                "WITH ( " +
                "  partitioned_by = ARRAY['p_varchar_1', 'p_varchar_2'] " +
                ")", tableName));

        assertUpdate(format("" +
                "INSERT INTO %s " +
                "SELECT 1 c_bigint_1, 1 c_bigint_2, '2' p_varchar_1, '2' p_varchar_2 ", tableName), 1);

        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar_1 = '2' AND p_varchar_2 = '2')", tableName),
                "SELECT * FROM VALUES " +
                        "('c_bigint_1', null, 1.0E0, 0.0E0, null, '1', '1'), " +
                        "('c_bigint_2', null, 1.0E0, 0.0E0, null, '1', '1'), " +
                        "('p_varchar_1', 1.0E0, 1.0E0, 0.0E0, null, null, null), " +
                        "('p_varchar_2', 1.0E0, 1.0E0, 0.0E0, null, null, null), " +
                        "(null, null, null, null, 1.0E0, null, null)");

        assertUpdate(format("" +
                "INSERT INTO %s (c_bigint_1, c_bigint_2, p_varchar_1, p_varchar_2) " +
                "SELECT orderkey, orderkey, orderstatus, orderstatus " +
                "FROM orders " +
                "WHERE orderstatus='O' AND orderkey = 15008", tableName), 1);

        assertQuery(format("SHOW STATS FOR (SELECT * FROM %s WHERE p_varchar_1 = 'O' AND p_varchar_2 = 'O')", tableName),
                "SELECT * FROM VALUES " +
                        "('c_bigint_1', null, 1.0E0, 0.0E0, null, '15008', '15008'), " +
                        "('c_bigint_2', null, 1.0E0, 0.0E0, null, '15008', '15008'), " +
                        "('p_varchar_1', 1.0E0, 1.0E0, 0.0E0, null, null, null), " +
                        "('p_varchar_2', 1.0E0, 1.0E0, 0.0E0, null, null, null), " +
                        "(null, null, null, null, 1.0E0, null, null)");

        assertUpdate("DROP TABLE " + tableName);
    }

    @Test
    public void testCreateAvroTableWithSchemaUrl()
            throws Exception
    {
        String tableName = "test_create_avro_table_with_schema_url";
        File schemaFile = createAvroSchemaFile();

        String createTableSql = getAvroCreateTableSql(tableName, schemaFile.getAbsolutePath());
        String expectedShowCreateTable = getAvroCreateTableSql(tableName, schemaFile.toURI().toString());

        assertUpdate(createTableSql);

        try {
            MaterializedResult actual = computeActual("SHOW CREATE TABLE " + tableName);
            assertEquals(actual.getOnlyValue(), expectedShowCreateTable);
        }
        finally {
            assertUpdate("DROP TABLE " + tableName);
            verify(schemaFile.delete(), "cannot delete temporary file: %s", schemaFile);
        }
    }

    @Test
    public void testAlterAvroTableWithSchemaUrl()
            throws Exception
    {
        testAlterAvroTableWithSchemaUrl(true, true, true);
    }

    protected void testAlterAvroTableWithSchemaUrl(boolean renameColumn, boolean addColumn, boolean dropColumn)
            throws Exception
    {
        String tableName = "test_alter_avro_table_with_schema_url";
        File schemaFile = createAvroSchemaFile();

        assertUpdate(getAvroCreateTableSql(tableName, schemaFile.getAbsolutePath()));

        try {
            if (renameColumn) {
                assertQueryFails(format("ALTER TABLE %s RENAME COLUMN dummy_col TO new_dummy_col", tableName), "ALTER TABLE not supported when Avro schema url is set");
            }
            if (addColumn) {
                assertQueryFails(format("ALTER TABLE %s ADD COLUMN new_dummy_col VARCHAR", tableName), "ALTER TABLE not supported when Avro schema url is set");
            }
            if (dropColumn) {
                assertQueryFails(format("ALTER TABLE %s DROP COLUMN dummy_col", tableName), "ALTER TABLE not supported when Avro schema url is set");
            }
        }
        finally {
            assertUpdate("DROP TABLE " + tableName);
            verify(schemaFile.delete(), "cannot delete temporary file: %s", schemaFile);
        }
    }

    private String getAvroCreateTableSql(String tableName, String schemaFile)
    {
        return format("CREATE TABLE %s.%s.%s (\n" +
                        "   dummy_col varchar,\n" +
                        "   another_dummy_col varchar\n" +
                        ")\n" +
                        "WITH (\n" +
                        "   avro_schema_url = '%s',\n" +
                        "   format = 'AVRO'\n" +
                        ")",
                getSession().getCatalog().get(),
                getSession().getSchema().get(),
                tableName,
                schemaFile);
    }

    private static File createAvroSchemaFile()
            throws Exception
    {
        File schemaFile = File.createTempFile("avro_single_column-", ".avsc");
        String schema = "{\n" +
                "  \"namespace\": \"io.trino.test\",\n" +
                "  \"name\": \"single_column\",\n" +
                "  \"type\": \"record\",\n" +
                "  \"fields\": [\n" +
                "    { \"name\":\"string_col\", \"type\":\"string\" }\n" +
                "]}";
        asCharSink(schemaFile, UTF_8).write(schema);
        return schemaFile;
    }

    @Test
    public void testCreateOrcTableWithSchemaUrl()
    {
        @Language("SQL") String createTableSql = format("" +
                        "CREATE TABLE %s.%s.test_orc (\n" +
                        "   dummy_col varchar\n" +
                        ")\n" +
                        "WITH (\n" +
                        "   avro_schema_url = 'dummy.avsc',\n" +
                        "   format = 'ORC'\n" +
                        ")",
                getSession().getCatalog().get(),
                getSession().getSchema().get());

        assertQueryFails(createTableSql, "Cannot specify avro_schema_url table property for storage format: ORC");
    }

    @Test
    public void testCtasFailsWithAvroSchemaUrl()
    {
        @Language("SQL") String ctasSqlWithoutData = "CREATE TABLE create_avro\n" +
                "WITH (avro_schema_url = 'dummy_schema')\n" +
                "AS SELECT 'dummy_value' as dummy_col WITH NO DATA";

        assertQueryFails(ctasSqlWithoutData, "CREATE TABLE AS not supported when Avro schema url is set");

        @Language("SQL") String ctasSql = "CREATE TABLE create_avro\n" +
                "WITH (avro_schema_url = 'dummy_schema')\n" +
                "AS SELECT * FROM (VALUES('a')) t (a)";

        assertQueryFails(ctasSql, "CREATE TABLE AS not supported when Avro schema url is set");
    }

    @Test
    public void testBucketedTablesFailWithAvroSchemaUrl()
    {
        @Language("SQL") String createSql = "CREATE TABLE create_avro (dummy VARCHAR)\n" +
                "WITH (avro_schema_url = 'dummy_schema',\n" +
                "      bucket_count = 2, bucketed_by=ARRAY['dummy'])";

        assertQueryFails(createSql, "Bucketing/Partitioning columns not supported when Avro schema url is set");
    }

    @Test
    public void testPartitionedTablesFailWithAvroSchemaUrl()
    {
        @Language("SQL") String createSql = "CREATE TABLE create_avro (dummy VARCHAR)\n" +
                "WITH (avro_schema_url = 'dummy_schema',\n" +
                "      partitioned_by=ARRAY['dummy'])";

        assertQueryFails(createSql, "Bucketing/Partitioning columns not supported when Avro schema url is set");
    }

    @Test
    public void testPrunePartitionFailure()
    {
        assertUpdate("CREATE TABLE test_prune_failure\n" +
                "WITH (partitioned_by = ARRAY['p']) AS\n" +
                "SELECT 123 x, 'abc' p", 1);

        assertQueryReturnsEmptyResult("" +
                "SELECT * FROM test_prune_failure\n" +
                "WHERE x < 0 AND cast(p AS int) > 0");

        assertUpdate("DROP TABLE test_prune_failure");
    }

    @Test
    public void testTemporaryStagingDirectorySessionProperties()
    {
        String tableName = "test_temporary_staging_directory_session_properties";
        assertUpdate(format("CREATE TABLE %s(i int)", tableName));

        Session session = Session.builder(getSession())
                .setCatalogSessionProperty("hive", "temporary_staging_directory_enabled", "false")
                .build();

        HiveInsertTableHandle hiveInsertTableHandle = getHiveInsertTableHandle(session, tableName);
        assertEquals(hiveInsertTableHandle.getLocationHandle().getWritePath(), hiveInsertTableHandle.getLocationHandle().getTargetPath());

        session = Session.builder(getSession())
                .setCatalogSessionProperty("hive", "temporary_staging_directory_enabled", "true")
                .setCatalogSessionProperty("hive", "temporary_staging_directory_path", "/tmp/custom/temporary-${USER}")
                .build();

        hiveInsertTableHandle = getHiveInsertTableHandle(session, tableName);
        assertNotEquals(hiveInsertTableHandle.getLocationHandle().getWritePath(), hiveInsertTableHandle.getLocationHandle().getTargetPath());
        assertTrue(hiveInsertTableHandle.getLocationHandle().getWritePath().toString().startsWith("file:/tmp/custom/temporary-"));

        assertUpdate("DROP TABLE " + tableName);
    }

    private HiveInsertTableHandle getHiveInsertTableHandle(Session session, String tableName)
    {
        Metadata metadata = getDistributedQueryRunner().getCoordinator().getMetadata();
        return transaction(getQueryRunner().getTransactionManager(), getQueryRunner().getAccessControl())
                .execute(session,
transactionSession -> { QualifiedObjectName objectName = new QualifiedObjectName(catalog, TPCH_SCHEMA, tableName); Optional<TableHandle> handle = metadata.getTableHandle(transactionSession, objectName); List<ColumnHandle> columns = ImmutableList.copyOf(metadata.getColumnHandles(transactionSession, handle.get()).values()); InsertTableHandle insertTableHandle = metadata.beginInsert(transactionSession, handle.get(), columns); HiveInsertTableHandle hiveInsertTableHandle = (HiveInsertTableHandle) insertTableHandle.getConnectorHandle(); metadata.finishInsert(transactionSession, insertTableHandle, ImmutableList.of(), ImmutableList.of()); return hiveInsertTableHandle; }); } @Test public void testSortedWritingTempStaging() { String tableName = "test_sorted_writing"; @Language("SQL") String createTableSql = format("" + "CREATE TABLE %s " + "WITH (" + " bucket_count = 7," + " bucketed_by = ARRAY['shipmode']," + " sorted_by = ARRAY['shipmode']" + ") AS " + "SELECT * FROM tpch.tiny.lineitem", tableName); Session session = Session.builder(getSession()) .setCatalogSessionProperty("hive", "sorted_writing_enabled", "true") .setCatalogSessionProperty("hive", "temporary_staging_directory_enabled", "true") .setCatalogSessionProperty("hive", "temporary_staging_directory_path", "/tmp/custom/temporary-${USER}") .build(); assertUpdate(session, createTableSql, 60175L); MaterializedResult expected = computeActual("SELECT * FROM tpch.tiny.lineitem"); MaterializedResult actual = computeActual("SELECT * FROM " + tableName); assertEqualsIgnoreOrder(actual.getMaterializedRows(), expected.getMaterializedRows()); assertUpdate("DROP TABLE " + tableName); } @Test public void testUseSortedProperties() { String tableName = "test_propagate_table_scan_sorting_properties"; @Language("SQL") String createTableSql = format("" + "CREATE TABLE %s " + "WITH (" + " bucket_count = 8," + " bucketed_by = ARRAY['custkey']," + " sorted_by = ARRAY['custkey']" + ") AS " + "SELECT * FROM tpch.tiny.customer", 
tableName); assertUpdate(createTableSql, 1500L); @Language("SQL") String expected = "SELECT custkey FROM customer ORDER BY 1 NULLS FIRST LIMIT 100"; @Language("SQL") String actual = format("SELECT custkey FROM %s ORDER BY 1 NULLS FIRST LIMIT 100", tableName); Session session = getSession(); assertQuery(session, actual, expected, assertPartialLimitWithPreSortedInputsCount(session, 0)); session = Session.builder(getSession()) .setCatalogSessionProperty("hive", "propagate_table_scan_sorting_properties", "true") .build(); assertQuery(session, actual, expected, assertPartialLimitWithPreSortedInputsCount(session, 1)); assertUpdate("DROP TABLE " + tableName); } @Test(dataProvider = "testCreateTableWithCompressionCodecDataProvider") public void testCreateTableWithCompressionCodec(HiveCompressionCodec compressionCodec) { testWithAllStorageFormats((session, hiveStorageFormat) -> { if (isNativeParquetWriter(session, hiveStorageFormat) && compressionCodec == HiveCompressionCodec.LZ4) { // TODO (https://github.com/trinodb/trino/issues/9142) Support LZ4 compression with native Parquet writer assertThatThrownBy(() -> testCreateTableWithCompressionCodec(session, hiveStorageFormat, compressionCodec)) .hasMessage("Unsupported codec: LZ4"); return; } testCreateTableWithCompressionCodec(session, hiveStorageFormat, compressionCodec); }); } @DataProvider public Object[][] testCreateTableWithCompressionCodecDataProvider() { return Stream.of(HiveCompressionCodec.values()) .collect(toDataProvider()); } private void testCreateTableWithCompressionCodec(Session session, HiveStorageFormat storageFormat, HiveCompressionCodec compressionCodec) { session = Session.builder(session) .setCatalogSessionProperty(session.getCatalog().orElseThrow(), "compression_codec", compressionCodec.name()) .build(); String tableName = "test_table_with_compression_" + compressionCodec; assertUpdate(session, format("CREATE TABLE %s WITH (format = '%s') AS TABLE tpch.tiny.nation", tableName, storageFormat), 25); 
assertQuery("SELECT * FROM " + tableName, "SELECT * FROM nation"); assertQuery("SELECT count(*) FROM " + tableName, "VALUES 25"); assertUpdate("DROP TABLE " + tableName); } @Test public void testSelectWithNoColumns() { testWithAllStorageFormats(this::testSelectWithNoColumns); } private void testSelectWithNoColumns(Session session, HiveStorageFormat storageFormat) { String tableName = "test_select_with_no_columns"; @Language("SQL") String createTable = format( "CREATE TABLE %s (col0) WITH (format = '%s') AS VALUES 5, 6, 7", tableName, storageFormat); assertUpdate(session, createTable, 3); assertTrue(getQueryRunner().tableExists(getSession(), tableName)); assertQuery("SELECT 1 FROM " + tableName, "VALUES 1, 1, 1"); assertQuery("SELECT count(*) FROM " + tableName, "SELECT 3"); assertUpdate("DROP TABLE " + tableName); } @Test public void testColumnPruning() { Session session = Session.builder(getSession()) .setCatalogSessionProperty(catalog, "orc_use_column_names", "true") .setCatalogSessionProperty(catalog, "parquet_use_column_names", "true") .build(); testWithStorageFormat(new TestingHiveStorageFormat(session, HiveStorageFormat.ORC), this::testColumnPruning); testWithStorageFormat(new TestingHiveStorageFormat(session, HiveStorageFormat.PARQUET), this::testColumnPruning); } @Override protected boolean isColumnNameRejected(Exception exception, String columnName, boolean delimited) { switch (columnName) { case " aleadingspace": return "Hive column names must not start with a space: ' aleadingspace'".equals(exception.getMessage()); case "atrailingspace ": return "Hive column names must not end with a space: 'atrailingspace '".equals(exception.getMessage()); case "a,comma": return "Hive column names must not contain commas: 'a,comma'".equals(exception.getMessage()); } return false; } private void testColumnPruning(Session session, HiveStorageFormat storageFormat) { String tableName = "test_schema_evolution_column_pruning_" + storageFormat.name().toLowerCase(ENGLISH); 
String evolvedTableName = tableName + "_evolved"; assertUpdate(session, "DROP TABLE IF EXISTS " + tableName); assertUpdate(session, "DROP TABLE IF EXISTS " + evolvedTableName); assertUpdate(session, format( "CREATE TABLE %s(" + " a bigint, " + " b varchar, " + " c row(" + " f1 row(" + " g1 bigint," + " g2 bigint), " + " f2 varchar, " + " f3 varbinary), " + " d integer) " + "WITH (format='%s')", tableName, storageFormat)); assertUpdate(session, "INSERT INTO " + tableName + " VALUES (42, 'ala', ROW(ROW(177, 873321), 'ma kota', X'abcdef'), 12345678)", 1); // All data assertQuery( session, "SELECT a, b, c.f1.g1, c.f1.g2, c.f2, c.f3, d FROM " + tableName, "VALUES (42, 'ala', 177, 873321, 'ma kota', X'abcdef', 12345678)"); // Pruning assertQuery( session, "SELECT b, c.f1.g2, c.f3, d FROM " + tableName, "VALUES ('ala', 873321, X'abcdef', 12345678)"); String tableLocation = (String) computeActual("SELECT DISTINCT regexp_replace(\"$path\", '/[^/]*$', '') FROM " + tableName).getOnlyValue(); assertUpdate(session, format( "CREATE TABLE %s(" + " e tinyint, " + // added " a bigint, " + " bxx varchar, " + // renamed " c row(" + " f1 row(" + " g1xx bigint," + // renamed " g2 bigint), " + " f2xx varchar, " + // renamed " f3 varbinary), " + " d integer, " + " f smallint) " + // added "WITH (format='%s', external_location='%s')", evolvedTableName, storageFormat, tableLocation)); // Pruning being an effect of renamed fields (schema evolution) assertQuery( session, "SELECT a, bxx, c.f1.g1xx, c.f1.g2, c.f2xx, c.f3, d, e, f FROM " + evolvedTableName + " t", "VALUES (42, NULL, NULL, 873321, NULL, X'abcdef', 12345678, NULL, NULL)"); assertUpdate(session, "DROP TABLE " + evolvedTableName); assertUpdate(session, "DROP TABLE " + tableName); } @Test public void testUnsupportedCsvTable() { assertQueryFails( "CREATE TABLE create_unsupported_csv(i INT, bound VARCHAR(10), unbound VARCHAR, dummy VARCHAR) WITH (format = 'CSV')", "\\QHive CSV storage format only supports VARCHAR (unbounded). 
Unsupported columns: i integer, bound varchar(10)\\E"); } @Test public void testWriteInvalidPrecisionTimestamp() { Session session = withTimestampPrecision(getSession(), HiveTimestampPrecision.MICROSECONDS); assertQueryFails( session, "CREATE TABLE test_invalid_precision_timestamp(ts) AS SELECT TIMESTAMP '2001-02-03 11:22:33.123456789'", "\\QIncorrect timestamp precision for timestamp(9); the configured precision is " + HiveTimestampPrecision.MICROSECONDS); assertQueryFails( session, "CREATE TABLE test_invalid_precision_timestamp (ts TIMESTAMP(9))", "\\QIncorrect timestamp precision for timestamp(9); the configured precision is " + HiveTimestampPrecision.MICROSECONDS); assertQueryFails( session, "CREATE TABLE test_invalid_precision_timestamp(ts) AS SELECT TIMESTAMP '2001-02-03 11:22:33.123'", "\\QIncorrect timestamp precision for timestamp(3); the configured precision is " + HiveTimestampPrecision.MICROSECONDS); assertQueryFails( session, "CREATE TABLE test_invalid_precision_timestamp (ts TIMESTAMP(3))", "\\QIncorrect timestamp precision for timestamp(3); the configured precision is " + HiveTimestampPrecision.MICROSECONDS); } @Test public void testTimestampPrecisionInsert() { testWithAllStorageFormats(this::testTimestampPrecisionInsert); } private void testTimestampPrecisionInsert(Session session, HiveStorageFormat storageFormat) { if (storageFormat == HiveStorageFormat.AVRO) { // Avro timestamps are stored with millisecond precision return; } String tableName = "test_timestamp_precision_" + randomTableSuffix(); String createTable = "CREATE TABLE " + tableName + " (ts TIMESTAMP) WITH (format = '%s')"; @Language("SQL") String insert = "INSERT INTO " + tableName + " VALUES (TIMESTAMP '%s')"; testTimestampPrecisionWrites( session, tableName, (ts, precision) -> { assertUpdate("DROP TABLE IF EXISTS " + tableName); assertUpdate(format(createTable, storageFormat)); assertUpdate(withTimestampPrecision(session, precision), format(insert, ts), 1); }); } @Test public void 
testTimestampPrecisionCtas() { testWithAllStorageFormats((session, storageFormat) -> testTimestampPrecisionCtas(session, storageFormat)); } private void testTimestampPrecisionCtas(Session session, HiveStorageFormat storageFormat) { if (storageFormat == HiveStorageFormat.AVRO) { // Avro timestamps are stored with millisecond precision return; } String tableName = "test_timestamp_precision_" + randomTableSuffix(); String createTableAs = "CREATE TABLE " + tableName + " WITH (format = '%s') AS SELECT TIMESTAMP '%s' ts"; testTimestampPrecisionWrites( session, tableName, (ts, precision) -> { assertUpdate("DROP TABLE IF EXISTS " + tableName); assertUpdate(withTimestampPrecision(session, precision), format(createTableAs, storageFormat, ts), 1); }); } private void testTimestampPrecisionWrites(Session session, String tableName, BiConsumer<String, HiveTimestampPrecision> populateData) { populateData.accept("2019-02-03 18:30:00.123", HiveTimestampPrecision.MILLISECONDS); @Language("SQL") String sql = "SELECT ts FROM " + tableName; assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MILLISECONDS), sql, "VALUES ('2019-02-03 18:30:00.123')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MICROSECONDS), sql, "VALUES ('2019-02-03 18:30:00.123')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.NANOSECONDS), sql, "VALUES ('2019-02-03 18:30:00.123')"); populateData.accept("2019-02-03 18:30:00.456789", HiveTimestampPrecision.MICROSECONDS); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MILLISECONDS), sql, "VALUES ('2019-02-03 18:30:00.457')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MICROSECONDS), sql, "VALUES ('2019-02-03 18:30:00.456789')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.NANOSECONDS), sql, "VALUES ('2019-02-03 18:30:00.456789000')"); populateData.accept("2019-02-03 18:30:00.456789876", HiveTimestampPrecision.NANOSECONDS); 
assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MILLISECONDS), sql, "VALUES ('2019-02-03 18:30:00.457')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MICROSECONDS), sql, "VALUES ('2019-02-03 18:30:00.456790')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.NANOSECONDS), sql, "VALUES ('2019-02-03 18:30:00.456789876')"); // some rounding edge cases populateData.accept("2019-02-03 18:30:00.999999", HiveTimestampPrecision.MICROSECONDS); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MILLISECONDS), sql, "VALUES ('2019-02-03 18:30:01.000')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MICROSECONDS), sql, "VALUES ('2019-02-03 18:30:00.999999')"); populateData.accept("2019-02-03 18:30:00.999999999", HiveTimestampPrecision.NANOSECONDS); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MILLISECONDS), sql, "VALUES ('2019-02-03 18:30:01.000')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.MICROSECONDS), sql, "VALUES ('2019-02-03 18:30:01.000000')"); assertQuery(withTimestampPrecision(session, HiveTimestampPrecision.NANOSECONDS), sql, "VALUES ('2019-02-03 18:30:00.999999999')"); } @Test public void testSelectFromViewWithoutDefaultCatalogAndSchema() { String viewName = "select_from_view_without_catalog_and_schema_" + randomTableSuffix(); assertUpdate("CREATE VIEW " + viewName + " AS SELECT * FROM nation WHERE nationkey=1"); assertQuery("SELECT count(*) FROM " + viewName, "VALUES 1"); assertQuery("SELECT count(*) FROM hive.tpch." + viewName, "VALUES 1"); Session sessionNoCatalog = Session.builder(getSession()) .setCatalog(Optional.empty()) .setSchema(Optional.empty()) .build(); assertQueryFails(sessionNoCatalog, "SELECT count(*) FROM " + viewName, ".*Schema must be specified when session schema is not set.*"); assertQuery(sessionNoCatalog, "SELECT count(*) FROM hive.tpch." 
+ viewName, "VALUES 1"); } @Test public void testSelectFromPrestoViewReferencingHiveTableWithTimestamps() { Session defaultSession = getSession(); Session millisSession = Session.builder(defaultSession) .setCatalogSessionProperty("hive", "timestamp_precision", "MILLISECONDS") .setCatalogSessionProperty("hive_timestamp_nanos", "timestamp_precision", "MILLISECONDS") .build(); Session nanosSessions = Session.builder(defaultSession) .setCatalogSessionProperty("hive", "timestamp_precision", "NANOSECONDS") .setCatalogSessionProperty("hive_timestamp_nanos", "timestamp_precision", "NANOSECONDS") .build(); // Hive views tests covered in TestHiveViews.testTimestampHiveView and TestHiveViewsLegacy.testTimestampHiveView String tableName = "ts_hive_table_" + randomTableSuffix(); assertUpdate( withTimestampPrecision(defaultSession, HiveTimestampPrecision.NANOSECONDS), "CREATE TABLE " + tableName + " AS SELECT TIMESTAMP '1990-01-02 12:13:14.123456789' ts", 1); // Presto view created with config property set to MILLIS and session property not set String prestoViewNameDefault = "presto_view_ts_default_" + randomTableSuffix(); assertUpdate(defaultSession, "CREATE VIEW " + prestoViewNameDefault + " AS SELECT * FROM " + tableName); assertThat(query(defaultSession, "SELECT ts FROM " + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(defaultSession, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(defaultSession, "SELECT ts FROM hive_timestamp_nanos.tpch." 
+ prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); assertThat(query(millisSession, "SELECT ts FROM " + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); assertThat(query(millisSession, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(nanosSessions, "SELECT ts FROM " + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(nanosSessions, "SELECT ts FROM " + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(nanosSessions, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(nanosSessions, "SELECT ts FROM hive_timestamp_nanos.tpch." 
+ prestoViewNameDefault)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'"); // Presto view created with config property set to MILLIS and session property set to NANOS String prestoViewNameNanos = "presto_view_ts_nanos_" + randomTableSuffix(); assertUpdate(nanosSessions, "CREATE VIEW " + prestoViewNameNanos + " AS SELECT * FROM " + tableName); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(defaultSession, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'") assertThat(query(defaultSession, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(defaultSession, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(defaultSession, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(millisSession, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'") assertThat(query(millisSession, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(millisSession, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123'") assertThat(query(millisSession, "SELECT ts FROM hive_timestamp_nanos.tpch." 
+ prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(nanosSessions, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(nanosSessions, "SELECT ts FROM " + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); // TODO(https://github.com/prestosql/presto/issues/6295) Presto view schema is fixed on creation // should be: assertThat(query(nanosSessions, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123456789'") assertThat(query(nanosSessions, "SELECT ts FROM hive_timestamp_nanos.tpch." + prestoViewNameNanos)).matches("VALUES TIMESTAMP '1990-01-02 12:13:14.123000000'"); } @Test(dataProvider = "legalUseColumnNamesProvider") public void testUseColumnNames(HiveStorageFormat format, boolean formatUseColumnNames) { String lowerCaseFormat = format.name().toLowerCase(Locale.ROOT); Session.SessionBuilder builder = Session.builder(getSession()); if (format == HiveStorageFormat.ORC || format == HiveStorageFormat.PARQUET) { builder.setCatalogSessionProperty(catalog, lowerCaseFormat + "_use_column_names", String.valueOf(formatUseColumnNames)); } Session admin = builder.build(); String tableName = format("test_renames_%s_%s_%s", lowerCaseFormat, formatUseColumnNames, randomTableSuffix()); assertUpdate(admin, format("CREATE TABLE %s (id BIGINT, old_name VARCHAR, age INT, state VARCHAR) WITH (format = '%s', partitioned_by = ARRAY['state'])", tableName, format)); assertUpdate(admin, format("INSERT INTO %s VALUES(111, 'Katy', 57, 'CA')", tableName), 1); assertQuery(admin, "SELECT * FROM " + tableName, "VALUES(111, 'Katy', 57, 'CA')"); assertUpdate(admin, format("ALTER TABLE %s RENAME COLUMN old_name TO new_name", tableName)); boolean canSeeOldData = 
!formatUseColumnNames && !NAMED_COLUMN_ONLY_FORMATS.contains(format); String katyValue = canSeeOldData ? "'Katy'" : "null"; assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, %s, 57, 'CA')", katyValue)); assertUpdate(admin, format("INSERT INTO %s (id, new_name, age, state) VALUES(333, 'Cary', 35, 'WA')", tableName), 1); assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, %s, 57, 'CA'), (333, 'Cary', 35, 'WA')", katyValue)); assertUpdate(admin, format("ALTER TABLE %s RENAME COLUMN new_name TO old_name", tableName)); String caryValue = canSeeOldData ? "'Cary'" : null; assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, 'Katy', 57, 'CA'), (333, %s, 35, 'WA')", caryValue)); assertUpdate("DROP TABLE " + tableName); } @Test(dataProvider = "legalUseColumnNamesProvider") public void testUseColumnAddDrop(HiveStorageFormat format, boolean formatUseColumnNames) { String lowerCaseFormat = format.name().toLowerCase(Locale.ROOT); Session.SessionBuilder builder = Session.builder(getSession()); if (format == HiveStorageFormat.ORC || format == HiveStorageFormat.PARQUET) { builder.setCatalogSessionProperty(catalog, lowerCaseFormat + "_use_column_names", String.valueOf(formatUseColumnNames)); } Session admin = builder.build(); String tableName = format("test_add_drop_%s_%s_%s", lowerCaseFormat, formatUseColumnNames, randomTableSuffix()); assertUpdate(admin, format("CREATE TABLE %s (id BIGINT, old_name VARCHAR, age INT, state VARCHAR) WITH (format = '%s')", tableName, format)); assertUpdate(admin, format("INSERT INTO %s VALUES(111, 'Katy', 57, 'CA')", tableName), 1); assertQuery(admin, "SELECT * FROM " + tableName, "VALUES(111, 'Katy', 57, 'CA')"); assertUpdate(admin, format("ALTER TABLE %s DROP COLUMN state", tableName)); assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, 'Katy', 57)")); assertUpdate(admin, format("INSERT INTO %s VALUES(333, 'Cary', 35)", tableName), 1); assertQuery(admin, "SELECT * FROM " + 
tableName, "VALUES(111, 'Katy', 57), (333, 'Cary', 35)"); assertUpdate(admin, format("ALTER TABLE %s ADD COLUMN state VARCHAR", tableName)); assertQuery(admin, "SELECT * FROM " + tableName, "VALUES(111, 'Katy', 57, 'CA'), (333, 'Cary', 35, null)"); assertUpdate(admin, format("ALTER TABLE %s DROP COLUMN state", tableName)); assertQuery(admin, "SELECT * FROM " + tableName, "VALUES(111, 'Katy', 57), (333, 'Cary', 35)"); assertUpdate(admin, format("ALTER TABLE %s ADD COLUMN new_state VARCHAR", tableName)); boolean canSeeOldData = !formatUseColumnNames && !NAMED_COLUMN_ONLY_FORMATS.contains(format); String katyState = canSeeOldData ? "'CA'" : "null"; assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, 'Katy', 57, %s), (333, 'Cary', 35, null)", katyState)); if (formatUseColumnNames) { assertUpdate(admin, format("ALTER TABLE %s DROP COLUMN age", tableName)); assertQuery(admin, "SELECT * FROM " + tableName, format("VALUES(111, 'Katy', %s), (333, 'Cary', null)", katyState)); assertUpdate(admin, format("ALTER TABLE %s ADD COLUMN age INT", tableName)); assertQuery(admin, "SELECT * FROM " + tableName, "VALUES(111, 'Katy', null, 57), (333, 'Cary', null, 35)"); } assertUpdate("DROP TABLE " + tableName); } @Test public void testExplainOfCreateTableAs() { String query = "CREATE TABLE copy_orders AS SELECT * FROM orders"; MaterializedResult result = computeActual("EXPLAIN " + query); assertEquals(getOnlyElement(result.getOnlyColumnAsSet()), getExplainPlan(query, DISTRIBUTED)); } private static final Set<HiveStorageFormat> NAMED_COLUMN_ONLY_FORMATS = ImmutableSet.of(HiveStorageFormat.AVRO, HiveStorageFormat.JSON); @DataProvider public Object[][] legalUseColumnNamesProvider() { return new Object[][] { {HiveStorageFormat.ORC, true}, {HiveStorageFormat.ORC, false}, {HiveStorageFormat.PARQUET, true}, {HiveStorageFormat.PARQUET, false}, {HiveStorageFormat.AVRO, false}, {HiveStorageFormat.JSON, false}, {HiveStorageFormat.RCBINARY, false}, {HiveStorageFormat.RCTEXT, 
false}, {HiveStorageFormat.SEQUENCEFILE, false}, {HiveStorageFormat.TEXTFILE, false}, }; } private Session getParallelWriteSession() { return Session.builder(getSession()) .setSystemProperty("task_writer_count", "4") .build(); } private void assertOneNotNullResult(@Language("SQL") String query) { assertOneNotNullResult(getSession(), query); } private void assertOneNotNullResult(Session session, @Language("SQL") String query) { MaterializedResult results = getQueryRunner().execute(session, query).toTestTypes(); assertEquals(results.getRowCount(), 1); assertEquals(results.getMaterializedRows().get(0).getFieldCount(), 1); assertNotNull(results.getMaterializedRows().get(0).getField(0)); } private Type canonicalizeType(Type type) { return TYPE_MANAGER.getType(toHiveType(type).getTypeSignature()); } private void assertColumnType(TableMetadata tableMetadata, String columnName, Type expectedType) { assertEquals(tableMetadata.getColumn(columnName).getType(), canonicalizeType(expectedType)); } private void assertConstraints(@Language("SQL") String query, Set<ColumnConstraint> expected) { MaterializedResult result = computeActual("EXPLAIN (TYPE IO, FORMAT JSON) " + query); Set<ColumnConstraint> constraints = getIoPlanCodec().fromJson((String) getOnlyElement(result.getOnlyColumnAsSet())) .getInputTableColumnInfos().stream() .findFirst().get() .getColumnConstraints(); assertTrue(constraints.containsAll(expected)); } private void verifyPartition(boolean hasPartition, TableMetadata tableMetadata, List<String> partitionKeys) { Object partitionByProperty = tableMetadata.getMetadata().getProperties().get(PARTITIONED_BY_PROPERTY); if (hasPartition) { assertEquals(partitionByProperty, partitionKeys); for (ColumnMetadata columnMetadata : tableMetadata.getColumns()) { boolean partitionKey = partitionKeys.contains(columnMetadata.getName()); assertEquals(columnMetadata.getExtraInfo(), columnExtraInfo(partitionKey)); } } else { assertNull(partitionByProperty); } } private void rollback() { 
throw new RollbackException(); } private static class RollbackException extends RuntimeException { } private void testWithAllStorageFormats(BiConsumer<Session, HiveStorageFormat> test) { for (TestingHiveStorageFormat storageFormat : getAllTestingHiveStorageFormat()) { testWithStorageFormat(storageFormat, test); } } private static void testWithStorageFormat(TestingHiveStorageFormat storageFormat, BiConsumer<Session, HiveStorageFormat> test) { requireNonNull(storageFormat, "storageFormat is null"); requireNonNull(test, "test is null"); Session session = storageFormat.getSession(); try { test.accept(session, storageFormat.getFormat()); } catch (Exception | AssertionError e) { fail(format("Failure for format %s with properties %s / %s", storageFormat.getFormat(), session.getConnectorProperties(), session.getUnprocessedCatalogProperties()), e); } } private boolean isNativeParquetWriter(Session session, HiveStorageFormat storageFormat) { return storageFormat == HiveStorageFormat.PARQUET && ("true".equals(session.getConnectorProperties(new CatalogName("hive")).get("experimental_parquet_optimized_writer_enabled")) || "true".equals(session.getUnprocessedCatalogProperties().getOrDefault("hive", Map.of()).get("experimental_parquet_optimized_writer_enabled"))); } private List<TestingHiveStorageFormat> getAllTestingHiveStorageFormat() { Session session = getSession(); String catalog = session.getCatalog().orElseThrow(); ImmutableList.Builder<TestingHiveStorageFormat> formats = ImmutableList.builder(); for (HiveStorageFormat hiveStorageFormat : HiveStorageFormat.values()) { if (hiveStorageFormat == HiveStorageFormat.CSV) { // CSV supports only unbounded VARCHAR type continue; } if (hiveStorageFormat == HiveStorageFormat.PARQUET) { formats.add(new TestingHiveStorageFormat( Session.builder(session) .setCatalogSessionProperty(catalog, "experimental_parquet_optimized_writer_enabled", "false") .build(), hiveStorageFormat)); formats.add(new TestingHiveStorageFormat( 
Session.builder(session) .setCatalogSessionProperty(catalog, "experimental_parquet_optimized_writer_enabled", "true") .build(), hiveStorageFormat)); continue; } formats.add(new TestingHiveStorageFormat(session, hiveStorageFormat)); } return formats.build(); } private JsonCodec<IoPlan> getIoPlanCodec() { ObjectMapperProvider objectMapperProvider = new ObjectMapperProvider(); objectMapperProvider.setJsonDeserializers(ImmutableMap.of(Type.class, new TypeDeserializer(getQueryRunner().getMetadata()))); return new JsonCodecFactory(objectMapperProvider).jsonCodec(IoPlan.class); } private static class TestingHiveStorageFormat { private final Session session; private final HiveStorageFormat format; TestingHiveStorageFormat(Session session, HiveStorageFormat format) { this.session = requireNonNull(session, "session is null"); this.format = requireNonNull(format, "format is null"); } public Session getSession() { return session; } public HiveStorageFormat getFormat() { return format; } } private static class TypeAndEstimate { public final Type type; public final EstimatedStatsAndCost estimate; public TypeAndEstimate(Type type, EstimatedStatsAndCost estimate) { this.type = requireNonNull(type, "type is null"); this.estimate = requireNonNull(estimate, "estimate is null"); } } private static class ExponentialSleeper { private Duration nextSleepTime; private final Duration maxSleepTime; private final Duration minSleepIncrement; private final double sleepIncrementFactor; ExponentialSleeper(Duration minSleepTime, Duration maxSleepTime, Duration minSleepIncrement, double sleepIncrementFactor) { this.nextSleepTime = minSleepTime; this.maxSleepTime = maxSleepTime; this.minSleepIncrement = minSleepIncrement; this.sleepIncrementFactor = sleepIncrementFactor; } ExponentialSleeper() { this( new Duration(0, SECONDS), new Duration(5, SECONDS), new Duration(100, MILLISECONDS), 2.0); } public void sleep() { try { Thread.sleep(nextSleepTime.toMillis()); long incrementMillis = (long) 
(nextSleepTime.toMillis() * sleepIncrementFactor - nextSleepTime.toMillis()); if (incrementMillis < minSleepIncrement.toMillis()) { incrementMillis = minSleepIncrement.toMillis(); } nextSleepTime = new Duration(nextSleepTime.toMillis() + incrementMillis, MILLISECONDS); if (nextSleepTime.compareTo(maxSleepTime) > 0) { nextSleepTime = maxSleepTime; } } catch (InterruptedException e) { Thread.currentThread().interrupt(); throw new RuntimeException(e); } } } @DataProvider public Object[][] timestampPrecision() { return new Object[][] { {HiveTimestampPrecision.MILLISECONDS}, {HiveTimestampPrecision.MICROSECONDS}, {HiveTimestampPrecision.NANOSECONDS}}; } @Override protected Optional<DataMappingTestSetup> filterDataMappingSmokeTestData(DataMappingTestSetup dataMappingTestSetup) { String typeName = dataMappingTestSetup.getTrinoTypeName(); if (typeName.equals("time") || typeName.equals("timestamp(3) with time zone")) { return Optional.of(dataMappingTestSetup.asUnsupported()); } return Optional.of(dataMappingTestSetup); } @Override protected TestTable createTableWithDefaultColumns() { throw new SkipException("Hive connector does not support column default values"); } private Session withTimestampPrecision(Session session, HiveTimestampPrecision precision) { return Session.builder(session) .setCatalogSessionProperty(catalog, "timestamp_precision", precision.name()) .build(); } }
dain/presto
plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveConnectorTest.java
Java
apache-2.0
437,398
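The ExponentialSleeper above grows the next sleep time by a multiplicative factor, enforces a minimum increment, and caps at a maximum. A minimal, self-contained sketch of just that schedule arithmetic (the `BackoffSchedule` class and `schedule` method are illustrative, not part of the test suite):

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSchedule {
    // Reproduces the arithmetic in ExponentialSleeper.sleep(): each step
    // records `next`, then grows `next` by (factor - 1) * next, by at
    // least minIncrementMs, capped at maxMs.
    static List<Long> schedule(long minMs, long maxMs, long minIncrementMs,
                               double factor, int steps) {
        List<Long> sleeps = new ArrayList<>();
        long next = minMs;
        for (int i = 0; i < steps; i++) {
            sleeps.add(next);
            long increment = (long) (next * factor - next);
            if (increment < minIncrementMs) {
                increment = minIncrementMs;
            }
            next = Math.min(next + increment, maxMs);
        }
        return sleeps;
    }

    public static void main(String[] args) {
        // Defaults from the no-arg constructor: 0 s min, 5 s max,
        // 100 ms minimum increment, doubling factor.
        System.out.println(schedule(0, 5000, 100, 2.0, 9));
        // -> [0, 100, 200, 400, 800, 1600, 3200, 5000, 5000]
    }
}
```

This shows why the sleeper ramps quickly from sub-second waits to its five-second ceiling after about seven retries.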
/** * Copyright &copy; 2012-2016 <a href="https://github.com/thinkgem/jeesite">JeeSite</a> All rights reserved. */ package cn.tomsnail.dev.console.config.indexpage.dao; import com.thinkgem.jeesite.common.persistence.TreeDao; import com.thinkgem.jeesite.common.persistence.annotation.MyBatisDao; import cn.tomsnail.dev.console.config.indexpage.entity.JsIndex; /** * Index page configuration DAO interface * @author yangsong * @version 2017-12-20 */ @MyBatisDao public interface JsIndexDao extends TreeDao<JsIndex> { }
tomsnail/snail-dev-console
src/main/java/cn/tomsnail/dev/console/config/indexpage/dao/JsIndexDao.java
Java
apache-2.0
509
// Copyright 2017 The Grin Developers // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. //! Low-Level manager for loading and unloading plugins. These functions //! should generally not be called directly by most consumers, who should //! be using the high level interfaces found in the config, manager, and //! miner modules. These functions are meant for internal cuckoo-miner crates, //! and will not be exposed to other projects including the cuckoo-miner crate. //! //! Note that plugins are shared libraries, not objects. You can have multiple //! instances of a PluginLibrary, but all of them will reference the same //! loaded code. Plugins aren't threadsafe, so only one thread should ever //! be calling a particular plugin at a time. 
use std::sync::Mutex; use libloading; use libc::*; use error::error::CuckooMinerError; // PRIVATE MEMBERS // Type definitions corresponding to each function that the plugin implements type CuckooInit = unsafe extern "C" fn(); type CuckooCall = unsafe extern "C" fn(*const c_uchar, uint32_t, *mut uint32_t, *mut uint32_t) -> uint32_t; type CuckooParameterList = unsafe extern "C" fn(*mut c_uchar, *mut uint32_t) -> uint32_t; type CuckooSetParameter = unsafe extern "C" fn(*const c_uchar, uint32_t, uint32_t, uint32_t) -> uint32_t; type CuckooGetParameter = unsafe extern "C" fn(*const c_uchar, uint32_t, uint32_t, *mut uint32_t) -> uint32_t; type CuckooIsQueueUnderLimit = unsafe extern "C" fn() -> uint32_t; type CuckooPushToInputQueue = unsafe extern "C" fn(uint32_t, *const c_uchar, uint32_t, *const c_uchar) -> uint32_t; type CuckooReadFromOutputQueue = unsafe extern "C" fn(*mut uint32_t, *mut uint32_t, *mut uint32_t, *mut c_uchar) -> uint32_t; type CuckooClearQueues = unsafe extern "C" fn(); type CuckooStartProcessing = unsafe extern "C" fn() -> uint32_t; type CuckooStopProcessing = unsafe extern "C" fn() -> uint32_t; type CuckooResetProcessing = unsafe extern "C" fn() -> uint32_t; type CuckooHasProcessingStopped = unsafe extern "C" fn() -> uint32_t; type CuckooGetStats = unsafe extern "C" fn(*mut c_uchar, *mut uint32_t) -> uint32_t; /// Struct to hold instances of loaded plugins pub struct PluginLibrary { ///The full file path to the plugin loaded by this instance pub lib_full_path: String, loaded_library: Mutex<libloading::Library>, cuckoo_init: Mutex<CuckooInit>, cuckoo_call: Mutex<CuckooCall>, cuckoo_parameter_list: Mutex<CuckooParameterList>, cuckoo_get_parameter: Mutex<CuckooGetParameter>, cuckoo_set_parameter: Mutex<CuckooSetParameter>, cuckoo_is_queue_under_limit: Mutex<CuckooIsQueueUnderLimit>, cuckoo_clear_queues: Mutex<CuckooClearQueues>, cuckoo_push_to_input_queue: Mutex<CuckooPushToInputQueue>, cuckoo_read_from_output_queue: Mutex<CuckooReadFromOutputQueue>, 
cuckoo_start_processing: Mutex<CuckooStartProcessing>, cuckoo_stop_processing: Mutex<CuckooStopProcessing>, cuckoo_reset_processing: Mutex<CuckooResetProcessing>, cuckoo_has_processing_stopped: Mutex<CuckooHasProcessingStopped>, cuckoo_get_stats: Mutex<CuckooGetStats>, } impl PluginLibrary { //Loads the library at the specified path /// #Description /// /// Loads the specified library, readying it for use /// via the exposed wrapper functions. A plugin can be /// loaded into multiple PluginLibrary instances, however /// they will all reference the same loaded library. One /// should only exist per library in a given thread. /// /// #Arguments /// /// * `lib_full_path` The full path to the library that is /// to be loaded. /// /// #Returns /// /// * `Ok()` is the library was successfully loaded. /// * a [CuckooMinerError](enum.CuckooMinerError.html) /// with specific detail if an error was encountered. /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// pl.call_cuckoo_init(); /// ``` /// pub fn new(lib_full_path: &str) -> Result<PluginLibrary, CuckooMinerError> { debug!("Loading miner plugin: {}", &lib_full_path); let result = libloading::Library::new(lib_full_path); if let Err(e) = result { return Err(CuckooMinerError::PluginNotFoundError( String::from(format!("{} - {:?}", lib_full_path, e)), )); } let loaded_library = result.unwrap(); PluginLibrary::load_symbols(loaded_library, lib_full_path) } fn load_symbols( loaded_library: libloading::Library, path: &str ) -> Result<PluginLibrary, CuckooMinerError> { unsafe { let ret_val = PluginLibrary { lib_full_path: String::from(path), cuckoo_init: { let cuckoo_init: 
libloading::Symbol<CuckooInit> = loaded_library.get(b"cuckoo_init\0").unwrap(); Mutex::new(*cuckoo_init.into_raw()) }, cuckoo_call: { let cuckoo_call: libloading::Symbol<CuckooCall> = loaded_library.get(b"cuckoo_call\0").unwrap(); Mutex::new(*cuckoo_call.into_raw()) }, cuckoo_parameter_list: { let cuckoo_parameter_list:libloading::Symbol<CuckooParameterList> = loaded_library.get(b"cuckoo_parameter_list\0").unwrap(); Mutex::new(*cuckoo_parameter_list.into_raw()) }, cuckoo_get_parameter: { let cuckoo_get_parameter:libloading::Symbol<CuckooGetParameter> = loaded_library.get(b"cuckoo_get_parameter\0").unwrap(); Mutex::new(*cuckoo_get_parameter.into_raw()) }, cuckoo_set_parameter: { let cuckoo_set_parameter:libloading::Symbol<CuckooSetParameter> = loaded_library.get(b"cuckoo_set_parameter\0").unwrap(); Mutex::new(*cuckoo_set_parameter.into_raw()) }, cuckoo_is_queue_under_limit: { let cuckoo_is_queue_under_limit:libloading::Symbol<CuckooIsQueueUnderLimit> = loaded_library.get(b"cuckoo_is_queue_under_limit\0").unwrap(); Mutex::new(*cuckoo_is_queue_under_limit.into_raw()) }, cuckoo_clear_queues: { let cuckoo_clear_queues:libloading::Symbol<CuckooClearQueues> = loaded_library.get(b"cuckoo_clear_queues\0").unwrap(); Mutex::new(*cuckoo_clear_queues.into_raw()) }, cuckoo_push_to_input_queue: { let cuckoo_push_to_input_queue:libloading::Symbol<CuckooPushToInputQueue> = loaded_library.get(b"cuckoo_push_to_input_queue\0").unwrap(); Mutex::new(*cuckoo_push_to_input_queue.into_raw()) }, cuckoo_read_from_output_queue: { let cuckoo_read_from_output_queue:libloading::Symbol<CuckooReadFromOutputQueue> = loaded_library.get(b"cuckoo_read_from_output_queue\0").unwrap(); Mutex::new(*cuckoo_read_from_output_queue.into_raw()) }, cuckoo_start_processing: { let cuckoo_start_processing:libloading::Symbol<CuckooStartProcessing> = loaded_library.get(b"cuckoo_start_processing\0").unwrap(); Mutex::new(*cuckoo_start_processing.into_raw()) }, cuckoo_stop_processing: { let 
cuckoo_stop_processing:libloading::Symbol<CuckooStopProcessing> = loaded_library.get(b"cuckoo_stop_processing\0").unwrap(); Mutex::new(*cuckoo_stop_processing.into_raw()) }, cuckoo_reset_processing: { let cuckoo_reset_processing:libloading::Symbol<CuckooResetProcessing> = loaded_library.get(b"cuckoo_reset_processing\0").unwrap(); Mutex::new(*cuckoo_reset_processing.into_raw()) }, cuckoo_has_processing_stopped: { let cuckoo_has_processing_stopped:libloading::Symbol<CuckooHasProcessingStopped> = loaded_library.get(b"cuckoo_has_processing_stopped\0").unwrap(); Mutex::new(*cuckoo_has_processing_stopped.into_raw()) }, cuckoo_get_stats: { let cuckoo_get_stats: libloading::Symbol<CuckooGetStats> = loaded_library.get(b"cuckoo_get_stats\0").unwrap(); Mutex::new(*cuckoo_get_stats.into_raw()) }, loaded_library: Mutex::new(loaded_library), }; ret_val.call_cuckoo_init(); return Ok(ret_val); } } /// #Description /// /// Unloads the currently loaded plugin and all symbols. /// /// #Arguments /// /// None /// /// #Returns /// /// Nothing /// pub fn unload(&self) { let cuckoo_get_parameter_ref = self.cuckoo_get_parameter.lock().unwrap(); drop(cuckoo_get_parameter_ref); let cuckoo_set_parameter_ref = self.cuckoo_set_parameter.lock().unwrap(); drop(cuckoo_set_parameter_ref); let cuckoo_parameter_list_ref = self.cuckoo_parameter_list.lock().unwrap(); drop(cuckoo_parameter_list_ref); let cuckoo_call_ref = self.cuckoo_call.lock().unwrap(); drop(cuckoo_call_ref); let cuckoo_is_queue_under_limit_ref = self.cuckoo_is_queue_under_limit.lock().unwrap(); drop(cuckoo_is_queue_under_limit_ref); let cuckoo_clear_queues_ref = self.cuckoo_clear_queues.lock().unwrap(); drop(cuckoo_clear_queues_ref); let cuckoo_push_to_input_queue_ref = self.cuckoo_push_to_input_queue.lock().unwrap(); drop(cuckoo_push_to_input_queue_ref); let cuckoo_read_from_output_queue_ref = self.cuckoo_read_from_output_queue.lock().unwrap(); drop(cuckoo_read_from_output_queue_ref); let cuckoo_start_processing_ref = 
self.cuckoo_start_processing.lock().unwrap(); drop(cuckoo_start_processing_ref); let cuckoo_stop_processing_ref = self.cuckoo_stop_processing.lock().unwrap(); drop(cuckoo_stop_processing_ref); let cuckoo_reset_processing_ref = self.cuckoo_reset_processing.lock().unwrap(); drop(cuckoo_reset_processing_ref); let cuckoo_has_processing_stopped_ref = self.cuckoo_has_processing_stopped.lock().unwrap(); drop(cuckoo_has_processing_stopped_ref); let cuckoo_get_stats_ref = self.cuckoo_get_stats.lock().unwrap(); drop(cuckoo_get_stats_ref); let loaded_library_ref = self.loaded_library.lock().unwrap(); drop(loaded_library_ref); } /// #Description /// /// Initialises the cuckoo plugin, mostly allowing it to write a list of /// its accepted parameters. This should be called just after the plugin /// is loaded, and before anything else is called. /// /// #Arguments /// /// * None /// /// #Returns /// /// * Nothing /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// pl.call_cuckoo_init(); /// ``` /// pub fn call_cuckoo_init(&self) { let cuckoo_init_ref = self.cuckoo_init.lock().unwrap(); unsafe { cuckoo_init_ref(); }; } /// #Description /// /// Call to the cuckoo_call function of the currently loaded plugin, which /// will perform a Cuckoo Cycle on the given seed, returning the first /// solution (a length 42 cycle) that is found. The implementation details /// are dependent on particular loaded plugin. /// /// #Arguments /// /// * `header` (IN) A reference to a block of [u8] bytes to use for the /// seed to the internal SIPHASH function which generates edge locations /// in the graph. 
In practice, this is a Grin blockheader, /// but from the plugin's perspective this can be anything. /// /// * `solutions` (OUT) A caller-allocated array of 42 unsigned 32-bit integers. This /// currently must be of size 42, corresponding to a conventional /// cuckoo-cycle solution length. If a solution is found, the solution /// nonces will be stored in this array, otherwise, they will be left /// untouched. /// /// #Returns /// /// 1 if a solution is found, with the 42 solution nonces contained /// within `sol_nonces`. 0 if no solution is found and `sol_nonces` /// remains untouched. /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl = PluginLibrary::new(plugin_path).unwrap(); /// let header:[u8;40] = [0;40]; /// let mut solution:[u32; 42] = [0;42]; /// let mut cuckoo_size = 0; /// let result=pl.call_cuckoo(&header, &mut cuckoo_size, &mut solution); /// if result==1 { /// println!("Solution Found!"); /// } else { /// println!("No Solution Found"); /// } /// /// ``` /// pub fn call_cuckoo(&self, header: &[u8], cuckoo_size: &mut u32, solutions: &mut [u32; 42]) -> u32 { let cuckoo_call_ref = self.cuckoo_call.lock().unwrap(); unsafe { cuckoo_call_ref(header.as_ptr(), header.len() as u32, cuckoo_size, solutions.as_mut_ptr()) } } /// #Description /// /// Call to the cuckoo_call_parameter_list function of the currently loaded /// plugin, which will provide an informative JSON array of the parameters that the /// plugin supports, as well as their descriptions and range of values.
/// /// #Arguments /// /// * `param_list_bytes` (OUT) A reference to a block of [u8] bytes to fill /// with the JSON result array /// /// * `param_list_len` (IN-OUT) When called, this should contain the /// maximum number of bytes the plugin should write to `param_list_bytes`. /// Upon return, this is filled with the number of bytes that were written to /// `param_list_bytes`. /// /// #Returns /// /// 0 if okay, with the result is stored in `param_list_bytes` /// 3 if the provided array is too short /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// pl.call_cuckoo_init(); /// let mut param_list_bytes:[u8;1024]=[0;1024]; /// let mut param_list_len=param_list_bytes.len() as u32; /// //get a list of json parameters /// let parameter_list=pl.call_cuckoo_parameter_list(&mut param_list_bytes, /// &mut param_list_len); /// ``` /// pub fn call_cuckoo_parameter_list( &self, param_list_bytes: &mut [u8], param_list_len: &mut u32, ) -> u32 { let cuckoo_parameter_list_ref = self.cuckoo_parameter_list.lock().unwrap(); unsafe { cuckoo_parameter_list_ref(param_list_bytes.as_mut_ptr(), param_list_len) } } /// #Description /// /// Retrieves the value of a parameter from the currently loaded plugin /// /// #Arguments /// /// * `name_bytes` (IN) A reference to a block of [u8] bytes storing the /// parameter name /// /// * `device_id` (IN) The device ID to which the parameter applies (if applicable) /// * `value` (OUT) A reference where the parameter value will be stored /// /// #Returns /// /// 0 if the parameter was retrived, and the result is stored in `value` /// 1 if the parameter does not exist /// 4 if the provided 
parameter name was too long /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// pl.call_cuckoo_init(); /// let name = "NUM_THREADS"; /// let mut num_threads:u32 = 0; /// let ret_val = pl.call_cuckoo_get_parameter(name.as_bytes(), 0, &mut num_threads); /// ``` /// pub fn call_cuckoo_get_parameter(&self, name_bytes: &[u8], device_id: u32, value: &mut u32) -> u32 { let cuckoo_get_parameter_ref = self.cuckoo_get_parameter.lock().unwrap(); unsafe { cuckoo_get_parameter_ref(name_bytes.as_ptr(), name_bytes.len() as u32, device_id, value) } } /// Sets the value of a parameter in the currently loaded plugin /// /// #Arguments /// /// * `name_bytes` (IN) A reference to a block of [u8] bytes storing the /// parameter name /// /// * `device_id` (IN) The deviceID to which the parameter applies (if applicable) /// * `value` (IN) The value to which to set the parameter /// /// #Returns /// /// 0 if the parameter was retrieved, and the result is stored in `value` /// 1 if the parameter does not exist /// 2 if the parameter exists, but the provided value is outside the /// allowed range determined by the plugin /// 4 if the provided parameter name is too long /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// let name = "NUM_THREADS"; /// let return_code = 
pl.call_cuckoo_set_parameter(name.as_bytes(), 0, 8); /// ``` /// pub fn call_cuckoo_set_parameter(&self, name_bytes: &[u8], device_id: u32, value: u32) -> u32 { let cuckoo_set_parameter_ref = self.cuckoo_set_parameter.lock().unwrap(); unsafe { cuckoo_set_parameter_ref(name_bytes.as_ptr(), name_bytes.len() as u32, device_id, value) } } /// #Description /// /// For Async/Queued mode, check whether the plugin is ready /// to accept more headers. /// /// #Arguments /// /// * None /// /// #Returns /// /// * 1 if the queue can accept more hashes, 0 otherwise /// pub fn call_cuckoo_is_queue_under_limit(&self) -> u32 { let cuckoo_is_queue_under_limit_ref = self.cuckoo_is_queue_under_limit.lock().unwrap(); unsafe { cuckoo_is_queue_under_limit_ref() } } /// #Description /// /// Pushes header data to the loaded plugin for later processing in /// asynchronous/queued mode. /// /// #Arguments /// /// * `id` (IN) An identifier for this header, returned alongside any /// solutions found for it /// /// * `data` (IN) A block of bytes to use for the seed to the internal /// SIPHASH function which generates edge locations in the graph. In /// practice, this is a Grin blockheader, but from the /// plugin's perspective this can be anything.
/// /// * `nonce` (IN) The nonce that was used to generate this data, for /// identification purposes in the solution queue /// /// #Returns /// /// 0 if the hash was successfully added to the queue /// 1 if the queue is full /// 2 if the length of the data is greater than the plugin allows /// 4 if the plugin has been told to shut down /// /// #Unsafe /// /// Provided values are copied within the plugin, and will not be /// modified /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// //Processing started after call to cuckoo_start_processing() /// //a test hash of zeroes /// let hash:[u8;32]=[0;32]; /// //test nonce (u64, basically) should be unique /// let nonce:[u8;8]=[0;8]; /// let result=pl.call_cuckoo_push_to_input_queue(0, &hash, &nonce); /// ``` /// pub fn call_cuckoo_push_to_input_queue(&self, id: u32, data: &[u8], nonce: &[u8;8]) -> u32 { let cuckoo_push_to_input_queue_ref = self.cuckoo_push_to_input_queue.lock().unwrap(); unsafe { cuckoo_push_to_input_queue_ref(id, data.as_ptr(), data.len() as u32, nonce.as_ptr()) } } /// #Description /// /// Clears internal queues of all data /// /// #Arguments /// /// * None /// /// #Returns /// /// * Nothing /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// //Processing started after call to cuckoo_start_processing()
/// //a test hash of zeroes /// let hash:[u8;32]=[0;32]; /// //test nonce (u64, basically) should be unique /// let nonce:[u8;8]=[0;8]; /// let result=pl.call_cuckoo_push_to_input_queue(0, &hash, &nonce); /// //clear queues /// pl.call_cuckoo_clear_queues(); /// ``` /// pub fn call_cuckoo_clear_queues(&self) { let cuckoo_clear_queues_ref = self.cuckoo_clear_queues.lock().unwrap(); unsafe { cuckoo_clear_queues_ref() } } /// #Description /// /// Reads the next solution from the output queue, if one exists. Only /// solutions which meet the target difficulty specified in the preceding /// call to 'notify' will be placed in the output queue. Read solutions /// are popped from the queue. Does not block, and is intended to be called /// continually as part of a mining loop. /// /// #Arguments /// /// * `sol_nonces` (OUT) A block of 42 u32s in which the solution nonces /// will be stored, if any exist. /// /// * `nonce` (OUT) A block of 8 u8s representing a Big-Endian u64, used /// for identification purposes so the caller can reconstruct the header /// used to generate the solution.
/// /// #Returns /// /// 1 if a solution was popped from the queue /// 0 if a solution is not available /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// //Processing started after call to cuckoo_start_processing() /// //a test hash of zeroes /// let hash:[u8;32]=[0;32]; /// //test nonce (u64, basically) should be unique /// let nonce:[u8;8]=[0;8]; /// let result=pl.call_cuckoo_push_to_input_queue(0, &hash, &nonce); /// /// //within loop /// let mut id = 0; /// let mut sols:[u32; 42] = [0; 42]; /// let mut nonce: [u8; 8] = [0;8]; /// let mut cuckoo_size = 0; /// let found = pl.call_cuckoo_read_from_output_queue(&mut id, &mut sols, &mut cuckoo_size, &mut nonce); /// ``` /// pub fn call_cuckoo_read_from_output_queue( &self, id: &mut u32, solutions: &mut [u32; 42], cuckoo_size: &mut u32, nonce: &mut [u8; 8], ) -> u32 { let cuckoo_read_from_output_queue_ref = self.cuckoo_read_from_output_queue.lock().unwrap(); let ret = unsafe { cuckoo_read_from_output_queue_ref(id, solutions.as_mut_ptr(), cuckoo_size, nonce.as_mut_ptr()) }; ret } /// #Description /// /// Starts asynchronous processing. The plugin will start reading hashes /// from the input queue, delegate them internally as it sees fit, and /// put solutions into the output queue. It is up to the plugin /// implementation to manage how the workload is spread across /// devices/threads. Once processing is started, communication with /// the started process happens via reading and writing from the /// input and output queues.
/// /// #Arguments /// /// * None /// /// #Returns /// /// * 1 if processing was successfully started /// * Another value if processing failed to start (return codes TBD) /// /// #Unsafe /// /// The caller is responsible for calling call_cuckoo_stop_processing() /// before exiting its thread, which will signal the internally detached /// thread to stop processing, clean up, and exit. /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// let ret_val=pl.call_cuckoo_start_processing(); /// ``` pub fn call_cuckoo_start_processing(&self) -> u32 { let cuckoo_start_processing_ref = self.cuckoo_start_processing.lock().unwrap(); unsafe { cuckoo_start_processing_ref() } } /// #Description /// /// Stops asynchronous processing. The plugin should signal to shut down /// processing, as quickly as possible, clean up all threads/devices/memory /// it may have allocated, and clear its queues. Note this merely sets /// a flag indicating that the threads started by 'cuckoo_start_processing' /// should shut down, and will return instantly. Use 'cuckoo_has_processing_stopped' /// to check on the shutdown status. /// /// #Arguments /// /// * None /// /// #Returns /// /// * 1 in all cases, indicating the stop flag was set.
/// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// let mut ret_val=pl.call_cuckoo_start_processing(); /// //Send data into queue, read results, etc /// ret_val=pl.call_cuckoo_stop_processing(); /// while pl.call_cuckoo_has_processing_stopped() == 0 { /// //don't continue/exit thread until plugin is stopped /// } /// ``` pub fn call_cuckoo_stop_processing(&self) -> u32 { let cuckoo_stop_processing_ref = self.cuckoo_stop_processing.lock().unwrap(); unsafe { cuckoo_stop_processing_ref() } } /// #Description /// /// Resets the internal processing flag so that processing may begin again. /// /// #Arguments /// /// * None /// /// #Returns /// /// * 1 in all cases, indicating the stop flag was reset /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// let mut ret_val=pl.call_cuckoo_start_processing(); /// //Send data into queue, read results, etc /// ret_val=pl.call_cuckoo_stop_processing(); /// while pl.call_cuckoo_has_processing_stopped() == 0 { /// //don't continue/exit thread until plugin is stopped /// } /// // later on /// pl.call_cuckoo_reset_processing(); /// //restart /// ``` pub fn call_cuckoo_reset_processing(&self) -> u32 { let cuckoo_reset_processing_ref = self.cuckoo_reset_processing.lock().unwrap(); unsafe { cuckoo_reset_processing_ref() } } /// 
#Description /// /// Returns whether all internal processing within the plugin has stopped, /// meaning it's safe to exit the calling thread after a call to /// cuckoo_stop_processing() /// /// #Arguments /// /// * None /// /// #Returns /// /// 1 if all internal processing has been stopped. /// 0 if processing activity is still in progress /// /// #Example /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// let ret_val=pl.call_cuckoo_start_processing(); /// //Things happen in between, within a loop /// pl.call_cuckoo_stop_processing(); /// while pl.call_cuckoo_has_processing_stopped() == 0 { /// //don't continue/exit thread until plugin is stopped /// } /// ``` pub fn call_cuckoo_has_processing_stopped(&self) -> u32 { let cuckoo_has_processing_stopped_ref = self.cuckoo_has_processing_stopped.lock().unwrap(); unsafe { cuckoo_has_processing_stopped_ref() } } /// #Description /// /// Retrieves a JSON list of the plugin's current stats for all running /// devices. In the case of a plugin running GPUs in parallel, it should /// be a list of running devices. In the case of a CPU plugin, it will /// most likely be a single CPU. 
e.g: /// /// ```text /// [{ /// device_id:"0", /// device_name:"NVIDIA GTX 1080", /// last_start_time: "23928329382", /// last_end_time: "23928359382", /// last_solution_time: "3382", /// }, /// { /// device_id:"1", /// device_name:"NVIDIA GTX 1080ti", /// last_start_time: "23928329382", /// last_end_time: "23928359382", /// last_solution_time: "3382", /// }] /// ``` /// #Arguments /// /// * `stat_bytes` (OUT) A reference to a block of [u8] bytes to fill with /// the JSON result array /// /// * `stat_bytes_len` (IN-OUT) When called, this should contain the /// maximum number of bytes the plugin should write to `stat_bytes`. Upon return, /// this is filled with the number of bytes that were written to `stat_bytes`. /// /// #Returns /// /// 0 if okay, with the result is stored in `stat_bytes` /// 3 if the provided array is too short /// /// #Example /// /// ``` /// # use cuckoo_miner::PluginLibrary; /// # use std::env; /// # use std::path::PathBuf; /// # static DLL_SUFFIX: &str = ".cuckooplugin"; /// # let mut d = PathBuf::from(env!("CARGO_MANIFEST_DIR")); /// # d.push(format!("./target/debug/plugins/lean_cpu_16{}", DLL_SUFFIX).as_str()); /// # let plugin_path = d.to_str().unwrap(); /// let pl=PluginLibrary::new(plugin_path).unwrap(); /// pl.call_cuckoo_init(); /// ///start plugin+processing, and then within the loop: /// let mut stat_bytes:[u8;1024]=[0;1024]; /// let mut stat_len=stat_bytes.len() as u32; /// //get a list of json parameters /// let parameter_list=pl.call_cuckoo_get_stats(&mut stat_bytes, /// &mut stat_len); /// ``` /// pub fn call_cuckoo_get_stats(&self, stat_bytes: &mut [u8], stat_bytes_len: &mut u32) -> u32 { let cuckoo_get_stats_ref = self.cuckoo_get_stats.lock().unwrap(); unsafe { cuckoo_get_stats_ref(stat_bytes.as_mut_ptr(), stat_bytes_len) } } }
mimblewimble/cuckoo-miner
src/cuckoo_sys/manager.rs
Rust
apache-2.0
31,387
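Each plugin entry point above is stored as a raw resolved symbol behind its own Mutex, and every wrapper locks before calling, because the plugins themselves are not thread-safe. The shape of that pattern can be sketched without libloading or a real plugin, using an ordinary function pointer as a stand-in (`fake_cuckoo_call` and `Plugin` here are illustrative, not part of cuckoo-miner):

```rust
use std::sync::Mutex;

// Stand-in for a symbol that would normally be resolved from the
// shared library via libloading. Pretends a solution is found for
// any non-empty header.
fn fake_cuckoo_call(header: &[u8]) -> u32 {
    if header.is_empty() { 0 } else { 1 }
}

// Mirrors PluginLibrary: the function pointer lives behind a Mutex so
// only one thread calls into the (non-thread-safe) plugin at a time.
struct Plugin {
    cuckoo_call: Mutex<fn(&[u8]) -> u32>,
}

impl Plugin {
    fn call_cuckoo(&self, header: &[u8]) -> u32 {
        // Lock, then call through the guarded function pointer.
        let f = self.cuckoo_call.lock().unwrap();
        (*f)(header)
    }
}

fn main() {
    let pl = Plugin {
        cuckoo_call: Mutex::new(fake_cuckoo_call as fn(&[u8]) -> u32),
    };
    assert_eq!(pl.call_cuckoo(b"header"), 1);
    assert_eq!(pl.call_cuckoo(&[]), 0);
    println!("ok");
}
```

The real loader additionally keeps the `libloading::Library` alive in the same struct so the raw symbols it handed out never outlive the mapped library.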
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.jmeter.visualizers; import java.awt.Dimension; import java.awt.FlowLayout; import javax.swing.ImageIcon; import javax.swing.JPanel; import javax.swing.JLabel; import org.apache.jmeter.util.JMeterUtils; import org.apache.jmeter.monitor.util.Stats; /** * The purpose of ServerPanel is to display a unique server and its current * status. The server label consists of the protocol, host and port. For example, * a system with multiple Tomcats running on different ports would be shown as * different ServerPanels.
*/ public class ServerPanel extends JPanel implements MonitorGuiListener { private static final long serialVersionUID = 240L; private JLabel serverField; private JLabel timestampField; /** * Preference size for the health icon */ private final Dimension prefsize = new Dimension(25, 75); private JLabel healthIcon; private JLabel loadIcon; /** * Health Icons */ private static final ImageIcon HEALTHY = JMeterUtils.getImage("monitor-healthy.gif"); private static final ImageIcon ACTIVE = JMeterUtils.getImage("monitor-active.gif"); private static final ImageIcon WARNING = JMeterUtils.getImage("monitor-warning.gif"); private static final ImageIcon DEAD = JMeterUtils.getImage("monitor-dead.gif"); /** * Load Icons */ private static final ImageIcon LOAD_0 = JMeterUtils.getImage("monitor-load-0.gif"); private static final ImageIcon LOAD_1 = JMeterUtils.getImage("monitor-load-1.gif"); private static final ImageIcon LOAD_2 = JMeterUtils.getImage("monitor-load-2.gif"); private static final ImageIcon LOAD_3 = JMeterUtils.getImage("monitor-load-3.gif"); private static final ImageIcon LOAD_4 = JMeterUtils.getImage("monitor-load-4.gif"); private static final ImageIcon LOAD_5 = JMeterUtils.getImage("monitor-load-5.gif"); private static final ImageIcon LOAD_6 = JMeterUtils.getImage("monitor-load-6.gif"); private static final ImageIcon LOAD_7 = JMeterUtils.getImage("monitor-load-7.gif"); private static final ImageIcon LOAD_8 = JMeterUtils.getImage("monitor-load-8.gif"); private static final ImageIcon LOAD_9 = JMeterUtils.getImage("monitor-load-9.gif"); private static final ImageIcon LOAD_10 = JMeterUtils.getImage("monitor-load-10.gif"); /** * Creates a new server panel for a monitored server * * @param model * information about the monitored server */ public ServerPanel(MonitorModel model) { super(); init(model); } /** * * @deprecated Only for use in unit testing */ @Deprecated public ServerPanel() { // log.warn("Only for use in unit testing"); } /** * Init will create the JLabel 
widgets for the host, health, load and * timestamp. * * @param model information about the monitored server */ private void init(MonitorModel model) { this.setLayout(new FlowLayout()); serverField = new JLabel(model.getURL()); this.add(serverField); healthIcon = new JLabel(getHealthyImageIcon(model.getHealth())); healthIcon.setPreferredSize(prefsize); this.add(healthIcon); loadIcon = new JLabel(getLoadImageIcon(model.getLoad())); this.add(loadIcon); timestampField = new JLabel(model.getTimestampString()); this.add(timestampField); } /** * Static method for getting the right ImageIcon for the health. * * @param health * @return image for the status */ private static ImageIcon getHealthyImageIcon(int health) { ImageIcon i = null; switch (health) { case Stats.HEALTHY: i = HEALTHY; break; case Stats.ACTIVE: i = ACTIVE; break; case Stats.WARNING: i = WARNING; break; case Stats.DEAD: i = DEAD; break; default: // better than returning null ... throw new IllegalStateException("Unexpected health value: " + health); } return i; } /** * Static method looks up the right ImageIcon from the load value. * * @param load * @return image for the load */ private static ImageIcon getLoadImageIcon(int load) { if (load == 0) { return LOAD_0; } else if (load > 0 && load <= 10) { return LOAD_1; } else if (load > 10 && load <= 20) { return LOAD_2; } else if (load > 20 && load <= 30) { return LOAD_3; } else if (load > 30 && load <= 40) { return LOAD_4; } else if (load > 40 && load <= 50) { return LOAD_5; } else if (load > 50 && load <= 60) { return LOAD_6; } else if (load > 60 && load <= 70) { return LOAD_7; } else if (load > 70 && load <= 80) { return LOAD_8; } else if (load > 80 && load <= 90) { return LOAD_9; } else { return LOAD_10; } } /** * Method will update the ServerPanel's health, load, and timestamp. For * efficiency, it uses the static method to lookup the images. 
*/ @Override public void updateGui(MonitorModel stat) { loadIcon.setIcon(getLoadImageIcon(stat.getLoad())); healthIcon.setIcon(getHealthyImageIcon(stat.getHealth())); timestampField.setText(stat.getTimestampString()); this.updateGui(); } /** * update the gui */ @Override public void updateGui() { this.repaint(); } }
yuyupapa/OpenSource
apache-jmeter-3.0/src/monitor/components/org/apache/jmeter/visualizers/ServerPanel.java
Java
apache-2.0
6,694
<!-- Navigation -->
<nav class="navbar navbar-inverse navbar-fixed-top" role="navigation">
    <div class="container">
        <!-- Brand and toggle get grouped for better mobile display -->
        <div class="navbar-header">
            <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="index.php">TechITEasy</a>
        </div>
        <!-- Collect the nav links, forms, and other content for toggling -->
        <div class="collapse navbar-collapse" style="background-color: blue;" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav navbar-right">
                <li>
                    <a href="about.php">About Us</a>
                </li>
                <li>
                    <a href="services.php">Services</a>
                </li>
                <li>
                    <a href="contact.php">Contact Us</a>
                </li>
            </ul>
        </div>
        <!-- /.navbar-collapse -->
    </div>
    <!-- /.container -->
</nav>
aarocard/TechITEasy
navigation.php
PHP
apache-2.0
1,429
package com.droidutils.http;

import com.droidutils.http.builder.HttpRequest;
import com.droidutils.http.builder.HttpResponse;
import com.droidutils.http.cache.Cache;

/**
 * Created by Misha on 08.09.2014.
 */
public class HttpExecutor {

    private HttpConnection mHttpConnection;

    public HttpExecutor(HttpConnection httpConnection) {
        mHttpConnection = httpConnection;
    }

    public <T> HttpResponse execute(HttpRequest httpRequest, Class<T> responseType) throws Exception {
        return executeRequest(httpRequest, responseType, null);
    }

    public <T> HttpResponse execute(HttpRequest httpRequest, Class<T> responseType, Cache<T> cache) throws Exception {
        return executeRequest(httpRequest, responseType, cache);
    }

    private <T> HttpResponse executeRequest(HttpRequest httpRequest, Class<T> responseType, Cache<T> cache) throws Exception {
        HttpResponse result = null;
        switch (httpRequest.getHttpMethod()) {
            case GET:
                result = mHttpConnection.get(httpRequest, responseType, cache);
                break;
            case POST:
                result = mHttpConnection.post(httpRequest, responseType, cache);
                break;
            case HEAD:
                result = mHttpConnection.head(httpRequest, responseType, cache);
                break;
            case OPTIONS:
                result = mHttpConnection.options(httpRequest, responseType, cache);
                break;
            case PUT:
                result = mHttpConnection.put(httpRequest, responseType, cache);
                break;
            case DELETE:
                result = mHttpConnection.delete(httpRequest, responseType, cache);
                break;
            case TRACE:
                result = mHttpConnection.trace(httpRequest, responseType, cache);
                break;
        }
        return result;
    }
}
justplay1/Droidutils
library/src/main/java/com/droidutils/http/HttpExecutor.java
Java
apache-2.0
1,908
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import configparser
import io
import os
import subprocess

from rally.common import logging
from rally.utils import encodeutils

LOG = logging.getLogger(__name__)


def check_output(*args, **kwargs):
    """Run command with arguments and return its output.

    If the exit code was non-zero it raises a CalledProcessError. The
    CalledProcessError object will have the return code in the returncode
    attribute and output in the output attribute.

    The difference between check_output from subprocess package and this
    function:

     * Additional arguments:
       - "msg_on_err" argument. It is a message that should be written in
         case of error. Reduces a number of try...except blocks
       - "debug_output" argument (defaults to True). Print or not output to
         LOG.debug
     * stderr is hardcoded to stdout
     * In case of error, prints failed command and output to LOG.error
     * Prints output to LOG.debug
    """
    msg_on_err = kwargs.pop("msg_on_err", None)
    debug_output = kwargs.pop("debug_output", True)

    kwargs["stderr"] = subprocess.STDOUT
    try:
        output = subprocess.check_output(*args, **kwargs)
    except subprocess.CalledProcessError as exc:
        if msg_on_err:
            LOG.error(msg_on_err)
        LOG.error("Failed cmd: '%s'" % exc.cmd)
        LOG.error("Error output: '%s'" % encodeutils.safe_decode(exc.output))
        raise

    output = encodeutils.safe_decode(output)
    if output and debug_output:
        LOG.debug("Subprocess output: '%s'" % output)

    return output


def create_dir(dir_path):
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
    return dir_path


def extend_configfile(extra_options, conf_path):
    conf_object = configparser.ConfigParser()
    conf_object.optionxform = str
    conf_object.read(conf_path)

    conf_object = add_extra_options(extra_options, conf_object)

    with open(conf_path, "w") as configfile:
        conf_object.write(configfile)

    raw_conf = io.StringIO()
    conf_object.write(raw_conf)

    return raw_conf.getvalue()


def add_extra_options(extra_options, conf_object):
    conf_object.optionxform = str
    for section in extra_options:
        if section not in (conf_object.sections() + ["DEFAULT"]):
            conf_object.add_section(section)
        for option, value in extra_options[section].items():
            conf_object.set(section, option, value)
    return conf_object
openstack/rally
rally/verification/utils.py
Python
apache-2.0
3,055
/*
 * $Id:$
 * IzPack - Copyright 2001-2008 Julien Ponge, All Rights Reserved.
 *
 * http://izpack.org/
 * http://izpack.codehaus.org/
 *
 * Copyright 2007 Klaus Bartz
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.izforge.izpack.compiler.packager.impl;

import com.izforge.izpack.api.data.*;
import com.izforge.izpack.api.rules.Condition;
import com.izforge.izpack.compiler.compressor.PackCompressor;
import com.izforge.izpack.compiler.data.CompilerData;
import com.izforge.izpack.compiler.listener.PackagerListener;
import com.izforge.izpack.compiler.merge.CompilerPathResolver;
import com.izforge.izpack.compiler.merge.PanelMerge;
import com.izforge.izpack.compiler.packager.IPackager;
import com.izforge.izpack.compiler.stream.JarOutputStream;
import com.izforge.izpack.data.CustomData;
import com.izforge.izpack.data.PackInfo;
import com.izforge.izpack.merge.MergeManager;
import com.izforge.izpack.merge.resolve.MergeableResolver;
import com.izforge.izpack.util.FileUtil;
import com.izforge.izpack.util.IoHelper;

import java.io.*;
import java.net.URL;
import java.util.*;
import java.util.jar.Manifest;
import java.util.zip.ZipInputStream;

/**
 * The packager base class. The packager interface <code>IPackager</code> is used by the compiler
 * to put files into an installer, and create the actual installer files. The packager
 * implementation depends on different requirements (e.g. normal packager versus multi volume
 * packager). This class implements the commonly used methods, which can also be overloaded as
 * needed.
 *
 * @author Klaus Bartz
 */
public abstract class PackagerBase implements IPackager
{
    /**
     * Path to resources in jar
     */
    public static final String RESOURCES_PATH = "resources/";

    /**
     * Variables.
     */
    private final Properties properties;

    /**
     * The listeners.
     */
    private final PackagerListener listener;

    /**
     * Executable zipped output stream. First to open, last to close.
     * Attention! This is our own JarOutputStream, not the java standard!
     */
    private final JarOutputStream installerJar;

    /**
     * The merge manager.
     */
    private final MergeManager mergeManager;

    /**
     * The path resolver.
     */
    private final CompilerPathResolver pathResolver;

    /**
     * The mergeable resolver.
     */
    private final MergeableResolver mergeableResolver;

    /**
     * The compression format to be used for pack compression.
     */
    private final PackCompressor compressor;

    /**
     * The compiler data.
     */
    private final CompilerData compilerData;

    /**
     * Installer requirements.
     */
    private List<InstallerRequirement> installerRequirements;

    /**
     * Basic installer info.
     */
    private Info info;

    /**
     * GUI preferences.
     */
    private GUIPrefs guiPrefs;

    /**
     * Console preferences.
     */
    private ConsolePrefs consolePrefs;

    /**
     * The ordered panels.
     */
    protected List<Panel> panelList = new ArrayList<Panel>();

    /**
     * The ordered pack information.
     */
    private final List<PackInfo> packsList = new ArrayList<PackInfo>();

    /**
     * The ordered language pack locale names.
     */
    private List<String> langpackNameList = new ArrayList<String>();

    /**
     * The ordered custom actions information.
     */
    private List<CustomData> customDataList = new ArrayList<CustomData>();

    /**
     * The language pack URLs keyed by locale name (e.g. de_CH).
     */
    private final Map<String, URL> installerResourceURLMap = new HashMap<String, URL>();

    /**
     * The conditions.
     */
    private final Map<String, Condition> rules = new HashMap<String, Condition>();

    /**
     * Dynamic variables.
     */
    private final Map<String, List<DynamicVariable>> dynamicVariables = new HashMap<String, List<DynamicVariable>>();

    /**
     * Dynamic conditions.
     */
    private List<DynamicInstallerRequirementValidator> dynamicInstallerRequirements = new ArrayList<DynamicInstallerRequirementValidator>();

    /**
     * Jar file URLs whose contents will be copied into the installer.
     */
    private Set<Object[]> includedJarURLs = new HashSet<Object[]>();

    /**
     * Tracks files which are already written into the container file.
     */
    private Map<FilterOutputStream, Set<String>> alreadyWrittenFiles = new HashMap<FilterOutputStream, Set<String>>();

    /**
     * Constructs a <tt>PackagerBase</tt>.
     *
     * @param properties        the properties
     * @param listener          the packager listener
     * @param installerJar      the installer jar output stream
     * @param mergeManager      the merge manager
     * @param pathResolver      the path resolver
     * @param mergeableResolver the mergeable resolver
     * @param compressor        the pack compressor
     * @param compilerData      the compiler data
     */
    public PackagerBase(Properties properties, PackagerListener listener, JarOutputStream installerJar,
                        MergeManager mergeManager, CompilerPathResolver pathResolver,
                        MergeableResolver mergeableResolver, PackCompressor compressor, CompilerData compilerData)
    {
        this.properties = properties;
        this.listener = listener;
        this.installerJar = installerJar;
        this.mergeManager = mergeManager;
        this.pathResolver = pathResolver;
        this.mergeableResolver = mergeableResolver;
        this.compressor = compressor;
        this.compilerData = compilerData;
    }

    @Override
    public void addCustomJar(CustomData ca, URL url)
    {
        if (ca != null)
        {
            customDataList.add(ca); // serialized to keep order/variables correct
        }

        if (url != null)
        {
            addJarContent(url); // each included once, no matter how many times added
        }
    }

    @Override
    public void addJarContent(URL jarURL)
    {
        sendMsg("Adding content of jar: " + jarURL.getFile(), PackagerListener.MSG_VERBOSE);
        mergeManager.addResourceToMerge(mergeableResolver.getMergeableFromURL(jarURL));
    }

    @Override
    public void addLangPack(String iso3, URL xmlURL, URL flagURL)
    {
        sendMsg("Adding langpack: " + iso3, PackagerListener.MSG_VERBOSE);
        // put data & flag as entries in installer, and keep array of iso3's
        // names
        langpackNameList.add(iso3);
        addResource("flag." + iso3, flagURL);
        installerResourceURLMap.put("langpacks/" + iso3 + ".xml", xmlURL);
    }

    @Override
    public void addNativeLibrary(String name, URL url)
    {
        sendMsg("Adding native library: " + name, PackagerListener.MSG_VERBOSE);
        installerResourceURLMap.put("native/" + name, url);
    }

    @Override
    public void addNativeUninstallerLibrary(CustomData data)
    {
        customDataList.add(data); // serialized to keep order/variables
        // correct
    }

    @Override
    public void addPack(PackInfo pack)
    {
        packsList.add(pack);
    }

    @Override
    public void addPanel(Panel panel)
    {
        sendMsg("Adding panel: " + panel.getPanelId() + " :: Classname : " + panel.getClassName());
        panelList.add(panel); // serialized to keep order/variables correct
        PanelMerge mergeable = pathResolver.getPanelMerge(panel.getClassName());
        mergeManager.addResourceToMerge(mergeable);
    }

    @Override
    public void addResource(String resId, URL url)
    {
        sendMsg("Adding resource: " + resId, PackagerListener.MSG_VERBOSE);
        installerResourceURLMap.put(resId, url);
    }

    @Override
    public List<PackInfo> getPacksList()
    {
        return packsList;
    }

    @Override
    public List<Panel> getPanelList()
    {
        return panelList;
    }

    @Override
    public Properties getVariables()
    {
        return properties;
    }

    @Override
    public void setGUIPrefs(GUIPrefs prefs)
    {
        sendMsg("Setting the GUI preferences", PackagerListener.MSG_VERBOSE);
        guiPrefs = prefs;
    }

    @Override
    public void setConsolePrefs(ConsolePrefs prefs)
    {
        sendMsg("Setting the console preferences", PackagerListener.MSG_VERBOSE);
        consolePrefs = prefs;
    }

    @Override
    public void setInfo(Info info)
    {
        sendMsg("Setting the installer information", PackagerListener.MSG_VERBOSE);
        this.info = info;
        if (!compressor.useStandardCompression() && compressor.getDecoderMapperName() != null)
        {
            this.info.setPackDecoderClassName(compressor.getDecoderMapperName());
        }
    }

    public Info getInfo()
    {
        return info;
    }

    /**
     * @return the rules
     */
    @Override
    public Map<String, Condition> getRules()
    {
        return this.rules;
    }

    /**
     * @return the dynamic variables
     */
    @Override
    public Map<String, List<DynamicVariable>> getDynamicVariables()
    {
        return dynamicVariables;
    }

    /**
     * @return the dynamic conditions
     */
    @Override
    public List<DynamicInstallerRequirementValidator> getDynamicInstallerRequirements()
    {
        return dynamicInstallerRequirements;
    }

    @Override
    public void addInstallerRequirements(List<InstallerRequirement> conditions)
    {
        this.installerRequirements = conditions;
    }

    @Override
    public void createInstaller() throws Exception
    {
        // preliminary work
        info.setInstallerBase(compilerData.getOutput().replaceAll(".jar", ""));

        sendStart();

        writeInstaller();

        // Finish up. closeAlways is a hack for pack compressions other than
        // default. Some of it (e.g. BZip2) closes the slave of it also.
        // But this should not be because the jar stream should be open
        // for the next pack. Therefore an own JarOutputStream will be used
        // which close method will be blocked.
        getInstallerJar().closeAlways();

        sendStop();
    }

    /**
     * Determines if each pack is to be included in a separate jar.
     *
     * @return <tt>true</tt> if {@link Info#getWebDirURL()} is non-null
     */
    protected boolean packSeparateJars()
    {
        return info != null && info.getWebDirURL() != null;
    }

    /**
     * Writes the installer.
     *
     * @throws IOException for any I/O error
     */
    protected void writeInstaller() throws IOException
    {
        // write the installer jar. MUST be first so manifest is not
        // overwritten by an included jar
        writeManifest();
        writeSkeletonInstaller();

        writeInstallerObject("info", info);
        writeInstallerObject("vars", properties);
        writeInstallerObject("ConsolePrefs", consolePrefs);
        writeInstallerObject("GUIPrefs", guiPrefs);
        writeInstallerObject("panelsOrder", panelList);
        writeInstallerObject("customData", customDataList);
        writeInstallerObject("langpacks.info", langpackNameList);
        writeInstallerObject("rules", rules);
        writeInstallerObject("dynvariables", dynamicVariables);
        writeInstallerObject("dynconditions", dynamicInstallerRequirements);
        writeInstallerObject("installerrequirements", installerRequirements);

        writeInstallerResources();
        writeIncludedJars();

        // Pack File Data may be written to separate jars
        writePacks();
    }

    /**
     * Write manifest in the install jar.
     *
     * @throws IOException for any I/O error
     */
    protected void writeManifest() throws IOException
    {
        Manifest manifest = new Manifest(PackagerBase.class.getResourceAsStream("MANIFEST.MF"));
        File tempManifest = com.izforge.izpack.util.file.FileUtils.createTempFile("MANIFEST", ".MF");
        manifest.write(new FileOutputStream(tempManifest));
        mergeManager.addResourceToMerge(tempManifest.getAbsolutePath(), "META-INF/MANIFEST.MF");
    }

    /**
     * Write skeleton installer to the installer jar.
     *
     * @throws IOException for any I/O error
     */
    protected void writeSkeletonInstaller() throws IOException
    {
        sendMsg("Copying the skeleton installer", PackagerListener.MSG_VERBOSE);

        mergeManager.addResourceToMerge("com/izforge/izpack/installer/");
        mergeManager.addResourceToMerge("org/picocontainer/");
        mergeManager.addResourceToMerge("com/izforge/izpack/img/");
        mergeManager.addResourceToMerge("com/izforge/izpack/bin/icons/");
        mergeManager.addResourceToMerge("com/izforge/izpack/api/");
        mergeManager.addResourceToMerge("com/izforge/izpack/event/");
        mergeManager.addResourceToMerge("com/izforge/izpack/core/");
        mergeManager.addResourceToMerge("com/izforge/izpack/data/");
        mergeManager.addResourceToMerge("com/izforge/izpack/gui/");
        mergeManager.addResourceToMerge("com/izforge/izpack/merge/");
        mergeManager.addResourceToMerge("com/izforge/izpack/util/");
        mergeManager.addResourceToMerge("org/apache/regexp/");
        mergeManager.addResourceToMerge("com/coi/tools/");
        mergeManager.addResourceToMerge("org/apache/tools/zip/");
        mergeManager.addResourceToMerge("org/apache/commons/io/FilenameUtils.class");
        mergeManager.addResourceToMerge("jline/");
        mergeManager.addResourceToMerge("org/fusesource/");
        mergeManager.addResourceToMerge("META-INF/native/");
        mergeManager.merge(installerJar);
    }

    /**
     * Write an arbitrary object to installer jar.
     *
     * @throws IOException for any I/O error
     */
    protected void writeInstallerObject(String entryName, Object object) throws IOException
    {
        installerJar.putNextEntry(new org.apache.tools.zip.ZipEntry(RESOURCES_PATH + entryName));
        ObjectOutputStream out = new ObjectOutputStream(installerJar);
        try
        {
            out.writeObject(object);
        }
        catch (IOException e)
        {
            throw new IOException("Error serializing instance of " + object.getClass().getName()
                                          + " as entry \"" + entryName + "\"", e);
        }
        finally
        {
            out.flush();
            installerJar.closeEntry();
        }
    }

    /**
     * Write the data referenced by URL to installer jar.
     *
     * @throws IOException for any I/O error
     */
    protected void writeInstallerResources() throws IOException
    {
        sendMsg("Copying " + installerResourceURLMap.size() + " files into installer");

        for (Map.Entry<String, URL> stringURLEntry : installerResourceURLMap.entrySet())
        {
            URL url = stringURLEntry.getValue();
            InputStream in = url.openStream();

            org.apache.tools.zip.ZipEntry newEntry = new org.apache.tools.zip.ZipEntry(
                    RESOURCES_PATH + stringURLEntry.getKey());
            long dateTime = FileUtil.getFileDateTime(url);
            if (dateTime != -1)
            {
                newEntry.setTime(dateTime);
            }
            installerJar.putNextEntry(newEntry);

            IoHelper.copyStream(in, installerJar);
            installerJar.closeEntry();
            in.close();
        }
    }

    /**
     * Copy included jars to installer jar.
     *
     * @throws IOException for any I/O error
     */
    protected void writeIncludedJars() throws IOException
    {
        sendMsg("Merging " + includedJarURLs.size() + " jars into installer");

        for (Object[] includedJarURL : includedJarURLs)
        {
            InputStream is = ((URL) includedJarURL[0]).openStream();
            ZipInputStream inJarStream = new ZipInputStream(is);
            IoHelper.copyZip(inJarStream, installerJar, (List<String>) includedJarURL[1], alreadyWrittenFiles);
        }
    }

    /**
     * Write packs to the installer jar, or each to a separate jar.
     *
     * @throws IOException for any I/O error
     */
    protected abstract void writePacks() throws IOException;

    /**
     * Returns the installer jar stream.
     *
     * @return the installer jar stream
     */
    protected JarOutputStream getInstallerJar()
    {
        return installerJar;
    }

    /**
     * Returns the pack compressor.
     *
     * @return the pack compressor
     */
    protected PackCompressor getCompressor()
    {
        return compressor;
    }

    /**
     * Dispatches a message to the listeners.
     *
     * @param job the job description.
     */
    protected void sendMsg(String job)
    {
        sendMsg(job, PackagerListener.MSG_INFO);
    }

    /**
     * Dispatches a message to the listeners at specified priority.
     *
     * @param job      the job description.
     * @param priority the message priority.
     */
    protected void sendMsg(String job, int priority)
    {
        if (listener != null)
        {
            listener.packagerMsg(job, priority);
        }
    }

    /**
     * Dispatches a start event to the listeners.
     */
    protected void sendStart()
    {
        if (listener != null)
        {
            listener.packagerStart();
        }
    }

    /**
     * Dispatches a stop event to the listeners.
     */
    protected void sendStop()
    {
        if (listener != null)
        {
            listener.packagerStop();
        }
    }
}
bradcfisher/izpack
izpack-compiler/src/main/java/com/izforge/izpack/compiler/packager/impl/PackagerBase.java
Java
apache-2.0
17,943
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!-- NewPage --> <html lang="en"> <head> <!-- Generated by javadoc (version 1.7.0_25) on Wed Jan 08 11:58:51 EST 2014 --> <title>Uses of Class cpsr.planning.ertapprox.actionensembles.ActionEnsemblesFeatureBuilder</title> <meta name="date" content="2014-01-08"> <link rel="stylesheet" type="text/css" href="../../../../../stylesheet.css" title="Style"> </head> <body> <script type="text/javascript"><!-- if (location.href.indexOf('is-external=true') == -1) { parent.document.title="Uses of Class cpsr.planning.ertapprox.actionensembles.ActionEnsemblesFeatureBuilder"; } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a name="navbar_top"> <!-- --> </a><a href="#skip-navbar_top" title="Skip navigation links"></a><a name="navbar_top_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../../overview-summary.html">Overview</a></li> <li><a href="../package-summary.html">Package</a></li> <li><a href="../../../../../cpsr/planning/ertapprox/actionensembles/ActionEnsemblesFeatureBuilder.html" title="class in cpsr.planning.ertapprox.actionensembles">Class</a></li> <li class="navBarCell1Rev">Use</li> <li><a href="../package-tree.html">Tree</a></li> <li><a href="../../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../../index-files/index-1.html">Index</a></li> <li><a href="../../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev</li> <li>Next</li> </ul> <ul class="navList"> <li><a href="../../../../../index.html?cpsr/planning/ertapprox/actionensembles/class-use/ActionEnsemblesFeatureBuilder.html" target="_top">Frames</a></li> <li><a href="ActionEnsemblesFeatureBuilder.html" target="_top">No Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a 
href="../../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_top"> <!-- --> </a></div> <!-- ========= END OF TOP NAVBAR ========= --> <div class="header"> <h2 title="Uses of Class cpsr.planning.ertapprox.actionensembles.ActionEnsemblesFeatureBuilder" class="title">Uses of Class<br>cpsr.planning.ertapprox.actionensembles.ActionEnsemblesFeatureBuilder</h2> </div> <div class="classUseContainer">No usage of cpsr.planning.ertapprox.actionensembles.ActionEnsemblesFeatureBuilder</div> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a name="navbar_bottom"> <!-- --> </a><a href="#skip-navbar_bottom" title="Skip navigation links"></a><a name="navbar_bottom_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../../overview-summary.html">Overview</a></li> <li><a href="../package-summary.html">Package</a></li> <li><a href="../../../../../cpsr/planning/ertapprox/actionensembles/ActionEnsemblesFeatureBuilder.html" title="class in cpsr.planning.ertapprox.actionensembles">Class</a></li> <li class="navBarCell1Rev">Use</li> <li><a href="../package-tree.html">Tree</a></li> <li><a href="../../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../../index-files/index-1.html">Index</a></li> <li><a href="../../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev</li> <li>Next</li> </ul> <ul class="navList"> <li><a href="../../../../../index.html?cpsr/planning/ertapprox/actionensembles/class-use/ActionEnsemblesFeatureBuilder.html" target="_top">Frames</a></li> <li><a href="ActionEnsemblesFeatureBuilder.html" target="_top">No Frames</a></li> </ul> <ul class="navList" 
id="allclasses_navbar_bottom"> <li><a href="../../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </body> </html>
williamleif/PSRToolbox
doc/cpsr/planning/ertapprox/actionensembles/class-use/ActionEnsemblesFeatureBuilder.html
HTML
apache-2.0
4,504
package net.darkslave.test;


public class TestExecutor {

    private TestExecutor() {}


    public static Result measure(Target target, int warming, int measure) throws Exception {
        if (target == null)
            throw new IllegalArgumentException("Target method can't be null");

        if (warming < 0)
            throw new IllegalArgumentException("Warming count " + warming + " is not correct");

        if (measure < 1)
            throw new IllegalArgumentException("Measure count " + measure + " is not correct");

        long[] time = new long[measure];

        // warm-up runs
        for (int index = 0; index < warming; index++) {
            target.run();
        }

        // target measurements
        for (int index = 0; index < measure; index++) {
            long started = System.nanoTime();
            target.run();
            time[index] = System.nanoTime() - started;
        }

        // gather statistics
        long timeMax = time[0];
        long timeMin = time[0];
        double summAvg = 0;
        double summStd = 0;

        for (int index = 0; index < measure; index++) {
            long d = time[index];

            if (timeMax < d)
                timeMax = d;

            if (timeMin > d)
                timeMin = d;

            summAvg += d;
            summStd += d * d;
        }

        double timeAvg = summAvg / measure;
        double timeStd = Math.sqrt(summStd / measure - timeAvg * timeAvg);

        return new Result(timeAvg, timeMin, timeMax, timeStd);
    }


    @FunctionalInterface
    public static interface Target {
        void run() throws Exception;
    }


    public static class Result {
        private final double avg;
        private final double min;
        private final double max;
        private final double std;

        private Result(double avg, double min, double max, double std) {
            this.avg = avg;
            this.min = min;
            this.max = max;
            this.std = std;
        }

        public double avg() {
            return avg;
        }

        public double min() {
            return min;
        }

        public double max() {
            return max;
        }

        public double std() {
            return std;
        }

        private String toStringCache;

        @Override
        public String toString() {
            if (toStringCache == null) {
                StringBuilder result = new StringBuilder();
                result.append("Avg: ").append(TimeFormat.format(avg)).append("; ");
                result.append("Min: ").append(TimeFormat.format(min)).append("; ");
                result.append("Max: ").append(TimeFormat.format(max)).append("; ");
                result.append("Std: ").append(TimeFormat.format(std));
                toStringCache = result.toString();
            }
            return toStringCache;
        }
    }


    public static enum TimeFormat {
        NANOSECONDS (1, "ns"),
        MICROSECONDS(1000, "μs"),
        MILLISECONDS(1000 * 1000, "ms"),
        SECONDS     (1000 * 1000 * 1000, "s");

        private final double factor;
        private final String suffix;

        private TimeFormat(double factor, String suffix) {
            this.factor = factor;
            this.suffix = suffix;
        }

        private String format0(double source) {
            double absolute = Math.abs(source);
            String format;

            if (absolute < 20) {
                format = "%.2f%s";
            } else if (absolute < 200) {
                format = "%.1f%s";
            } else {
                format = "%.0f%s";
            }

            return String.format(format, source, suffix);
        }

        public static String format(double source) {
            double absolute = Math.abs(source);
            TimeFormat result;

            if (absolute >= SECONDS.factor) {
                result = SECONDS;
            } else if (absolute >= MILLISECONDS.factor) {
                result = MILLISECONDS;
            } else if (absolute >= MICROSECONDS.factor) {
                result = MICROSECONDS;
            } else {
                result = NANOSECONDS;
            }

            return result.format0(source / result.factor);
        }
    }
}
TemkaS/util
src/main/java/net/darkslave/test/TestExecutor.java
Java
apache-2.0
4,362
// Copyright (c) Six Labors.
// Licensed under the Apache License, Version 2.0.

using Xunit;

namespace SixLabors.Fonts.Tests
{
    public class Accents
    {
        [Theory]
        [InlineData('á')]
        [InlineData('é')]
        [InlineData('í')]
        [InlineData('ó')]
        [InlineData('ú')]
        [InlineData('ç')]
        [InlineData('ã')]
        [InlineData('õ')]
        public void MeasuringAccentedCharacterDoesNotThrow(char c)
        {
            FontFamily arial = new FontCollection().Add(TestFonts.OpenSansFile);
            var font = new Font(arial, 1f, FontStyle.Regular);
            FontRectangle size = TextMeasurer.Measure(c.ToString(), new TextOptions(font));
        }

        [Theory]
        [InlineData('á')]
        [InlineData('é')]
        [InlineData('í')]
        [InlineData('ó')]
        [InlineData('ú')]
        [InlineData('ç')]
        [InlineData('ã')]
        [InlineData('õ')]
        public void MeasuringWordWithAccentedCharacterDoesNotThrow(char c)
        {
            FontFamily arial = new FontCollection().Add(TestFonts.OpenSansFile);
            var font = new Font(arial, 1f, FontStyle.Regular);
            FontRectangle size = TextMeasurer.Measure($"abc{c}def", new TextOptions(font));
        }
    }
}
SixLabors/Fonts
tests/SixLabors.Fonts.Tests/Accents.cs
C#
apache-2.0
1,291
# This file contains style configuration - cpplint flags.

set(MAX_LINE_LENGTH 100)

set(STYLE_FILTER)
set(STYLE_FILTER ${STYLE_FILTER},-legal/copyright)
set(STYLE_FILTER ${STYLE_FILTER},-readability/streams)
set(STYLE_FILTER ${STYLE_FILTER},-readability/casting)
# set(STYLE_FILTER ${STYLE_FILTER},-build/include_order)
set(STYLE_FILTER ${STYLE_FILTER},-build/include_what_you_use)
kareth/helloworldopen2014
cpp/cmake/StyleConfiguration.cmake
CMake
apache-2.0
383
# Wikidetox Viz

This directory contains a [Google Cloud App Engine project](https://cloud.google.com/appengine/docs/flexible/) that visualizes contributions to Wikipedia. The goal is to help understand toxic contributions at scale. The models are far from perfect, which means comments are sometimes incorrectly selected; we do not recommend taking any automated action based on model scores.

The visualization works by interpreting diffs on Talk Pages into comments, and then scoring the comments using [Perspective API hosted models](https://github.com/conversationai/perspectiveapi/blob/master/api_reference.md#models). If a comment is above a certain threshold, the visualization allows Wikipedians to go to the [historical Wikipedia revision page](https://en.wikipedia.org/wiki/Help:Page_history) to help improve the conversation and/or raise awareness with the relevant admins.

This work is part of the [Study of Harassment and its Impact](https://meta.wikimedia.org/wiki/Research:Study_of_harassment_and_its_impact) and the [WikiDetox Project](https://meta.wikimedia.org/wiki/Research:Detox), and we hope it can help support Wikipedia's [anti-harassment guidelines](https://en.wikipedia.org/wiki/Wikipedia:How_to_deal_with_harassment).

[DEMO](https://wikidetox-viz.appspot.com/)

## Setup

To set up an instance you need a [Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) with the [Perspective API](http://perspectiveapi.com) enabled.

1. Make a directory called `config` in this directory.
2. Copy the `config_default_template.js` file to `config/default.js`.
3. Enter the `gcloudKey` and `API_KEY` fields with a path to a keyfile for Google Cloud access, and a Perspective API key respectively.
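For orientation, a filled-in `config/default.js` might look like the sketch below. Only the `gcloudKey` and `API_KEY` field names come from this README; the exact module shape and any other fields are assumptions, so defer to `config_default_template.js` for the authoritative structure.

```javascript
// Hypothetical sketch of config/default.js -- the overall shape is an
// assumption; only gcloudKey and API_KEY are named by this README.
module.exports = {
  // Path to a Google Cloud service-account keyfile for gcloud access.
  gcloudKey: '/path/to/gcloud-keyfile.json',
  // Perspective API key from the Google Cloud console.
  API_KEY: 'YOUR_PERSPECTIVE_API_KEY',
};
```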
## Development info

Compile the ts code with `tsc --outdir build/server/ -p tsconfig.server.json -w`

Run local server with nodemon `nodemon build/server/index.js config/dev.json`

Test streaming `node ./wikiDataCollector/tasks.js streamData 2017-06-01:00:00:00 2017-06-01:00:10:00`
conversationai/wikidetox
viz/README.md
Markdown
apache-2.0
2,050
var class_tri_1_1_graphic_1_1_model =
[
    [ "Model", "class_tri_1_1_graphic_1_1_model.html#a57f8abc051d104b925e6a2cbb253b56c", null ],
    [ "GetNormal", "class_tri_1_1_graphic_1_1_model.html#a086a31e07b32371a5d7d6c4629853cb9", null ],
    [ "GetUV", "class_tri_1_1_graphic_1_1_model.html#a64da67596d9650b41ce0dd60f90eb7bb", null ],
    [ "GetVertex", "class_tri_1_1_graphic_1_1_model.html#af64f04fc0559ceb51061f8dfed0eb4e6", null ],
    [ "Render", "class_tri_1_1_graphic_1_1_model.html#aa190ca9e2258a28ea12a36d7fb25b0de", null ]
];
TriantEntertainment/TritonEngine
docs/html/class_tri_1_1_graphic_1_1_model.js
JavaScript
apache-2.0
535
/*
 * Copyright © 2013-2018 camunda services GmbH and various authors (info@camunda.com)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.camunda.bpm.engine.rest.dto.history;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.camunda.bpm.engine.ProcessEngineException;
import org.camunda.bpm.engine.history.HistoricDecisionInputInstance;
import org.camunda.bpm.engine.history.HistoricDecisionInstance;
import org.camunda.bpm.engine.history.HistoricDecisionOutputInstance;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonInclude.Include;

public class HistoricDecisionInstanceDto {

  protected String id;
  protected String decisionDefinitionId;
  protected String decisionDefinitionKey;
  protected String decisionDefinitionName;
  protected Date evaluationTime;
  protected Date removalTime;
  protected String processDefinitionId;
  protected String processDefinitionKey;
  protected String processInstanceId;
  protected String rootProcessInstanceId;
  protected String caseDefinitionId;
  protected String caseDefinitionKey;
  protected String caseInstanceId;
  protected String activityId;
  protected String activityInstanceId;
  protected String userId;
  protected List<HistoricDecisionInputInstanceDto> inputs;
  protected List<HistoricDecisionOutputInstanceDto> outputs;
  protected Double collectResultValue;
  protected String rootDecisionInstanceId;
  protected String decisionRequirementsDefinitionId;
  protected String decisionRequirementsDefinitionKey;
  protected String tenantId;

  public String getId() {
    return id;
  }

  public String getDecisionDefinitionId() {
    return decisionDefinitionId;
  }

  public String getDecisionDefinitionKey() {
    return decisionDefinitionKey;
  }

  public String getDecisionDefinitionName() {
    return decisionDefinitionName;
  }

  public Date getEvaluationTime() {
    return evaluationTime;
  }

  public String getProcessDefinitionId() {
    return processDefinitionId;
  }

  public String getProcessDefinitionKey() {
    return processDefinitionKey;
  }

  public String getProcessInstanceId() {
    return processInstanceId;
  }

  public String getCaseDefinitionId() {
    return caseDefinitionId;
  }

  public String getCaseDefinitionKey() {
    return caseDefinitionKey;
  }

  public String getCaseInstanceId() {
    return caseInstanceId;
  }

  public String getActivityId() {
    return activityId;
  }

  public String getActivityInstanceId() {
    return activityInstanceId;
  }

  public String getUserId() {
    return userId;
  }

  @JsonInclude(Include.NON_NULL)
  public List<HistoricDecisionInputInstanceDto> getInputs() {
    return inputs;
  }

  @JsonInclude(Include.NON_NULL)
  public List<HistoricDecisionOutputInstanceDto> getOutputs() {
    return outputs;
  }

  public Double getCollectResultValue() {
    return collectResultValue;
  }

  public String getRootDecisionInstanceId() {
    return rootDecisionInstanceId;
  }

  public String getTenantId() {
    return tenantId;
  }

  public String getDecisionRequirementsDefinitionId() {
    return decisionRequirementsDefinitionId;
  }

  public String getDecisionRequirementsDefinitionKey() {
    return decisionRequirementsDefinitionKey;
  }

  public Date getRemovalTime() {
    return removalTime;
  }

  public void setRemovalTime(Date removalTime) {
    this.removalTime = removalTime;
  }

  public String getRootProcessInstanceId() {
    return rootProcessInstanceId;
  }

  public void setRootProcessInstanceId(String rootProcessInstanceId) {
    this.rootProcessInstanceId = rootProcessInstanceId;
  }

  public static HistoricDecisionInstanceDto fromHistoricDecisionInstance(HistoricDecisionInstance historicDecisionInstance) {
    HistoricDecisionInstanceDto dto = new HistoricDecisionInstanceDto();

    dto.id = historicDecisionInstance.getId();
    dto.decisionDefinitionId = historicDecisionInstance.getDecisionDefinitionId();
    dto.decisionDefinitionKey = historicDecisionInstance.getDecisionDefinitionKey();
    dto.decisionDefinitionName = historicDecisionInstance.getDecisionDefinitionName();
    dto.evaluationTime = historicDecisionInstance.getEvaluationTime();
    dto.removalTime = historicDecisionInstance.getRemovalTime();
    dto.processDefinitionId = historicDecisionInstance.getProcessDefinitionId();
    dto.processDefinitionKey = historicDecisionInstance.getProcessDefinitionKey();
    dto.processInstanceId = historicDecisionInstance.getProcessInstanceId();
    dto.caseDefinitionId = historicDecisionInstance.getCaseDefinitionId();
    dto.caseDefinitionKey = historicDecisionInstance.getCaseDefinitionKey();
    dto.caseInstanceId = historicDecisionInstance.getCaseInstanceId();
    dto.activityId = historicDecisionInstance.getActivityId();
    dto.activityInstanceId = historicDecisionInstance.getActivityInstanceId();
    dto.userId = historicDecisionInstance.getUserId();
    dto.collectResultValue = historicDecisionInstance.getCollectResultValue();
    dto.rootDecisionInstanceId = historicDecisionInstance.getRootDecisionInstanceId();
    dto.rootProcessInstanceId = historicDecisionInstance.getRootProcessInstanceId();
    dto.decisionRequirementsDefinitionId = historicDecisionInstance.getDecisionRequirementsDefinitionId();
    dto.decisionRequirementsDefinitionKey = historicDecisionInstance.getDecisionRequirementsDefinitionKey();
    dto.tenantId = historicDecisionInstance.getTenantId();

    try {
      List<HistoricDecisionInputInstanceDto> inputs = new ArrayList<HistoricDecisionInputInstanceDto>();
      for (HistoricDecisionInputInstance input : historicDecisionInstance.getInputs()) {
        HistoricDecisionInputInstanceDto inputDto = HistoricDecisionInputInstanceDto.fromHistoricDecisionInputInstance(input);
        inputs.add(inputDto);
      }
      dto.inputs = inputs;
    } catch (ProcessEngineException e) {
      // no inputs fetched
    }

    try {
      List<HistoricDecisionOutputInstanceDto> outputs = new ArrayList<HistoricDecisionOutputInstanceDto>();
      for (HistoricDecisionOutputInstance output : historicDecisionInstance.getOutputs()) {
        HistoricDecisionOutputInstanceDto outputDto = HistoricDecisionOutputInstanceDto.fromHistoricDecisionOutputInstance(output);
        outputs.add(outputDto);
      }
      dto.outputs = outputs;
    } catch (ProcessEngineException e) {
      // no outputs fetched
    }

    return dto;
  }
}
xasx/camunda-bpm-platform
engine-rest/engine-rest/src/main/java/org/camunda/bpm/engine/rest/dto/history/HistoricDecisionInstanceDto.java
Java
apache-2.0
6,991
package io.github.u2ware.browser.demo.onetomany.mail;

import java.io.IOException;

import javax.transaction.Transactional;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;

import com.fasterxml.jackson.databind.ObjectMapper;

import io.github.u2ware.browser.demo.DemoApplication;
import io.github.u2ware.browser.demo.onetomany.mail.AttachedFile;
import io.github.u2ware.browser.demo.onetomany.mail.MyMail;
import io.github.u2ware.browser.demo.onetomany.mail.MyMailRepository;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = DemoApplication.class)
@WebAppConfiguration
@Transactional
@Rollback(false)
public class MyMailRepositoryTest {

    private Log logger = LogFactory.getLog(getClass());

    @Autowired
    private MyMailRepository mailRepository;

    private ObjectMapper mapper = new ObjectMapper();

    @Test
    public void test1() throws IOException {
        AttachedFile f1 = new AttachedFile();
        f1.path = "f11";
        AttachedFile f2 = new AttachedFile();
        f2.path = "f12";

        MyMail m = new MyMail();
        m.mailFrom = "from1";
        m.mailTo = "to1";
        m.mailBody = "body1";
        m.addAttachedFile(f1);
        m.addAttachedFile(f2);

        logger.debug(mapper.writeValueAsString(m));
        mailRepository.save(m);

        Iterable<MyMail> it = mailRepository.findAll();
        for (MyMail i : it) {
            logger.debug("test1 ## " + i);
            logger.debug(mapper.writeValueAsString(i));
        }
    }

    @Test
    public void test2() throws IOException {
        String content = "{\"mailFrom\":\"from2\",\"mailTo\":\"to2\",\"mailBody\":\"body2\",\"attachedFiles\":[{\"path\":\"f21\"},{\"path\":\"f22\"}]}";
        MyMail m = mapper.readValue(content, MyMail.class);
        mailRepository.save(m);

        Iterable<MyMail> it = mailRepository.findAll();
        for (MyMail i : it) {
            logger.debug("test2 ## " + i);
        }
    }

    @Test
    public void test3() throws IOException {
        AttachedFile f3 = new AttachedFile();
        f3.path = "f31";
        AttachedFile f4 = new AttachedFile();
        f4.path = "f32";

        MyMail m = mailRepository.findOne(new Long(1));
        m.mailBody = "body3";
        m.removeAttechementFiles();
        m.addAttachedFile(f3);
        m.addAttachedFile(f4);

        logger.debug(mapper.writeValueAsString(m));
        mailRepository.save(m);

        Iterable<MyMail> it = mailRepository.findAll();
        for (MyMail i : it) {
            logger.debug("test3 ## " + i);
        }
    }

    @Test
    public void test4() throws IOException {
        MyMail m = mailRepository.findOne(new Long(2));
        mailRepository.delete(m);

        Iterable<MyMail> it = mailRepository.findAll();
        for (MyMail i : it) {
            logger.debug("test4 ## " + i);
        }
    }
}
u2ware/spring-data-rest-u2ware
spring-data-rest-u2ware-browser-demo/src/test/java/io/github/u2ware/browser/demo/onetomany/mail/MyMailRepositoryTest.java
Java
apache-2.0
3,097
package com.fireflysource.net.tcp.secure.wildfly;

import com.fireflysource.net.tcp.secure.utils.SecureUtils;

import javax.net.ssl.SSLContext;
import java.io.InputStream;

import static com.fireflysource.net.tcp.secure.utils.SecureUtils.*;

public class SelfSignedCertificateWildflySSLContextFactory extends AbstractWildflySecureEngineFactory {

    private SSLContext sslContext;

    public SelfSignedCertificateWildflySSLContextFactory() {
        try (InputStream in = SecureUtils.getSelfSignedCertificate()) {
            sslContext = getSSLContext(in, SELF_SIGNED_KEY_STORE_PASSWORD, SELF_SIGNED_KEY_PASSWORD, SELF_SIGNED_KEY_STORE_TYPE);
        } catch (Throwable e) {
            LOG.error("get SSL context error", e);
        }
    }

    @Override
    public SSLContext getSSLContext() {
        return sslContext;
    }
}
hypercube1024/firefly
firefly-net/src/main/java/com/fireflysource/net/tcp/secure/wildfly/SelfSignedCertificateWildflySSLContextFactory.java
Java
apache-2.0
835
// Copyright (C) 2002 Charless C. Fowlkes <fowlkes@eecs.berkeley.edu> // Copyright (C) 2002 David R. Martin <dmartin@eecs.berkeley.edu> // // This program is free software; you can redistribute it and/or // modify it under the terms of the GNU General Public License as // published by the Free Software Foundation; either version 2 of the // License, or (at your option) any later version. // // This program is distributed in the hope that it will be useful, but // WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU // General Public License for more details. // // You should have received a copy of the GNU General Public License // along with this program; if not, write to the Free Software // Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA // 02111-1307, USA, or see http://www.gnu.org/copyleft/gpl.html. #include <iostream> #include <stdio.h> #include <math.h> #include <values.h> #include <assert.h> extern "C" { #include <jpeglib.h> #include <jerror.h> } #include "util.hh" #include "image.hh" #include "sort.hh" namespace Util { ////////////////////////////////////////////////////////////////////////////////////////////////////// // // read in a jpeg file into an ImageStack. // if the jpeg file is grayscale then the resulting ImageStack only has 1 layer // otherwise it has 3 layers corresponding to the RGB colorspace. 
// bool readJpegFile(const char *filename, ImageStack& im) { assert(filename != NULL); int bytesPerPixel; int height; int width; unsigned char* imbuf; bool isGray; FILE* file = fopen(filename, "r"); if (!file) { return (false); } jpeg_decompress_struct cinfo; jpeg_error_mgr jerr; cinfo.err = jpeg_std_error (&jerr); jpeg_create_decompress (&cinfo); jpeg_stdio_src (&cinfo, file); jpeg_read_header (&cinfo, (boolean) true); if (cinfo.out_color_space == JCS_GRAYSCALE) { isGray = true; cinfo.output_components = 1; bytesPerPixel = 1; } else { isGray = false; cinfo.out_color_space = JCS_RGB; cinfo.output_components = 3; bytesPerPixel = 3; } jpeg_calc_output_dimensions (&cinfo); jpeg_start_decompress (&cinfo); height = cinfo.output_height; width = cinfo.output_width; imbuf = new unsigned char[width * height * bytesPerPixel]; const int lineSize = width * bytesPerPixel; unsigned char* p = imbuf; while (cinfo.output_scanline < cinfo.output_height) { jpeg_read_scanlines (&cinfo, &p, 1); p += lineSize; } jpeg_finish_decompress (&cinfo); jpeg_destroy_decompress (&cinfo); fclose (file); //now fill in our image array data structure if (isGray) { im.resize(1,width,height); for (int i = 0; i < width * height; i++) { const float gray = (float) imbuf[i] / 255; const int y = i / width; const int x = i % width; im(0,x,y) = gray; // this fails on fp type errors: assert (gray >= 0 && gray <= 1); } } else { im.resize(3,width,height); for (int i = 0; i < width * height; i++) { const int y = i / width; const int x = i % width; float r = (float) imbuf[3 * i] / 255; float g = (float) imbuf[3 * i + 1] / 255; float b = (float) imbuf[3 * i + 2] / 255; if ((r < 0) || (r > 1)) { std::cerr << "r = " << r << std::endl; } if ((g < 0) || (g > 1)) { std::cerr << "g = " << g << std::endl; } if ((b < 0) || (b > 1)) { std::cerr << "b = " << r << std::endl; } assert (r >= 0 && r <= 1); assert (g >= 0 && g <= 1); assert (b >= 0 && b <= 1); im(RGB_R,x,y) = r; im(RGB_G,x,y) = g; im(RGB_B,x,y) = b; } } delete[] 
imbuf; return true; } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // used for writing out a single channel image with // the "jet" psuedo-color map // static int jetR (float v) { assert (v >= 0 && v <= 1); int i = (uint) rint (v * 255); return Util::minmax (0, (450 - 5 * abs (i - 196)), 255); } static int jetG (float v) { assert (v >= 0 && v <= 1); int i = (uint) rint (v * 255); return Util::minmax (0, (450 - 5 * abs (i - 128)), 255); } static int jetB (float v) { assert (v >= 0 && v <= 1); int i = (uint) rint (v * 255); return Util::minmax (0, (450 - 5 * abs (i - 64)), 255); } // // create a normalized version of this image. // first run thru and find the min and max values // and then create a new image whose values range // over [0.0,1.0]. the constant image is set to 0.0 // // used in the Jpeg reading and writing. // static void getNormalized(const Image& im, Image& normalized) { int width = im.size(1); int height = im.size(0); normalized.resize(width,height); float max = im(0,0); float min = im(0,0); for (int i = 0; i < width; i++) { for (int j = 0; j < height; j++) { float val = im(i,j); max = Util::max(max, val); min = Util::min(min, val); } } if ((max - min) > 0) { normalized = (im - min) / (max - min); } else { normalized.init(0); } } // // write out a grayscale image to a jpeg file. // if normalize=true, then the range of the image is adjusted to use the full scale // if jet=true then the image is written in pseudocolor rather than grayscale // bool writeJpegFile(const Image& im, const char *filespec, const bool normalize, const bool jet) { assert(filespec != NULL); //normalized version of this image Image normim; if (normalize) { getNormalized(im,normim); } else { normim = im; } struct jpeg_error_mgr jerr; struct jpeg_compress_struct cinfo; // Open the output file. FILE* file = Util::openOutputStrm (filespec); if (!file) { return (false); } // Set up the normal JPEG error handling. 
cinfo.err = jpeg_std_error (&jerr); // Init a JPEG compression object. jpeg_create_compress (&cinfo); // Specify source of data. jpeg_stdio_dest (&cinfo, file); // Specify compression parameters. cinfo.image_width = im.size(0); cinfo.image_height = im.size(1); cinfo.input_components = 3; cinfo.in_color_space = JCS_RGB; jpeg_set_defaults (&cinfo); // Start compression. jpeg_start_compress (&cinfo, TRUE); // Allocate space for one scanline. JSAMPLE* buf = new JSAMPLE[im.size(0) * 3]; // Write data one scanline at a time. for (int y = 0; y < im.size(1); y++) { for (int x = 0; x < im.size(0); x++) { float v = normim(x,y); assert (v >= 0 && v <= 1); if (jet) { buf[3*x + 0] = jetR(v); buf[3*x + 1] = jetG(v); buf[3*x + 2] = jetB(v); } else { int g = (int)rint(255 * v); buf[3*x + 0] = g; buf[3*x + 1] = g; buf[3*x + 2] = g; } } int c = jpeg_write_scanlines (&cinfo, &buf, 1); if (c != 1) { fclose (file); jpeg_destroy_compress (&cinfo); delete[]buf; return false; } } // Clean up. jpeg_finish_compress (&cinfo); fclose (file); jpeg_destroy_compress (&cinfo); delete[]buf; return true; } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // convert an RGB imagestack into an 1976 CIE L*a*b* imagestack. // void rgb2lab (const ImageStack& rgb, ImageStack& lab) { assert(rgb.size(0) == 3); const int w = rgb.size(1); const int h = rgb.size(2); lab.resize(3,w,h); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { // RGB const float R = rgb(RGB_R,i,j); const float G = rgb(RGB_G,i,j); const float B = rgb(RGB_B,i,j); assert (R >= 0 && R <= 1); assert (G >= 0 && G <= 1); assert (B >= 0 && B <= 1); // RGB --> XYZ const float X = 0.412453 * R + 0.357580 * G + 0.180423 * B; const float Y = 0.212671 * R + 0.715160 * G + 0.072169 * B; const float Z = 0.019334 * R + 0.119193 * G + 0.950227 * B; // XYZ of D65 reference white (R=G=B=1). 
const float Xn = 0.950456; const float Yn = 1.000000; const float Zn = 1.088754; // XYZ --> 1976 CIE L*a*b* const float rX = X / Xn; const float rY = Y / Yn; const float rZ = Z / Zn; const float thresh = 0.008856; #define f(t) (t > thresh) ? pow(t,1./3.) : (7.787 * t + 16./116.) const float fX = f(rX); const float fY = f(rY); const float fZ = f(rZ); #undef f const float L = (rY > thresh) ? 116. * pow (rY, 1. / 3.) - 16. : 903.3 * rY; const float a = 500. * (fX - fY); const float b = 200. * (fY - fZ); assert (L >= 0 && L <= 100); assert (a >= -120 && a <= 120); assert (b >= -120 && b <= 120); lab(LAB_L,i,j) = L; lab(LAB_A,i,j) = a; lab(LAB_B,i,j) = b; } } } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // normalize an lab image so that values lie in [0,1] // void labNormalize (ImageStack& lab) { for(int x = 0; x < lab.size(1); x++) { for(int y = 0; y < lab.size(2); y++) { float L = lab(LAB_L,x,y); float a = lab(LAB_A,x,y); float b = lab(LAB_B,x,y); const float minab = -73; const float maxab = 95; const float range = maxab - minab; L = L / 100.0; a = (a - minab) / range; b = (b - minab) / range; L = Util::minmax(0.0f, L, 1.0f); a = Util::minmax(0.0f, a, 1.0f); b = Util::minmax(0.0f, b, 1.0f); lab(LAB_L,x,y) = L; lab(LAB_A,x,y) = a; lab(LAB_B,x,y) = b; } } } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // create a translated version of this image where // the old image appears embedded in a new image of // size [newwidth x newheight]. undefined pixels // are filled in with value fill. 
// void getTranslated (const Image& im, const int xoffset, const int yoffset, const int newwidth, const int newheight, const float fill, Image& translated) { assert(newwidth >= 0); assert(newheight >= 0); int oldwidth = im.size(1); int oldheight = im.size(0); translated.resize(newwidth,newheight); translated.init(fill); for (int x = xoffset; x < oldwidth + xoffset; x++) { for (int y = yoffset; y < oldheight + yoffset; y++) { if ((y >= 0) && (y < newheight) && (x >= 0) && (x < newwidth)) { translated(x,y) = im(x-xoffset,y-yoffset); } } } } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // utility method to compute an antialiasing filter of // size 13 for the given downscaling factor // // used by getScaled // void createAntialiasFilter (const float scale, Image& filter) { Util::Array1D<float> wind(13); Util::Array1D<float> b(6); float sum = 0; filter.resize(1,13); filter.init(0); for (int i = 0; i < 13; i++) { wind(i) = 0.54 - 0.46 * cos (2 * M_PI * i / 11); } for (int i = 0; i < 6; i++) { b(i) = sin (scale * M_PI * (i + 0.5)) / (M_PI * (i + 0.5)); } for (int i = 0; i < 6; i++) { filter(1,i) = b(6 - i) * wind(i); filter(1,i + 6) = b(i) * wind(i + 6); sum = sum + filter(1,i) + filter(1,i + 6); } sum = fabs (sum); for (int i = 0; i < 12; i++) { filter(1,i) = filter(1,i) / sum; } } // // create a resized version of this image of size [newwidth x newheight] // if bilinear = true, use bilinear interpolation // otherwise use bicubic interpolation // void getScaled(const Image& im, const int newwidth, const int newheight, const bool bilinear, Image& scaled) { assert(newwidth >= 0); assert(newheight >= 0); //first compute the scale factor float oldheight = im.size(0); float oldwidth = im.size(1); float xscale = (float) newwidth / (float) oldwidth; float yscale = (float) newheight / (float) oldheight; //filter to prevent aliasing if necessary Image antialiased = im; if (xscale < 1) { Image filtx; Image filtered; 
createAntialiasFilter(xscale,filtx); getFiltered(antialiased,filtered,filtx); antialiased = filtered; } if (yscale < 1) { Image filty; Image filtered; createAntialiasFilter(yscale,filty); getFiltered(antialiased,filtered,filty); antialiased = filtered; } //build the affine matrix Matrix A(3,3); A.init(0); A(0,0) = xscale; A(1,1) = yscale; A(2,2) = 1.0; //transform the image getTransformed(antialiased,A,newwidth,newheight,0,0,bilinear,scaled); } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // create a rotated version of this image // using an appropriate affine transform. // if bilinear = true, use bilinear interpolation // otherwise use bicubic. // void getRotated(const Image& im, const float angle, const bool bilinear, Image& rotated) { //put theta in 0 - 2pi float theta = angle; while (theta < 0) theta = (theta + 2 * M_PI); while (theta > 2 * M_PI) theta = (theta - 2 * M_PI); //build the affine matrix Matrix A(3,3); A(0,0) = cos(theta); A(0,1) = sin(theta); A(0,2) = 0.0; A(1,0) = -sin(theta); A(1,1) = cos(theta); A(1,2) = 0.0; A(2,0) = 0.0; A(2,1) = 0.0; A(2,2) = 1.0; getTransformed(im, A, im.size(0), im.size(1), 0, 0, bilinear, rotated); } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // the cubic spline interpolation kernel // static float cubic_bspline (const float x) { float a, b, c, d; if ((x + 2.0) <= 0.0) { a = 0.0; } else { a = pow ((x + 2.0), 3.0); } if ((x + 1.0) <= 0.0) { b = 0.0; } else { b = pow ((x + 1.0), 3.0); } if (x <= 0) { c = 0.0; } else { c = pow (x, 3.0); } if ((x - 1.0) <= 0.0) { d = 0.0; } else { d = pow ((x - 1.0), 3.0); } return ((1.0 / 6.0) * (a - (4.0 * b) + (6.0 * c) - (4.0 * d))); } // // returns the inverse. 
only works for 3x3 matricies // static void getInverse(const Matrix mat, Matrix& inv) { assert((mat.size(0) != 3) || (mat.size(1) != 3)); inv.resize(3,3); float denom = mat(0, 0) * mat(1, 1) * mat(2, 2) - mat(0, 0) * mat(1, 2) * mat(2, 1) - mat(1, 0) * mat(0, 1) * mat(2, 2) + mat(1, 0) * mat(0, 2) * mat(2, 1) + mat(2, 0) * mat(0, 1) * mat(1, 2) - mat(2, 0) * mat(0, 2) * mat(1, 1); inv(0,0) = ( mat(1, 1) * mat(2, 2) - mat(1, 2) * mat(2, 1)) / denom; inv(0,1) = (-mat(0, 1) * mat(2, 2) + mat(0, 2) * mat(2, 1)) / denom; inv(0,2) = ( mat(0, 1) * mat(1, 2) - mat(0, 2) * mat(1, 1)) / denom; inv(1,0) = (-mat(1, 0) * mat(2, 2) + mat(1, 2) * mat(2, 0)) / denom; inv(1,1) = ( mat(0, 0) * mat(2, 2) - mat(0, 2) * mat(2, 0)) / denom; inv(1,2) = (-mat(0, 0) * mat(1, 2) + mat(0, 2) * mat(1, 0)) / denom; inv(2,0) = ( mat(1, 0) * mat(2, 1) - mat(1, 1) * mat(2, 0)) / denom; inv(2,1) = (-mat(0, 0) * mat(2, 1) + mat(0, 1) * mat(2, 0)) / denom; inv(2,2) = ( mat(0, 0) * mat(1, 1) - mat(0, 1) * mat(1, 0)) / denom; } // // returns a new image which is an affine // transformed version of this image. // newimage = A*image. 
the new image // is of size (height, width) such that // the corners of the old image are transformed // to locations inside the new image // // if bilinear is TRUE then use bilinear interpolation // otherwise use bicubic B-spline interpolation // void getTransformed (const Image& im, const Matrix& A, const int width, const int height, const int xoffset, const int yoffset, const bool bilinear, Matrix& transformed) { assert(width >= 0); assert(height >= 0); float oldwidth = (float)im.size(1); float oldheight = (float)im.size(0); Matrix B; getInverse(A,B); //allocate the result transformed.resize(width,height); //transform the image for (float x = 0; x < width; x++) { for (float y = 0; y < height; y++) { //compute the coordinates in the original image plane float u = (x + xoffset) * B(0, 0) + (y + yoffset) * B(0, 1) + B(0, 2); float v = (x + xoffset) * B(1, 0) + (y + yoffset) * B(1, 1) + B(1, 2); //if it's outside the bounds of the //source image, fill with zeros if ((u >= oldwidth) || (u < 0.0) || (v >= oldheight) || (v < 0.0)) { transformed((int)x, (int)y) = 0.0; } else { //do bilinear or bicubic interpolation as required if (bilinear == true) { float u1 = floor (u); float u2 = u1 + 1; float v1 = floor (v); float v2 = v1 + 1; float du = u - u1; float dv = v - v1; u1 = Util::max (0.0f, u1); u2 = Util::min (oldwidth - 1, u2); v1 = Util::max (0.0f, v1); v2 = Util::min (oldheight - 1, v2); float val11 = im((int)u1, (int)v1); float val12 = im((int)u1, (int)v2); float val21 = im((int)u2, (int)v1); float val22 = im((int)u2, (int)v2); float val = (1 - dv) * (1 - du) * val11 + (1 - dv) * du * val12 + dv * (1 - du) * val21 + dv * du * val22; transformed((int)x, (int)y) = val; } else { float a = u - floor (u); float b = v - floor (v); float val = 0.0; for (int m = -1; m < 3; m++) { float r1 = cubic_bspline ((float) m - a); for (int n = -1; n < 3; n++) { float r2 = cubic_bspline (-1.0 * ((float) n - b)); float u1 = floor (u) + m; float v1 = floor (v) + n; u1 = Util::min 
(oldwidth - 1, Util::max (0.0f, u1)); v1 = Util::min (oldheight - 1, Util::max (0.0f, v1)); val += im((int)u1,(int)v1) * r1 * r2; } } transformed((int)x,(int)y) = val; } } } } } ////////////////////////////////////////////////////////////////////////////////////////////////////// // // filters the image via convolution with the given // kernel and returns the resulting image. kernel // must have odd dimensions. // void getFiltered (const Image& im, const Image& kernel, Image& filtered) { // image and kernel dimensions const int iwidth = im.size(0); const int iheight = im.size(1); const int kwidth = kernel.size(0); const int kheight = kernel.size(1); //output is same size as input filtered.resize(iwidth,iheight); filtered.init(0); // the kernel must not be larger than the image assert (kwidth <= iwidth); assert (kheight <= iheight); // the kernel must be odd in each dimension so it can be centered // over a pixel assert ((kwidth % 2) == 1); assert ((kheight % 2) == 1); // radius of kernel in each dimension; also the coordinates of the // kernel's center pixel const int xr = kwidth / 2; const int yr = kheight / 2; //flip the kernel left-right and up-down Util::Array2D<float> kern(kwidth,kheight); kern.init(0); for (int x = 0; x < kwidth; x++) { const int xx = kwidth - 1 - x; for (int y = 0; y < kheight; y++) { const int yy = kheight - 1 - y; kern(xx,yy) = kernel(x,y); } } // padded image dimensions const int pwidth = iwidth + 2 * xr; const int pheight = iheight + 2 * yr; //create image with reflective padding Util::Array2D<float> pim(pwidth,pheight); // top left for (int y = 0; y < yr; y++) { const int py = yr - 1 - y; for (int x = 0; x < xr; x++) { const int px = xr - 1 - x; pim(px,py) = im(x,y); } } // top right for (int y = 0; y < yr; y++) { const int py = yr - 1 - y; for (int x = 0; x < xr; x++) { const int xs = iwidth - 1 - x; const int xd = xr + iwidth + x; pim(xd,py) = im(xs,y); } } // bottom left for (int y = 0; y < yr; y++) { const int ys = iheight - 1 - y; 
const int yd = yr + iheight + y; for (int x = 0; x < xr; x++) { const int px = xr - 1 - x; pim(px,yd) = im(x,ys); } } // bottom right for (int y = 0; y < yr; y++) { const int ys = iheight - 1 - y; const int yd = yr + iheight + y; for (int x = 0; x < xr; x++) { const int xs = iwidth - 1 - x; const int xd = xr + iwidth + x; pim(xd,yd) = im(xs,ys); } } // top for (int y = 0; y < yr; y++) { const int py = yr - 1 - y; for (int x = 0; x < iwidth; x++) { const int px = x + xr; pim(px,py) = im(x,y); } } // bottom for (int y = 0; y < yr; y++) { const int ys = iheight - 1 - y; const int yd = yr + iheight + y; for (int x = 0; x < iwidth; x++) { const int px = x + xr; pim(px,yd) = im(x,ys); } } // left for (int y = 0; y < iheight; y++) { const int py = yr + y; for (int x = 0; x < xr; x++) { const int px = xr - 1 - x; pim(px,py) = im(x,y); } } // right for (int y = 0; y < iheight; y++) { const int py = yr + y; for (int x = 0; x < xr; x++) { const int xs = iwidth - 1 - x; const int xd = xr + iwidth + x; pim(xd,py) = im(xs,y); } } // center for (int y = 0; y < iheight; y++) { const int py = yr + y; for (int x = 0; x < iwidth; x++) { const int px = xr + x; pim(px,py) = im(x,y); } } // use direct access to underlying arrays for speed float *p_pim = pim.data(); float *p_kern = kern.data(); float *p_filtered = filtered.data(); // do the convolution // interchange y and ky loops, and unroll ky loop // gets 371 MFLOPS (including all overhead) on 700MHz PIII (53% of peak) for (int x = 0; x < iwidth; x++) { for (int kx = 0; kx < kwidth; kx++) { const int pcol = (x + kx) * pheight; const int kcol = kx * kheight; int ky = 0; while (ky < (kheight & ~0x7)) { const float k0 = p_kern[kcol + ky + 0]; const float k1 = p_kern[kcol + ky + 1]; const float k2 = p_kern[kcol + ky + 2]; const float k3 = p_kern[kcol + ky + 3]; const float k4 = p_kern[kcol + ky + 4]; const float k5 = p_kern[kcol + ky + 5]; const float k6 = p_kern[kcol + ky + 6]; const float k7 = p_kern[kcol + ky + 7]; float in0 = 
p_pim[pcol + 0 + ky + 0];
                float in1 = p_pim[pcol + 0 + ky + 1];
                float in2 = p_pim[pcol + 0 + ky + 2];
                float in3 = p_pim[pcol + 0 + ky + 3];
                float in4 = p_pim[pcol + 0 + ky + 4];
                float in5 = p_pim[pcol + 0 + ky + 5];
                float in6 = p_pim[pcol + 0 + ky + 6];
                for (int y = 0; y < iheight; y++) {
                    const float in7 = p_pim[pcol + y + ky + 7];
                    p_filtered[x*iheight + y] +=
                        k0*in0 + k1*in1 + k2*in2 + k3*in3 +
                        k4*in4 + k5*in5 + k6*in6 + k7*in7;
                    in0 = in1; in1 = in2; in2 = in3; in3 = in4;
                    in4 = in5; in5 = in6; in6 = in7;
                }
                ky += 8;
            }
            while (ky < (kheight & ~0x3)) {
                const float k0 = p_kern[kcol + ky + 0];
                const float k1 = p_kern[kcol + ky + 1];
                const float k2 = p_kern[kcol + ky + 2];
                const float k3 = p_kern[kcol + ky + 3];
                float in0 = p_pim[pcol + 0 + ky + 0];
                float in1 = p_pim[pcol + 0 + ky + 1];
                float in2 = p_pim[pcol + 0 + ky + 2];
                for (int y = 0; y < iheight; y++) {
                    const float in3 = p_pim[pcol + y + ky + 3];
                    p_filtered[x*iheight + y] += k0*in0 + k1*in1 + k2*in2 + k3*in3;
                    in0 = in1; in1 = in2; in2 = in3;
                }
                ky += 4;
            }
            while (ky < (kheight & ~0x1)) {
                const float k0 = p_kern[kcol + ky + 0];
                const float k1 = p_kern[kcol + ky + 1];
                float in0 = p_pim[pcol + 0 + ky + 0];
                for (int y = 0; y < iheight; y++) {
                    const float in1 = p_pim[pcol + y + ky + 1];
                    p_filtered[x*iheight + y] += k0*in0 + k1*in1;
                    in0 = in1;
                }
                ky += 2;
            }
            while (ky < kheight) {
                const float k0 = p_kern[kcol + ky];
                for (int y = 0; y < iheight; y++) {
                    p_filtered[x*iheight + y] += k0*p_pim[pcol + y + ky];
                }
                ky += 1;
            }
            assert (ky == kheight);
        }
    }
}

//////////////////////////////////////////////////////////////////////////////////////////////////////
//
// filter at the given radius.
// the resulting value at a pixel is the n'th-order value
// in a circular window centered at the pixel
// 0 <= order <= 1, 0 is smallest value, 1 is largest.
// Call with 0.5 to get median filtering.
//
void getPctFiltered(const Image& im, const float radius, const float order, Image& filtered) {
    assert (order >= 0 && order <= 1);
    if (radius < 1) {
        filtered = im;
        return;
    }

    //allocate window and filtered image
    const int windowRadius = (int) ceil (radius);
    const int windowDiam = 2 * windowRadius + 1;
    const int windowPixels = windowDiam * windowDiam;
    Util::Array1D<float> values(windowPixels);
    const int iwidth = im.size(0);
    const int iheight = im.size(1);
    filtered.resize(iwidth,iheight);

    //loop over the image
    for (int x = 0; x < iwidth; x++) {
        for (int y = 0; y < iheight; y++) {
            //copy values out of the window
            int count = 0;
            for (int u = -windowRadius; u <= windowRadius; u++) {
                for (int v = -windowRadius; v <= windowRadius; v++) {
                    if ((u * u + v * v) > radius * radius) { continue; }
                    int yi = y + u;
                    int xi = x + v;
                    if (yi < 0 || yi >= iheight) { continue; }
                    if (xi < 0 || xi >= iwidth) { continue; }
                    assert (count < windowPixels);
                    values(count++) = im(xi,yi);
                }
            }
            //sort the values in ascending order
            assert (count > 0);
            Util::sort(values.data(), count);
            assert(values(0) <= values(count - 1));
            //pick out percentile value
            int index = (int) rint (order * (count - 1));
            assert (index >= 0 && index < count);
            float pctVal = values(index);
            assert (pctVal >= values(0));
            assert (pctVal <= values(count - 1));
            filtered(x,y) = pctVal;
        }
    }
}

//////////////////////////////////////////////////////////////////////////////////////////////////////
//
// filter at the given radius.
// the resulting value at a pixel is the maximum value
// in a circular window centered at the pixel
//
void getMaxFiltered (const Image& im, const float radius, Image& filtered) {
    if (radius < 1) {
        filtered = im;
        return;
    }

    const int iwidth = im.size(0);
    const int iheight = im.size(1);
    const int windowRadius = (int)ceil(radius);
    filtered.resize(iwidth,iheight);

    //loop over the image
    for (int x = 0; x < iwidth; x++) {
        for (int y = 0; y < iheight; y++) {
            //extract max from window
            float maxVal = im(x,y);
            for (int u = -windowRadius; u <= windowRadius; u++) {
                for (int v = -windowRadius; v <= windowRadius; v++) {
                    if ((u * u + v * v) > radius * radius) { continue; }
                    int xi = x + u;
                    int yi = y + v;
                    if (yi < 0 || yi >= iheight) { continue; }
                    if (xi < 0 || xi >= iwidth) { continue; }
                    const float val = im(xi,yi);
                    maxVal = Util::max(val,maxVal);
                }
            }
            // save max
            filtered(x,y) = maxVal;
        }
    }
}

} //namespace Util
batra-mlp-lab/divmbest
pascalseg/external_src/cpmc_release1/external_code/globalPb/BSE-1.2/util/image.cc
C++
apache-2.0
31,940
<!doctype html PUBLIC "-//W3C//DTD html 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv='Content-Type' content='text/html; charset=utf-8'> <meta http-equiv='X-UA-Compatible' content='IE=emulateIE7' /> <title>Coverage for /Library/Python/2.7/site-packages/_pytest/assertion/rewrite: 36%</title> <link rel='stylesheet' href='style.css' type='text/css'> <script type='text/javascript' src='jquery.min.js'></script> <script type='text/javascript' src='jquery.hotkeys.js'></script> <script type='text/javascript' src='jquery.isonscreen.js'></script> <script type='text/javascript' src='coverage_html.js'></script> <script type='text/javascript' charset='utf-8'> jQuery(document).ready(coverage.pyfile_ready); </script> </head> <body id='pyfile'> <div id='header'> <div class='content'> <h1>Coverage for <b>/Library/Python/2.7/site-packages/_pytest/assertion/rewrite</b> : <span class='pc_cov'>36%</span> </h1> <img id='keyboard_icon' src='keybd_closed.png'> <h2 class='stats'> 452 statements &nbsp; <span class='run hide_run shortkey_r button_toggle_run'>163 run</span> <span class='mis shortkey_m button_toggle_mis'>289 missing</span> <span class='exc shortkey_x button_toggle_exc'>0 excluded</span> </h2> </div> </div> <div class='help_panel'> <img id='panel_icon' src='keybd_open.png'> <p class='legend'>Hot-keys on this page</p> <div> <p class='keyhelp'> <span class='key'>r</span> <span class='key'>m</span> <span class='key'>x</span> <span class='key'>p</span> &nbsp; toggle line displays </p> <p class='keyhelp'> <span class='key'>j</span> <span class='key'>k</span> &nbsp; next/prev highlighted chunk </p> <p class='keyhelp'> <span class='key'>0</span> &nbsp; (zero) top of page </p> <p class='keyhelp'> <span class='key'>1</span> &nbsp; (one) first highlighted chunk </p> </div> </div> <div id='source'> <table cellspacing='0' cellpadding='0'> <tr> <td class='linenos' valign='top'> <p id='n1' class='pln'><a href='#n1'>1</a></p> <p id='n2' class='pln'><a 
href='#n2'>2</a></p> <p id='n3' class='stm run hide_run'><a href='#n3'>3</a></p> <p id='n4' class='stm run hide_run'><a href='#n4'>4</a></p> <p id='n5' class='stm run hide_run'><a href='#n5'>5</a></p> <p id='n6' class='stm run hide_run'><a href='#n6'>6</a></p> <p id='n7' class='stm run hide_run'><a href='#n7'>7</a></p> <p id='n8' class='stm run hide_run'><a href='#n8'>8</a></p> <p id='n9' class='stm run hide_run'><a href='#n9'>9</a></p> <p id='n10' class='stm run hide_run'><a href='#n10'>10</a></p> <p id='n11' class='stm run hide_run'><a href='#n11'>11</a></p> <p id='n12' class='stm run hide_run'><a href='#n12'>12</a></p> <p id='n13' class='pln'><a href='#n13'>13</a></p> <p id='n14' class='stm run hide_run'><a href='#n14'>14</a></p> <p id='n15' class='stm run hide_run'><a href='#n15'>15</a></p> <p id='n16' class='pln'><a href='#n16'>16</a></p> <p id='n17' class='pln'><a href='#n17'>17</a></p> <p id='n18' class='pln'><a href='#n18'>18</a></p> <p id='n19' class='stm run hide_run'><a href='#n19'>19</a></p> <p id='n20' class='stm mis'><a href='#n20'>20</a></p> <p id='n21' class='pln'><a href='#n21'>21</a></p> <p id='n22' class='stm run hide_run'><a href='#n22'>22</a></p> <p id='n23' class='stm mis'><a href='#n23'>23</a></p> <p id='n24' class='stm run hide_run'><a href='#n24'>24</a></p> <p id='n25' class='stm mis'><a href='#n25'>25</a></p> <p id='n26' class='pln'><a href='#n26'>26</a></p> <p id='n27' class='stm run hide_run'><a href='#n27'>27</a></p> <p id='n28' class='stm run hide_run'><a href='#n28'>28</a></p> <p id='n29' class='stm run hide_run'><a href='#n29'>29</a></p> <p id='n30' class='stm run hide_run'><a href='#n30'>30</a></p> <p id='n31' class='pln'><a href='#n31'>31</a></p> <p id='n32' class='stm run hide_run'><a href='#n32'>32</a></p> <p id='n33' class='stm run hide_run'><a href='#n33'>33</a></p> <p id='n34' class='pln'><a href='#n34'>34</a></p> <p id='n35' class='stm run hide_run'><a href='#n35'>35</a></p> <p id='n36' class='stm run hide_run'><a 
href='#n36'>36</a></p> <p id='n37' class='pln'><a href='#n37'>37</a></p> <p id='n38' class='stm run hide_run'><a href='#n38'>38</a></p> <p id='n39' class='pln'><a href='#n39'>39</a></p> <p id='n40' class='pln'><a href='#n40'>40</a></p> <p id='n41' class='stm run hide_run'><a href='#n41'>41</a></p> <p id='n42' class='stm run hide_run'><a href='#n42'>42</a></p> <p id='n43' class='stm run hide_run'><a href='#n43'>43</a></p> <p id='n44' class='stm run hide_run'><a href='#n44'>44</a></p> <p id='n45' class='pln'><a href='#n45'>45</a></p> <p id='n46' class='stm run hide_run'><a href='#n46'>46</a></p> <p id='n47' class='stm run hide_run'><a href='#n47'>47</a></p> <p id='n48' class='stm run hide_run'><a href='#n48'>48</a></p> <p id='n49' class='pln'><a href='#n49'>49</a></p> <p id='n50' class='stm run hide_run'><a href='#n50'>50</a></p> <p id='n51' class='stm run hide_run'><a href='#n51'>51</a></p> <p id='n52' class='stm run hide_run'><a href='#n52'>52</a></p> <p id='n53' class='stm run hide_run'><a href='#n53'>53</a></p> <p id='n54' class='stm run hide_run'><a href='#n54'>54</a></p> <p id='n55' class='stm run hide_run'><a href='#n55'>55</a></p> <p id='n56' class='stm run hide_run'><a href='#n56'>56</a></p> <p id='n57' class='stm run hide_run'><a href='#n57'>57</a></p> <p id='n58' class='stm run hide_run'><a href='#n58'>58</a></p> <p id='n59' class='stm run hide_run'><a href='#n59'>59</a></p> <p id='n60' class='pln'><a href='#n60'>60</a></p> <p id='n61' class='pln'><a href='#n61'>61</a></p> <p id='n62' class='stm run hide_run'><a href='#n62'>62</a></p> <p id='n63' class='stm run hide_run'><a href='#n63'>63</a></p> <p id='n64' class='stm run hide_run'><a href='#n64'>64</a></p> <p id='n65' class='stm run hide_run'><a href='#n65'>65</a></p> <p id='n66' class='stm run hide_run'><a href='#n66'>66</a></p> <p id='n67' class='stm run hide_run'><a href='#n67'>67</a></p> <p id='n68' class='stm run hide_run'><a href='#n68'>68</a></p> <p id='n69' class='stm run hide_run'><a 
href='#n69'>69</a></p> <p id='n70' class='stm run hide_run'><a href='#n70'>70</a></p> <p id='n71' class='stm run hide_run'><a href='#n71'>71</a></p> <p id='n72' class='stm run hide_run'><a href='#n72'>72</a></p> <p id='n73' class='stm run hide_run'><a href='#n73'>73</a></p> <p id='n74' class='stm mis'><a href='#n74'>74</a></p> <p id='n75' class='stm mis'><a href='#n75'>75</a></p> <p id='n76' class='pln'><a href='#n76'>76</a></p> <p id='n77' class='stm mis'><a href='#n77'>77</a></p> <p id='n78' class='stm run hide_run'><a href='#n78'>78</a></p> <p id='n79' class='pln'><a href='#n79'>79</a></p> <p id='n80' class='stm run hide_run'><a href='#n80'>80</a></p> <p id='n81' class='pln'><a href='#n81'>81</a></p> <p id='n82' class='stm run hide_run'><a href='#n82'>82</a></p> <p id='n83' class='stm run hide_run'><a href='#n83'>83</a></p> <p id='n84' class='pln'><a href='#n84'>84</a></p> <p id='n85' class='stm run hide_run'><a href='#n85'>85</a></p> <p id='n86' class='pln'><a href='#n86'>86</a></p> <p id='n87' class='pln'><a href='#n87'>87</a></p> <p id='n88' class='stm run hide_run'><a href='#n88'>88</a></p> <p id='n89' class='stm run hide_run'><a href='#n89'>89</a></p> <p id='n90' class='stm run hide_run'><a href='#n90'>90</a></p> <p id='n91' class='stm run hide_run'><a href='#n91'>91</a></p> <p id='n92' class='stm run hide_run'><a href='#n92'>92</a></p> <p id='n93' class='stm run hide_run'><a href='#n93'>93</a></p> <p id='n94' class='pln'><a href='#n94'>94</a></p> <p id='n95' class='stm run hide_run'><a href='#n95'>95</a></p> <p id='n96' class='pln'><a href='#n96'>96</a></p> <p id='n97' class='stm run hide_run'><a href='#n97'>97</a></p> <p id='n98' class='pln'><a href='#n98'>98</a></p> <p id='n99' class='stm mis'><a href='#n99'>99</a></p> <p id='n100' class='pln'><a href='#n100'>100</a></p> <p id='n101' class='pln'><a href='#n101'>101</a></p> <p id='n102' class='pln'><a href='#n102'>102</a></p> <p id='n103' class='pln'><a href='#n103'>103</a></p> <p id='n104' class='pln'><a 
href='#n104'>104</a></p> <p id='n105' class='pln'><a href='#n105'>105</a></p> <p id='n106' class='pln'><a href='#n106'>106</a></p> <p id='n107' class='pln'><a href='#n107'>107</a></p> <p id='n108' class='pln'><a href='#n108'>108</a></p> <p id='n109' class='stm run hide_run'><a href='#n109'>109</a></p> <p id='n110' class='stm run hide_run'><a href='#n110'>110</a></p> <p id='n111' class='stm run hide_run'><a href='#n111'>111</a></p> <p id='n112' class='stm run hide_run'><a href='#n112'>112</a></p> <p id='n113' class='stm run hide_run'><a href='#n113'>113</a></p> <p id='n114' class='stm run hide_run'><a href='#n114'>114</a></p> <p id='n115' class='stm run hide_run'><a href='#n115'>115</a></p> <p id='n116' class='stm run hide_run'><a href='#n116'>116</a></p> <p id='n117' class='pln'><a href='#n117'>117</a></p> <p id='n118' class='pln'><a href='#n118'>118</a></p> <p id='n119' class='pln'><a href='#n119'>119</a></p> <p id='n120' class='stm run hide_run'><a href='#n120'>120</a></p> <p id='n121' class='stm mis'><a href='#n121'>121</a></p> <p id='n122' class='pln'><a href='#n122'>122</a></p> <p id='n123' class='pln'><a href='#n123'>123</a></p> <p id='n124' class='stm mis'><a href='#n124'>124</a></p> <p id='n125' class='stm mis'><a href='#n125'>125</a></p> <p id='n126' class='stm mis'><a href='#n126'>126</a></p> <p id='n127' class='stm mis'><a href='#n127'>127</a></p> <p id='n128' class='pln'><a href='#n128'>128</a></p> <p id='n129' class='stm mis'><a href='#n129'>129</a></p> <p id='n130' class='stm run hide_run'><a href='#n130'>130</a></p> <p id='n131' class='stm run hide_run'><a href='#n131'>131</a></p> <p id='n132' class='pln'><a href='#n132'>132</a></p> <p id='n133' class='pln'><a href='#n133'>133</a></p> <p id='n134' class='stm run hide_run'><a href='#n134'>134</a></p> <p id='n135' class='stm run hide_run'><a href='#n135'>135</a></p> <p id='n136' class='stm run hide_run'><a href='#n136'>136</a></p> <p id='n137' class='stm run hide_run'><a href='#n137'>137</a></p> <p 
id='n138' class='stm run hide_run'><a href='#n138'>138</a></p> <p id='n139' class='pln'><a href='#n139'>139</a></p> <p id='n140' class='stm run hide_run'><a href='#n140'>140</a></p> <p id='n141' class='stm mis'><a href='#n141'>141</a></p> <p id='n142' class='stm mis'><a href='#n142'>142</a></p> <p id='n143' class='pln'><a href='#n143'>143</a></p> <p id='n144' class='stm run hide_run'><a href='#n144'>144</a></p> <p id='n145' class='stm run hide_run'><a href='#n145'>145</a></p> <p id='n146' class='stm run hide_run'><a href='#n146'>146</a></p> <p id='n147' class='pln'><a href='#n147'>147</a></p> <p id='n148' class='stm run hide_run'><a href='#n148'>148</a></p> <p id='n149' class='stm run hide_run'><a href='#n149'>149</a></p> <p id='n150' class='pln'><a href='#n150'>150</a></p> <p id='n151' class='pln'><a href='#n151'>151</a></p> <p id='n152' class='pln'><a href='#n152'>152</a></p> <p id='n153' class='stm run hide_run'><a href='#n153'>153</a></p> <p id='n154' class='stm run hide_run'><a href='#n154'>154</a></p> <p id='n155' class='stm run hide_run'><a href='#n155'>155</a></p> <p id='n156' class='pln'><a href='#n156'>156</a></p> <p id='n157' class='stm run hide_run'><a href='#n157'>157</a></p> <p id='n158' class='stm run hide_run'><a href='#n158'>158</a></p> <p id='n159' class='stm run hide_run'><a href='#n159'>159</a></p> <p id='n160' class='stm mis'><a href='#n160'>160</a></p> <p id='n161' class='stm mis'><a href='#n161'>161</a></p> <p id='n162' class='stm mis'><a href='#n162'>162</a></p> <p id='n163' class='stm run hide_run'><a href='#n163'>163</a></p> <p id='n164' class='pln'><a href='#n164'>164</a></p> <p id='n165' class='pln'><a href='#n165'>165</a></p> <p id='n166' class='pln'><a href='#n166'>166</a></p> <p id='n167' class='stm run hide_run'><a href='#n167'>167</a></p> <p id='n168' class='stm mis'><a href='#n168'>168</a></p> <p id='n169' class='stm mis'><a href='#n169'>169</a></p> <p id='n170' class='stm mis'><a href='#n170'>170</a></p> <p id='n171' class='stm 
mis'><a href='#n171'>171</a></p> <p id='n172' class='stm mis'><a href='#n172'>172</a></p> <p id='n173' class='stm mis'><a href='#n173'>173</a></p> <p id='n174' class='stm mis'><a href='#n174'>174</a></p> <p id='n175' class='stm mis'><a href='#n175'>175</a></p> <p id='n176' class='pln'><a href='#n176'>176</a></p> <p id='n177' class='stm run hide_run'><a href='#n177'>177</a></p> <p id='n178' class='pln'><a href='#n178'>178</a></p> <p id='n179' class='pln'><a href='#n179'>179</a></p> <p id='n180' class='pln'><a href='#n180'>180</a></p> <p id='n181' class='pln'><a href='#n181'>181</a></p> <p id='n182' class='pln'><a href='#n182'>182</a></p> <p id='n183' class='stm run hide_run'><a href='#n183'>183</a></p> <p id='n184' class='stm run hide_run'><a href='#n184'>184</a></p> <p id='n185' class='pln'><a href='#n185'>185</a></p> <p id='n186' class='stm run hide_run'><a href='#n186'>186</a></p> <p id='n187' class='stm mis'><a href='#n187'>187</a></p> <p id='n188' class='stm mis'><a href='#n188'>188</a></p> <p id='n189' class='pln'><a href='#n189'>189</a></p> <p id='n190' class='pln'><a href='#n190'>190</a></p> <p id='n191' class='pln'><a href='#n191'>191</a></p> <p id='n192' class='stm run hide_run'><a href='#n192'>192</a></p> <p id='n193' class='pln'><a href='#n193'>193</a></p> <p id='n194' class='pln'><a href='#n194'>194</a></p> <p id='n195' class='stm run hide_run'><a href='#n195'>195</a></p> <p id='n196' class='pln'><a href='#n196'>196</a></p> <p id='n197' class='pln'><a href='#n197'>197</a></p> <p id='n198' class='pln'><a href='#n198'>198</a></p> <p id='n199' class='pln'><a href='#n199'>199</a></p> <p id='n200' class='pln'><a href='#n200'>200</a></p> <p id='n201' class='stm mis'><a href='#n201'>201</a></p> <p id='n202' class='stm mis'><a href='#n202'>202</a></p> <p id='n203' class='stm mis'><a href='#n203'>203</a></p> <p id='n204' class='stm mis'><a href='#n204'>204</a></p> <p id='n205' class='stm mis'><a href='#n205'>205</a></p> <p id='n206' class='stm mis'><a 
href='#n206'>206</a></p> <p id='n207' class='pln'><a href='#n207'>207</a></p> <p id='n208' class='pln'><a href='#n208'>208</a></p> <p id='n209' class='pln'><a href='#n209'>209</a></p> <p id='n210' class='stm mis'><a href='#n210'>210</a></p> <p id='n211' class='stm mis'><a href='#n211'>211</a></p> <p id='n212' class='stm mis'><a href='#n212'>212</a></p> <p id='n213' class='stm mis'><a href='#n213'>213</a></p> <p id='n214' class='stm mis'><a href='#n214'>214</a></p> <p id='n215' class='pln'><a href='#n215'>215</a></p> <p id='n216' class='stm mis'><a href='#n216'>216</a></p> <p id='n217' class='stm mis'><a href='#n217'>217</a></p> <p id='n218' class='pln'><a href='#n218'>218</a></p> <p id='n219' class='stm run hide_run'><a href='#n219'>219</a></p> <p id='n220' class='stm run hide_run'><a href='#n220'>220</a></p> <p id='n221' class='pln'><a href='#n221'>221</a></p> <p id='n222' class='stm run hide_run'><a href='#n222'>222</a></p> <p id='n223' class='stm run hide_run'><a href='#n223'>223</a></p> <p id='n224' class='pln'><a href='#n224'>224</a></p> <p id='n225' class='stm run hide_run'><a href='#n225'>225</a></p> <p id='n226' class='pln'><a href='#n226'>226</a></p> <p id='n227' class='stm run hide_run'><a href='#n227'>227</a></p> <p id='n228' class='stm run hide_run'><a href='#n228'>228</a></p> <p id='n229' class='stm run hide_run'><a href='#n229'>229</a></p> <p id='n230' class='stm run hide_run'><a href='#n230'>230</a></p> <p id='n231' class='stm mis'><a href='#n231'>231</a></p> <p id='n232' class='pln'><a href='#n232'>232</a></p> <p id='n233' class='pln'><a href='#n233'>233</a></p> <p id='n234' class='pln'><a href='#n234'>234</a></p> <p id='n235' class='pln'><a href='#n235'>235</a></p> <p id='n236' class='pln'><a href='#n236'>236</a></p> <p id='n237' class='pln'><a href='#n237'>237</a></p> <p id='n238' class='pln'><a href='#n238'>238</a></p> <p id='n239' class='pln'><a href='#n239'>239</a></p> <p id='n240' class='pln'><a href='#n240'>240</a></p> <p id='n241' 
class='pln'><a href='#n241'>241</a></p> <p id='n242' class='pln'><a href='#n242'>242</a></p> <p id='n243' class='stm mis'><a href='#n243'>243</a></p> <p id='n244' class='stm mis'><a href='#n244'>244</a></p> <p id='n245' class='stm mis'><a href='#n245'>245</a></p> <p id='n246' class='pln'><a href='#n246'>246</a></p> <p id='n247' class='pln'><a href='#n247'>247</a></p> <p id='n248' class='stm mis'><a href='#n248'>248</a></p> <p id='n249' class='stm mis'><a href='#n249'>249</a></p> <p id='n250' class='stm mis'><a href='#n250'>250</a></p> <p id='n251' class='stm mis'><a href='#n251'>251</a></p> <p id='n252' class='stm mis'><a href='#n252'>252</a></p> <p id='n253' class='stm mis'><a href='#n253'>253</a></p> <p id='n254' class='stm mis'><a href='#n254'>254</a></p> <p id='n255' class='pln'><a href='#n255'>255</a></p> <p id='n256' class='stm mis'><a href='#n256'>256</a></p> <p id='n257' class='pln'><a href='#n257'>257</a></p> <p id='n258' class='stm mis'><a href='#n258'>258</a></p> <p id='n259' class='pln'><a href='#n259'>259</a></p> <p id='n260' class='pln'><a href='#n260'>260</a></p> <p id='n261' class='stm mis'><a href='#n261'>261</a></p> <p id='n262' class='stm mis'><a href='#n262'>262</a></p> <p id='n263' class='stm mis'><a href='#n263'>263</a></p> <p id='n264' class='stm mis'><a href='#n264'>264</a></p> <p id='n265' class='stm mis'><a href='#n265'>265</a></p> <p id='n266' class='pln'><a href='#n266'>266</a></p> <p id='n267' class='stm mis'><a href='#n267'>267</a></p> <p id='n268' class='stm mis'><a href='#n268'>268</a></p> <p id='n269' class='stm mis'><a href='#n269'>269</a></p> <p id='n270' class='stm mis'><a href='#n270'>270</a></p> <p id='n271' class='stm mis'><a href='#n271'>271</a></p> <p id='n272' class='stm mis'><a href='#n272'>272</a></p> <p id='n273' class='pln'><a href='#n273'>273</a></p> <p id='n274' class='pln'><a href='#n274'>274</a></p> <p id='n275' class='stm mis'><a href='#n275'>275</a></p> <p id='n276' class='stm mis'><a href='#n276'>276</a></p> <p 
id='n277' class='stm mis'><a href='#n277'>277</a></p> <p id='n278' class='pln'><a href='#n278'>278</a></p> <p id='n279' class='stm run hide_run'><a href='#n279'>279</a></p> <p id='n280' class='pln'><a href='#n280'>280</a></p> <p id='n281' class='stm mis'><a href='#n281'>281</a></p> <p id='n282' class='pln'><a href='#n282'>282</a></p> <p id='n283' class='pln'><a href='#n283'>283</a></p> <p id='n284' class='stm mis'><a href='#n284'>284</a></p> <p id='n285' class='pln'><a href='#n285'>285</a></p> <p id='n286' class='pln'><a href='#n286'>286</a></p> <p id='n287' class='pln'><a href='#n287'>287</a></p> <p id='n288' class='stm mis'><a href='#n288'>288</a></p> <p id='n289' class='stm mis'><a href='#n289'>289</a></p> <p id='n290' class='stm mis'><a href='#n290'>290</a></p> <p id='n291' class='pln'><a href='#n291'>291</a></p> <p id='n292' class='stm run hide_run'><a href='#n292'>292</a></p> <p id='n293' class='pln'><a href='#n293'>293</a></p> <p id='n294' class='pln'><a href='#n294'>294</a></p> <p id='n295' class='pln'><a href='#n295'>295</a></p> <p id='n296' class='pln'><a href='#n296'>296</a></p> <p id='n297' class='stm run hide_run'><a href='#n297'>297</a></p> <p id='n298' class='stm run hide_run'><a href='#n298'>298</a></p> <p id='n299' class='stm run hide_run'><a href='#n299'>299</a></p> <p id='n300' class='stm run hide_run'><a href='#n300'>300</a></p> <p id='n301' class='stm run hide_run'><a href='#n301'>301</a></p> <p id='n302' class='stm run hide_run'><a href='#n302'>302</a></p> <p id='n303' class='stm run hide_run'><a href='#n303'>303</a></p> <p id='n304' class='stm run hide_run'><a href='#n304'>304</a></p> <p id='n305' class='stm mis'><a href='#n305'>305</a></p> <p id='n306' class='stm mis'><a href='#n306'>306</a></p> <p id='n307' class='pln'><a href='#n307'>307</a></p> <p id='n308' class='stm run hide_run'><a href='#n308'>308</a></p> <p id='n309' class='pln'><a href='#n309'>309</a></p> <p id='n310' class='stm mis'><a href='#n310'>310</a></p> <p id='n311' 
class='stm run hide_run'><a href='#n311'>311</a></p> <p id='n312' class='stm run hide_run'><a href='#n312'>312</a></p> <p id='n313' class='pln'><a href='#n313'>313</a></p> <p id='n314' class='stm mis'><a href='#n314'>314</a></p> <p id='n315' class='stm run hide_run'><a href='#n315'>315</a></p> <p id='n316' class='pln'><a href='#n316'>316</a></p> <p id='n317' class='stm run hide_run'><a href='#n317'>317</a></p> <p id='n318' class='pln'><a href='#n318'>318</a></p> <p id='n319' class='pln'><a href='#n319'>319</a></p> <p id='n320' class='stm run hide_run'><a href='#n320'>320</a></p> <p id='n321' class='pln'><a href='#n321'>321</a></p> <p id='n322' class='stm mis'><a href='#n322'>322</a></p> <p id='n323' class='pln'><a href='#n323'>323</a></p> <p id='n324' class='pln'><a href='#n324'>324</a></p> <p id='n325' class='stm run hide_run'><a href='#n325'>325</a></p> <p id='n326' class='stm run hide_run'><a href='#n326'>326</a></p> <p id='n327' class='pln'><a href='#n327'>327</a></p> <p id='n328' class='stm run hide_run'><a href='#n328'>328</a></p> <p id='n329' class='stm mis'><a href='#n329'>329</a></p> <p id='n330' class='pln'><a href='#n330'>330</a></p> <p id='n331' class='stm run hide_run'><a href='#n331'>331</a></p> <p id='n332' class='stm mis'><a href='#n332'>332</a></p> <p id='n333' class='pln'><a href='#n333'>333</a></p> <p id='n334' class='stm run hide_run'><a href='#n334'>334</a></p> <p id='n335' class='stm run hide_run'><a href='#n335'>335</a></p> <p id='n336' class='stm run hide_run'><a href='#n336'>336</a></p> <p id='n337' class='stm run hide_run'><a href='#n337'>337</a></p> <p id='n338' class='stm mis'><a href='#n338'>338</a></p> <p id='n339' class='stm mis'><a href='#n339'>339</a></p> <p id='n340' class='stm run hide_run'><a href='#n340'>340</a></p> <p id='n341' class='stm run hide_run'><a href='#n341'>341</a></p> <p id='n342' class='stm run hide_run'><a href='#n342'>342</a></p> <p id='n343' class='stm run hide_run'><a href='#n343'>343</a></p> <p id='n344' 
class='stm run hide_run'><a href='#n344'>344</a></p> <p id='n345' class='stm mis'><a href='#n345'>345</a></p> <p id='n346' class='stm run hide_run'><a href='#n346'>346</a></p> <p id='n347' class='pln'><a href='#n347'>347</a></p> <p id='n348' class='pln'><a href='#n348'>348</a></p> <p id='n349' class='stm run hide_run'><a href='#n349'>349</a></p> <p id='n350' class='pln'><a href='#n350'>350</a></p> <p id='n351' class='pln'><a href='#n351'>351</a></p> <p id='n352' class='pln'><a href='#n352'>352</a></p> <p id='n353' class='pln'><a href='#n353'>353</a></p> <p id='n354' class='pln'><a href='#n354'>354</a></p> <p id='n355' class='pln'><a href='#n355'>355</a></p> <p id='n356' class='stm run hide_run'><a href='#n356'>356</a></p> <p id='n357' class='pln'><a href='#n357'>357</a></p> <p id='n358' class='pln'><a href='#n358'>358</a></p> <p id='n359' class='pln'><a href='#n359'>359</a></p> <p id='n360' class='pln'><a href='#n360'>360</a></p> <p id='n361' class='pln'><a href='#n361'>361</a></p> <p id='n362' class='pln'><a href='#n362'>362</a></p> <p id='n363' class='pln'><a href='#n363'>363</a></p> <p id='n364' class='pln'><a href='#n364'>364</a></p> <p id='n365' class='pln'><a href='#n365'>365</a></p> <p id='n366' class='pln'><a href='#n366'>366</a></p> <p id='n367' class='pln'><a href='#n367'>367</a></p> <p id='n368' class='pln'><a href='#n368'>368</a></p> <p id='n369' class='pln'><a href='#n369'>369</a></p> <p id='n370' class='pln'><a href='#n370'>370</a></p> <p id='n371' class='pln'><a href='#n371'>371</a></p> <p id='n372' class='pln'><a href='#n372'>372</a></p> <p id='n373' class='pln'><a href='#n373'>373</a></p> <p id='n374' class='pln'><a href='#n374'>374</a></p> <p id='n375' class='pln'><a href='#n375'>375</a></p> <p id='n376' class='pln'><a href='#n376'>376</a></p> <p id='n377' class='pln'><a href='#n377'>377</a></p> <p id='n378' class='pln'><a href='#n378'>378</a></p> <p id='n379' class='pln'><a href='#n379'>379</a></p> <p id='n380' class='pln'><a 
href='#n380'>380</a></p> <p id='n381' class='pln'><a href='#n381'>381</a></p> <p id='n382' class='stm run hide_run'><a href='#n382'>382</a></p> <p id='n383' class='pln'><a href='#n383'>383</a></p> <p id='n384' class='stm mis'><a href='#n384'>384</a></p> <p id='n385' class='stm mis'><a href='#n385'>385</a></p> <p id='n386' class='stm mis'><a href='#n386'>386</a></p> <p id='n387' class='stm mis'><a href='#n387'>387</a></p> <p id='n388' class='stm mis'><a href='#n388'>388</a></p> <p id='n389' class='stm mis'><a href='#n389'>389</a></p> <p id='n390' class='stm mis'><a href='#n390'>390</a></p> <p id='n391' class='stm mis'><a href='#n391'>391</a></p> <p id='n392' class='stm mis'><a href='#n392'>392</a></p> <p id='n393' class='pln'><a href='#n393'>393</a></p> <p id='n394' class='pln'><a href='#n394'>394</a></p> <p id='n395' class='stm run hide_run'><a href='#n395'>395</a></p> <p id='n396' class='pln'><a href='#n396'>396</a></p> <p id='n397' class='stm run hide_run'><a href='#n397'>397</a></p> <p id='n398' class='pln'><a href='#n398'>398</a></p> <p id='n399' class='stm mis'><a href='#n399'>399</a></p> <p id='n400' class='pln'><a href='#n400'>400</a></p> <p id='n401' class='stm mis'><a href='#n401'>401</a></p> <p id='n402' class='pln'><a href='#n402'>402</a></p> <p id='n403' class='pln'><a href='#n403'>403</a></p> <p id='n404' class='stm mis'><a href='#n404'>404</a></p> <p id='n405' class='pln'><a href='#n405'>405</a></p> <p id='n406' class='stm mis'><a href='#n406'>406</a></p> <p id='n407' class='stm mis'><a href='#n407'>407</a></p> <p id='n408' class='stm mis'><a href='#n408'>408</a></p> <p id='n409' class='stm mis'><a href='#n409'>409</a></p> <p id='n410' class='stm mis'><a href='#n410'>410</a></p> <p id='n411' class='pln'><a href='#n411'>411</a></p> <p id='n412' class='stm mis'><a href='#n412'>412</a></p> <p id='n413' class='stm mis'><a href='#n413'>413</a></p> <p id='n414' class='pln'><a href='#n414'>414</a></p> <p id='n415' class='stm mis'><a href='#n415'>415</a></p> 
<p id='n416' class='stm mis'><a href='#n416'>416</a></p> <p id='n417' class='stm mis'><a href='#n417'>417</a></p> <p id='n418' class='stm mis'><a href='#n418'>418</a></p> <p id='n419' class='pln'><a href='#n419'>419</a></p> <p id='n420' class='stm mis'><a href='#n420'>420</a></p> <p id='n421' class='stm mis'><a href='#n421'>421</a></p> <p id='n422' class='stm mis'><a href='#n422'>422</a></p> <p id='n423' class='stm mis'><a href='#n423'>423</a></p> <p id='n424' class='pln'><a href='#n424'>424</a></p> <p id='n425' class='stm mis'><a href='#n425'>425</a></p> <p id='n426' class='pln'><a href='#n426'>426</a></p> <p id='n427' class='stm mis'><a href='#n427'>427</a></p> <p id='n428' class='stm mis'><a href='#n428'>428</a></p> <p id='n429' class='stm mis'><a href='#n429'>429</a></p> <p id='n430' class='stm mis'><a href='#n430'>430</a></p> <p id='n431' class='stm mis'><a href='#n431'>431</a></p> <p id='n432' class='stm mis'><a href='#n432'>432</a></p> <p id='n433' class='stm mis'><a href='#n433'>433</a></p> <p id='n434' class='stm mis'><a href='#n434'>434</a></p> <p id='n435' class='pln'><a href='#n435'>435</a></p> <p id='n436' class='stm mis'><a href='#n436'>436</a></p> <p id='n437' class='pln'><a href='#n437'>437</a></p> <p id='n438' class='stm mis'><a href='#n438'>438</a></p> <p id='n439' class='stm mis'><a href='#n439'>439</a></p> <p id='n440' class='stm mis'><a href='#n440'>440</a></p> <p id='n441' class='stm mis'><a href='#n441'>441</a></p> <p id='n442' class='stm mis'><a href='#n442'>442</a></p> <p id='n443' class='pln'><a href='#n443'>443</a></p> <p id='n444' class='pln'><a href='#n444'>444</a></p> <p id='n445' class='pln'><a href='#n445'>445</a></p> <p id='n446' class='stm mis'><a href='#n446'>446</a></p> <p id='n447' class='pln'><a href='#n447'>447</a></p> <p id='n448' class='stm run hide_run'><a href='#n448'>448</a></p> <p id='n449' class='pln'><a href='#n449'>449</a></p> <p id='n450' class='pln'><a href='#n450'>450</a></p> <p id='n451' class='stm mis'><a 
href='#n451'>451</a></p> <p id='n452' class='stm mis'><a href='#n452'>452</a></p> <p id='n453' class='stm mis'><a href='#n453'>453</a></p> <p id='n454' class='pln'><a href='#n454'>454</a></p> <p id='n455' class='stm run hide_run'><a href='#n455'>455</a></p> <p id='n456' class='pln'><a href='#n456'>456</a></p> <p id='n457' class='stm mis'><a href='#n457'>457</a></p> <p id='n458' class='stm mis'><a href='#n458'>458</a></p> <p id='n459' class='stm mis'><a href='#n459'>459</a></p> <p id='n460' class='pln'><a href='#n460'>460</a></p> <p id='n461' class='stm run hide_run'><a href='#n461'>461</a></p> <p id='n462' class='pln'><a href='#n462'>462</a></p> <p id='n463' class='stm mis'><a href='#n463'>463</a></p> <p id='n464' class='pln'><a href='#n464'>464</a></p> <p id='n465' class='stm run hide_run'><a href='#n465'>465</a></p> <p id='n466' class='pln'><a href='#n466'>466</a></p> <p id='n467' class='stm mis'><a href='#n467'>467</a></p> <p id='n468' class='stm mis'><a href='#n468'>468</a></p> <p id='n469' class='stm mis'><a href='#n469'>469</a></p> <p id='n470' class='pln'><a href='#n470'>470</a></p> <p id='n471' class='stm run hide_run'><a href='#n471'>471</a></p> <p id='n472' class='pln'><a href='#n472'>472</a></p> <p id='n473' class='stm mis'><a href='#n473'>473</a></p> <p id='n474' class='stm mis'><a href='#n474'>474</a></p> <p id='n475' class='pln'><a href='#n475'>475</a></p> <p id='n476' class='stm run hide_run'><a href='#n476'>476</a></p> <p id='n477' class='stm mis'><a href='#n477'>477</a></p> <p id='n478' class='stm mis'><a href='#n478'>478</a></p> <p id='n479' class='stm mis'><a href='#n479'>479</a></p> <p id='n480' class='pln'><a href='#n480'>480</a></p> <p id='n481' class='stm run hide_run'><a href='#n481'>481</a></p> <p id='n482' class='stm mis'><a href='#n482'>482</a></p> <p id='n483' class='stm mis'><a href='#n483'>483</a></p> <p id='n484' class='pln'><a href='#n484'>484</a></p> <p id='n485' class='stm run hide_run'><a href='#n485'>485</a></p> <p id='n486' 
class='stm mis'><a href='#n486'>486</a></p> <p id='n487' class='stm mis'><a href='#n487'>487</a></p> <p id='n488' class='stm mis'><a href='#n488'>488</a></p> <p id='n489' class='stm mis'><a href='#n489'>489</a></p> <p id='n490' class='stm mis'><a href='#n490'>490</a></p> <p id='n491' class='stm mis'><a href='#n491'>491</a></p> <p id='n492' class='stm mis'><a href='#n492'>492</a></p> <p id='n493' class='stm mis'><a href='#n493'>493</a></p> <p id='n494' class='stm mis'><a href='#n494'>494</a></p> <p id='n495' class='pln'><a href='#n495'>495</a></p> <p id='n496' class='stm run hide_run'><a href='#n496'>496</a></p> <p id='n497' class='pln'><a href='#n497'>497</a></p> <p id='n498' class='stm mis'><a href='#n498'>498</a></p> <p id='n499' class='stm mis'><a href='#n499'>499</a></p> <p id='n500' class='stm mis'><a href='#n500'>500</a></p> <p id='n501' class='pln'><a href='#n501'>501</a></p> <p id='n502' class='stm run hide_run'><a href='#n502'>502</a></p> <p id='n503' class='stm mis'><a href='#n503'>503</a></p> <p id='n504' class='pln'><a href='#n504'>504</a></p> <p id='n505' class='stm mis'><a href='#n505'>505</a></p> <p id='n506' class='stm mis'><a href='#n506'>506</a></p> <p id='n507' class='stm mis'><a href='#n507'>507</a></p> <p id='n508' class='stm mis'><a href='#n508'>508</a></p> <p id='n509' class='stm mis'><a href='#n509'>509</a></p> <p id='n510' class='stm mis'><a href='#n510'>510</a></p> <p id='n511' class='stm mis'><a href='#n511'>511</a></p> <p id='n512' class='stm mis'><a href='#n512'>512</a></p> <p id='n513' class='pln'><a href='#n513'>513</a></p> <p id='n514' class='stm mis'><a href='#n514'>514</a></p> <p id='n515' class='pln'><a href='#n515'>515</a></p> <p id='n516' class='stm mis'><a href='#n516'>516</a></p> <p id='n517' class='stm mis'><a href='#n517'>517</a></p> <p id='n518' class='stm mis'><a href='#n518'>518</a></p> <p id='n519' class='stm mis'><a href='#n519'>519</a></p> <p id='n520' class='stm mis'><a href='#n520'>520</a></p> <p id='n521' class='stm 
mis'><a href='#n521'>521</a></p> <p id='n522' class='stm mis'><a href='#n522'>522</a></p> <p id='n523' class='stm mis'><a href='#n523'>523</a></p> <p id='n524' class='stm mis'><a href='#n524'>524</a></p> <p id='n525' class='stm mis'><a href='#n525'>525</a></p> <p id='n526' class='stm mis'><a href='#n526'>526</a></p> <p id='n527' class='pln'><a href='#n527'>527</a></p> <p id='n528' class='stm mis'><a href='#n528'>528</a></p> <p id='n529' class='stm mis'><a href='#n529'>529</a></p> <p id='n530' class='pln'><a href='#n530'>530</a></p> <p id='n531' class='stm mis'><a href='#n531'>531</a></p> <p id='n532' class='stm mis'><a href='#n532'>532</a></p> <p id='n533' class='pln'><a href='#n533'>533</a></p> <p id='n534' class='stm mis'><a href='#n534'>534</a></p> <p id='n535' class='stm mis'><a href='#n535'>535</a></p> <p id='n536' class='pln'><a href='#n536'>536</a></p> <p id='n537' class='stm mis'><a href='#n537'>537</a></p> <p id='n538' class='stm mis'><a href='#n538'>538</a></p> <p id='n539' class='stm mis'><a href='#n539'>539</a></p> <p id='n540' class='pln'><a href='#n540'>540</a></p> <p id='n541' class='stm run hide_run'><a href='#n541'>541</a></p> <p id='n542' class='pln'><a href='#n542'>542</a></p> <p id='n543' class='pln'><a href='#n543'>543</a></p> <p id='n544' class='stm mis'><a href='#n544'>544</a></p> <p id='n545' class='stm mis'><a href='#n545'>545</a></p> <p id='n546' class='stm mis'><a href='#n546'>546</a></p> <p id='n547' class='stm mis'><a href='#n547'>547</a></p> <p id='n548' class='stm mis'><a href='#n548'>548</a></p> <p id='n549' class='stm mis'><a href='#n549'>549</a></p> <p id='n550' class='pln'><a href='#n550'>550</a></p> <p id='n551' class='stm run hide_run'><a href='#n551'>551</a></p> <p id='n552' class='stm mis'><a href='#n552'>552</a></p> <p id='n553' class='stm mis'><a href='#n553'>553</a></p> <p id='n554' class='stm mis'><a href='#n554'>554</a></p> <p id='n555' class='stm mis'><a href='#n555'>555</a></p> <p id='n556' class='stm mis'><a 
href='#n556'>556</a></p> <p id='n557' class='stm mis'><a href='#n557'>557</a></p> <p id='n558' class='stm mis'><a href='#n558'>558</a></p> <p id='n559' class='stm mis'><a href='#n559'>559</a></p> <p id='n560' class='pln'><a href='#n560'>560</a></p> <p id='n561' class='stm mis'><a href='#n561'>561</a></p> <p id='n562' class='stm mis'><a href='#n562'>562</a></p> <p id='n563' class='stm mis'><a href='#n563'>563</a></p> <p id='n564' class='pln'><a href='#n564'>564</a></p> <p id='n565' class='stm mis'><a href='#n565'>565</a></p> <p id='n566' class='stm mis'><a href='#n566'>566</a></p> <p id='n567' class='stm mis'><a href='#n567'>567</a></p> <p id='n568' class='stm mis'><a href='#n568'>568</a></p> <p id='n569' class='stm mis'><a href='#n569'>569</a></p> <p id='n570' class='stm mis'><a href='#n570'>570</a></p> <p id='n571' class='stm mis'><a href='#n571'>571</a></p> <p id='n572' class='stm mis'><a href='#n572'>572</a></p> <p id='n573' class='stm mis'><a href='#n573'>573</a></p> <p id='n574' class='stm mis'><a href='#n574'>574</a></p> <p id='n575' class='stm mis'><a href='#n575'>575</a></p> <p id='n576' class='stm mis'><a href='#n576'>576</a></p> <p id='n577' class='stm mis'><a href='#n577'>577</a></p> <p id='n578' class='stm mis'><a href='#n578'>578</a></p> <p id='n579' class='stm mis'><a href='#n579'>579</a></p> <p id='n580' class='stm mis'><a href='#n580'>580</a></p> <p id='n581' class='stm mis'><a href='#n581'>581</a></p> <p id='n582' class='stm mis'><a href='#n582'>582</a></p> <p id='n583' class='stm mis'><a href='#n583'>583</a></p> <p id='n584' class='stm mis'><a href='#n584'>584</a></p> <p id='n585' class='pln'><a href='#n585'>585</a></p> <p id='n586' class='stm run hide_run'><a href='#n586'>586</a></p> <p id='n587' class='stm mis'><a href='#n587'>587</a></p> <p id='n588' class='stm mis'><a href='#n588'>588</a></p> <p id='n589' class='stm mis'><a href='#n589'>589</a></p> <p id='n590' class='stm mis'><a href='#n590'>590</a></p> <p id='n591' class='pln'><a 
href='#n591'>591</a></p> <p id='n592' class='stm run hide_run'><a href='#n592'>592</a></p> <p id='n593' class='stm mis'><a href='#n593'>593</a></p> <p id='n594' class='stm mis'><a href='#n594'>594</a></p> <p id='n595' class='stm mis'><a href='#n595'>595</a></p> <p id='n596' class='stm mis'><a href='#n596'>596</a></p> <p id='n597' class='stm mis'><a href='#n597'>597</a></p> <p id='n598' class='stm mis'><a href='#n598'>598</a></p> <p id='n599' class='pln'><a href='#n599'>599</a></p> <p id='n600' class='stm run hide_run'><a href='#n600'>600</a></p> <p id='n601' class='stm mis'><a href='#n601'>601</a></p> <p id='n602' class='stm mis'><a href='#n602'>602</a></p> <p id='n603' class='stm mis'><a href='#n603'>603</a></p> <p id='n604' class='stm mis'><a href='#n604'>604</a></p> <p id='n605' class='stm mis'><a href='#n605'>605</a></p> <p id='n606' class='stm mis'><a href='#n606'>606</a></p> <p id='n607' class='stm mis'><a href='#n607'>607</a></p> <p id='n608' class='stm mis'><a href='#n608'>608</a></p> <p id='n609' class='stm mis'><a href='#n609'>609</a></p> <p id='n610' class='stm mis'><a href='#n610'>610</a></p> <p id='n611' class='stm mis'><a href='#n611'>611</a></p> <p id='n612' class='stm mis'><a href='#n612'>612</a></p> <p id='n613' class='stm mis'><a href='#n613'>613</a></p> <p id='n614' class='stm mis'><a href='#n614'>614</a></p> <p id='n615' class='stm mis'><a href='#n615'>615</a></p> <p id='n616' class='stm mis'><a href='#n616'>616</a></p> <p id='n617' class='stm mis'><a href='#n617'>617</a></p> <p id='n618' class='stm mis'><a href='#n618'>618</a></p> <p id='n619' class='stm mis'><a href='#n619'>619</a></p> <p id='n620' class='stm mis'><a href='#n620'>620</a></p> <p id='n621' class='stm mis'><a href='#n621'>621</a></p> <p id='n622' class='pln'><a href='#n622'>622</a></p> <p id='n623' class='stm mis'><a href='#n623'>623</a></p> <p id='n624' class='stm mis'><a href='#n624'>624</a></p> <p id='n625' class='stm mis'><a href='#n625'>625</a></p> <p id='n626' class='stm 
mis'><a href='#n626'>626</a></p> <p id='n627' class='pln'><a href='#n627'>627</a></p> <p id='n628' class='stm run hide_run'><a href='#n628'>628</a></p> <p id='n629' class='stm mis'><a href='#n629'>629</a></p> <p id='n630' class='stm mis'><a href='#n630'>630</a></p> <p id='n631' class='stm mis'><a href='#n631'>631</a></p> <p id='n632' class='stm mis'><a href='#n632'>632</a></p> <p id='n633' class='stm mis'><a href='#n633'>633</a></p> <p id='n634' class='stm mis'><a href='#n634'>634</a></p> <p id='n635' class='stm mis'><a href='#n635'>635</a></p> <p id='n636' class='stm mis'><a href='#n636'>636</a></p> <p id='n637' class='pln'><a href='#n637'>637</a></p> <p id='n638' class='stm run hide_run'><a href='#n638'>638</a></p> <p id='n639' class='stm mis'><a href='#n639'>639</a></p> <p id='n640' class='stm mis'><a href='#n640'>640</a></p> <p id='n641' class='stm mis'><a href='#n641'>641</a></p> <p id='n642' class='stm mis'><a href='#n642'>642</a></p> <p id='n643' class='stm mis'><a href='#n643'>643</a></p> <p id='n644' class='stm mis'><a href='#n644'>644</a></p> <p id='n645' class='stm mis'><a href='#n645'>645</a></p> <p id='n646' class='stm mis'><a href='#n646'>646</a></p> <p id='n647' class='stm mis'><a href='#n647'>647</a></p> <p id='n648' class='stm mis'><a href='#n648'>648</a></p> <p id='n649' class='stm mis'><a href='#n649'>649</a></p> <p id='n650' class='stm mis'><a href='#n650'>650</a></p> <p id='n651' class='stm mis'><a href='#n651'>651</a></p> <p id='n652' class='stm mis'><a href='#n652'>652</a></p> <p id='n653' class='stm mis'><a href='#n653'>653</a></p> <p id='n654' class='stm mis'><a href='#n654'>654</a></p> <p id='n655' class='stm mis'><a href='#n655'>655</a></p> <p id='n656' class='stm mis'><a href='#n656'>656</a></p> <p id='n657' class='stm mis'><a href='#n657'>657</a></p> <p id='n658' class='pln'><a href='#n658'>658</a></p> <p id='n659' class='stm mis'><a href='#n659'>659</a></p> <p id='n660' class='pln'><a href='#n660'>660</a></p> <p id='n661' 
class='pln'><a href='#n661'>661</a></p> <p id='n662' class='pln'><a href='#n662'>662</a></p> <p id='n663' class='pln'><a href='#n663'>663</a></p> <p id='n664' class='stm mis'><a href='#n664'>664</a></p> <p id='n665' class='stm mis'><a href='#n665'>665</a></p> <p id='n666' class='pln'><a href='#n666'>666</a></p> <p id='n667' class='stm mis'><a href='#n667'>667</a></p> <p id='n668' class='stm mis'><a href='#n668'>668</a></p> </td> <td class='text' valign='top'> <p id='t1' class='pln'><span class='str'>&quot;&quot;&quot;Rewrite assertion AST to produce nice error messages&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t2' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t3' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>ast</span><span class='strut'>&nbsp;</span></p> <p id='t4' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>errno</span><span class='strut'>&nbsp;</span></p> <p id='t5' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>itertools</span><span class='strut'>&nbsp;</span></p> <p id='t6' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>imp</span><span class='strut'>&nbsp;</span></p> <p id='t7' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>marshal</span><span class='strut'>&nbsp;</span></p> <p id='t8' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>os</span><span class='strut'>&nbsp;</span></p> <p id='t9' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>re</span><span class='strut'>&nbsp;</span></p> <p id='t10' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>struct</span><span class='strut'>&nbsp;</span></p> <p id='t11' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>sys</span><span class='strut'>&nbsp;</span></p> <p id='t12' class='stm run hide_run'><span class='key'>import</span> <span 
class='nam'>types</span><span class='strut'>&nbsp;</span></p> <p id='t13' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t14' class='stm run hide_run'><span class='key'>import</span> <span class='nam'>py</span><span class='strut'>&nbsp;</span></p> <p id='t15' class='stm run hide_run'><span class='key'>from</span> <span class='nam'>_pytest</span><span class='op'>.</span><span class='nam'>assertion</span> <span class='key'>import</span> <span class='nam'>util</span><span class='strut'>&nbsp;</span></p> <p id='t16' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t17' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t18' class='pln'><span class='com'># pytest caches rewritten pycs in __pycache__.</span><span class='strut'>&nbsp;</span></p> <p id='t19' class='stm run hide_run'><span class='key'>if</span> <span class='nam'>hasattr</span><span class='op'>(</span><span class='nam'>imp</span><span class='op'>,</span> <span class='str'>&quot;get_tag&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t20' class='stm mis'>&nbsp; &nbsp; <span class='nam'>PYTEST_TAG</span> <span class='op'>=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>get_tag</span><span class='op'>(</span><span class='op'>)</span> <span class='op'>+</span> <span class='str'>&quot;-PYTEST&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t21' class='pln'><span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t22' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>hasattr</span><span class='op'>(</span><span class='nam'>sys</span><span class='op'>,</span> <span class='str'>&quot;pypy_version_info&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t23' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>impl</span> <span class='op'>=</span> <span 
class='str'>&quot;pypy&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t24' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>elif</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>platform</span> <span class='op'>==</span> <span class='str'>&quot;java&quot;</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t25' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>impl</span> <span class='op'>=</span> <span class='str'>&quot;jython&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t26' class='pln'>&nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t27' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>impl</span> <span class='op'>=</span> <span class='str'>&quot;cpython&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t28' class='stm run hide_run'>&nbsp; &nbsp; <span class='nam'>ver</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>version_info</span><span class='strut'>&nbsp;</span></p> <p id='t29' class='stm run hide_run'>&nbsp; &nbsp; <span class='nam'>PYTEST_TAG</span> <span class='op'>=</span> <span class='str'>&quot;%s-%s%s-PYTEST&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>impl</span><span class='op'>,</span> <span class='nam'>ver</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>ver</span><span class='op'>[</span><span class='num'>1</span><span class='op'>]</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t30' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>del</span> <span class='nam'>ver</span><span class='op'>,</span> <span class='nam'>impl</span><span class='strut'>&nbsp;</span></p> <p id='t31' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t32' class='stm run 
hide_run'><span class='nam'>PYC_EXT</span> <span class='op'>=</span> <span class='str'>&quot;.py&quot;</span> <span class='op'>+</span> <span class='op'>(</span><span class='nam'>__debug__</span> <span class='key'>and</span> <span class='str'>&quot;c&quot;</span> <span class='key'>or</span> <span class='str'>&quot;o&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t33' class='stm run hide_run'><span class='nam'>PYC_TAIL</span> <span class='op'>=</span> <span class='str'>&quot;.&quot;</span> <span class='op'>+</span> <span class='nam'>PYTEST_TAG</span> <span class='op'>+</span> <span class='nam'>PYC_EXT</span><span class='strut'>&nbsp;</span></p> <p id='t34' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t35' class='stm run hide_run'><span class='nam'>REWRITE_NEWLINES</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>version_info</span><span class='op'>[</span><span class='op'>:</span><span class='num'>2</span><span class='op'>]</span> <span class='op'>!=</span> <span class='op'>(</span><span class='num'>2</span><span class='op'>,</span> <span class='num'>7</span><span class='op'>)</span> <span class='key'>and</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>version_info</span> <span class='op'>&lt;</span> <span class='op'>(</span><span class='num'>3</span><span class='op'>,</span> <span class='num'>2</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t36' class='stm run hide_run'><span class='nam'>ASCII_IS_DEFAULT_ENCODING</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>version_info</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span> <span class='op'>&lt;</span> <span class='num'>3</span><span class='strut'>&nbsp;</span></p> <p id='t37' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t38' class='stm run hide_run'><span 
class='key'>class</span> <span class='nam'>AssertionRewritingHook</span><span class='op'>(</span><span class='nam'>object</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t39' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;PEP302 Import hook which rewrites asserts.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t40' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t41' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>__init__</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t42' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span> <span class='op'>=</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t43' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>modules</span> <span class='op'>=</span> <span class='op'>{</span><span class='op'>}</span><span class='strut'>&nbsp;</span></p> <p id='t44' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>_register_with_pkg_resources</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t45' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t46' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>set_session</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>session</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t47' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span 
class='nam'>fnpats</span> <span class='op'>=</span> <span class='nam'>session</span><span class='op'>.</span><span class='nam'>config</span><span class='op'>.</span><span class='nam'>getini</span><span class='op'>(</span><span class='str'>&quot;python_files&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t48' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span> <span class='op'>=</span> <span class='nam'>session</span><span class='strut'>&nbsp;</span></p> <p id='t49' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t50' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>find_module</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>,</span> <span class='nam'>path</span><span class='op'>=</span><span class='nam'>None</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t51' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span> <span class='key'>is</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t52' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t53' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>sess</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span><span class='strut'>&nbsp;</span></p> <p id='t54' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span> <span class='op'>=</span> <span class='nam'>sess</span><span class='op'>.</span><span 
class='nam'>config</span><span class='op'>.</span><span class='nam'>_assertstate</span><span class='strut'>&nbsp;</span></p> <p id='t55' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;find_module called for: %s&quot;</span> <span class='op'>%</span> <span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t56' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>names</span> <span class='op'>=</span> <span class='nam'>name</span><span class='op'>.</span><span class='nam'>rsplit</span><span class='op'>(</span><span class='str'>&quot;.&quot;</span><span class='op'>,</span> <span class='num'>1</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t57' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>lastname</span> <span class='op'>=</span> <span class='nam'>names</span><span class='op'>[</span><span class='op'>-</span><span class='num'>1</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t58' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pth</span> <span class='op'>=</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t59' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>path</span> <span class='key'>is</span> <span class='key'>not</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t60' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Starting with Python 3.3, path is a _NamespacePath(), which</span><span class='strut'>&nbsp;</span></p> <p id='t61' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># causes problems if not converted to list.</span><span class='strut'>&nbsp;</span></p> <p id='t62' 
class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>path</span> <span class='op'>=</span> <span class='nam'>list</span><span class='op'>(</span><span class='nam'>path</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t63' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>len</span><span class='op'>(</span><span class='nam'>path</span><span class='op'>)</span> <span class='op'>==</span> <span class='num'>1</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t64' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pth</span> <span class='op'>=</span> <span class='nam'>path</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t65' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>pth</span> <span class='key'>is</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t66' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t67' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fd</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>,</span> <span class='nam'>desc</span> <span class='op'>=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>find_module</span><span class='op'>(</span><span class='nam'>lastname</span><span class='op'>,</span> <span class='nam'>path</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t68' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>ImportError</span><span 
class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t69' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t70' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>fd</span> <span class='key'>is</span> <span class='key'>not</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t71' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fd</span><span class='op'>.</span><span class='nam'>close</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t72' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>tp</span> <span class='op'>=</span> <span class='nam'>desc</span><span class='op'>[</span><span class='num'>2</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t73' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>tp</span> <span class='op'>==</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>PY_COMPILED</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t74' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>hasattr</span><span class='op'>(</span><span class='nam'>imp</span><span class='op'>,</span> <span class='str'>&quot;source_from_cache&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t75' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fn</span> <span class='op'>=</span> <span class='nam'>imp</span><span class='op'>.</span><span 
class='nam'>source_from_cache</span><span class='op'>(</span><span class='nam'>fn</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t76' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t77' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fn</span> <span class='op'>=</span> <span class='nam'>fn</span><span class='op'>[</span><span class='op'>:</span><span class='op'>-</span><span class='num'>1</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t78' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>elif</span> <span class='nam'>tp</span> <span class='op'>!=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>PY_SOURCE</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t79' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Don&#39;t know what this is.</span><span class='strut'>&nbsp;</span></p> <p id='t80' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t81' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t82' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fn</span> <span class='op'>=</span> <span class='nam'>os</span><span class='op'>.</span><span class='nam'>path</span><span class='op'>.</span><span class='nam'>join</span><span class='op'>(</span><span class='nam'>pth</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>.</span><span class='nam'>rpartition</span><span class='op'>(</span><span class='str'>&quot;.&quot;</span><span 
class='op'>)</span><span class='op'>[</span><span class='num'>2</span><span class='op'>]</span> <span class='op'>+</span> <span class='str'>&quot;.py&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t83' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fn_pypath</span> <span class='op'>=</span> <span class='nam'>py</span><span class='op'>.</span><span class='nam'>path</span><span class='op'>.</span><span class='nam'>local</span><span class='op'>(</span><span class='nam'>fn</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t84' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Is this a test file?</span><span class='strut'>&nbsp;</span></p> <p id='t85' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='key'>not</span> <span class='nam'>sess</span><span class='op'>.</span><span class='nam'>isinitpath</span><span class='op'>(</span><span class='nam'>fn</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t86' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># We have to be very careful here because imports in this code can</span><span class='strut'>&nbsp;</span></p> <p id='t87' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># trigger a cycle.</span><span class='strut'>&nbsp;</span></p> <p id='t88' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span> <span class='op'>=</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t89' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t90' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span 
class='nam'>pat</span> <span class='key'>in</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>fnpats</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t91' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>fn_pypath</span><span class='op'>.</span><span class='nam'>fnmatch</span><span class='op'>(</span><span class='nam'>pat</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t92' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;matched test file %r&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t93' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>break</span><span class='strut'>&nbsp;</span></p> <p id='t94' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t95' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t96' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>finally</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t97' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>session</span> <span class='op'>=</span> <span 
class='nam'>sess</span><span class='strut'>&nbsp;</span></p> <p id='t98' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t99' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;matched test file (was specified on cmdline): %r&quot;</span> <span class='op'>%</span><span class='strut'>&nbsp;</span></p> <p id='t100' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t101' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># The requested module looks like a test file, so rewrite it. This is</span><span class='strut'>&nbsp;</span></p> <p id='t102' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># the most magical part of the process: load the source, rewrite the</span><span class='strut'>&nbsp;</span></p> <p id='t103' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># asserts, and load the rewritten source. We also cache the rewritten</span><span class='strut'>&nbsp;</span></p> <p id='t104' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># module code in a special pyc. We must be aware of the possibility of</span><span class='strut'>&nbsp;</span></p> <p id='t105' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># concurrent pytest processes rewriting and loading pycs. 
To avoid</span><span class='strut'>&nbsp;</span></p> <p id='t106' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># tricky race conditions, we maintain the following invariant: The</span><span class='strut'>&nbsp;</span></p> <p id='t107' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># cached pyc is always a complete, valid pyc. Operations on it must be</span><span class='strut'>&nbsp;</span></p> <p id='t108' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># atomic. POSIX&#39;s atomic rename comes in handy.</span><span class='strut'>&nbsp;</span></p> <p id='t109' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>write</span> <span class='op'>=</span> <span class='key'>not</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>dont_write_bytecode</span><span class='strut'>&nbsp;</span></p> <p id='t110' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cache_dir</span> <span class='op'>=</span> <span class='nam'>os</span><span class='op'>.</span><span class='nam'>path</span><span class='op'>.</span><span class='nam'>join</span><span class='op'>(</span><span class='nam'>fn_pypath</span><span class='op'>.</span><span class='nam'>dirname</span><span class='op'>,</span> <span class='str'>&quot;__pycache__&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t111' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>write</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t112' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t113' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>os</span><span class='op'>.</span><span class='nam'>mkdir</span><span class='op'>(</span><span class='nam'>cache_dir</span><span 
class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t114' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>OSError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t115' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>e</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>exc_info</span><span class='op'>(</span><span class='op'>)</span><span class='op'>[</span><span class='num'>1</span><span class='op'>]</span><span class='op'>.</span><span class='nam'>errno</span><span class='strut'>&nbsp;</span></p> <p id='t116' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>e</span> <span class='op'>==</span> <span class='nam'>errno</span><span class='op'>.</span><span class='nam'>EEXIST</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t117' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Either the __pycache__ directory already exists (the</span><span class='strut'>&nbsp;</span></p> <p id='t118' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># common case) or it&#39;s blocked by a non-dir node. 
In the</span><span class='strut'>&nbsp;</span></p> <p id='t119' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># latter case, we&#39;ll ignore it in _write_pyc.</span><span class='strut'>&nbsp;</span></p> <p id='t120' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>pass</span><span class='strut'>&nbsp;</span></p> <p id='t121' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>elif</span> <span class='nam'>e</span> <span class='key'>in</span> <span class='op'>[</span><span class='nam'>errno</span><span class='op'>.</span><span class='nam'>ENOENT</span><span class='op'>,</span> <span class='nam'>errno</span><span class='op'>.</span><span class='nam'>ENOTDIR</span><span class='op'>]</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t122' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># One of the path components was not a directory, likely</span><span class='strut'>&nbsp;</span></p> <p id='t123' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># because we&#39;re in a zip file.</span><span class='strut'>&nbsp;</span></p> <p id='t124' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>write</span> <span class='op'>=</span> <span class='nam'>False</span><span class='strut'>&nbsp;</span></p> <p id='t125' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>elif</span> <span class='nam'>e</span> <span class='op'>==</span> <span class='nam'>errno</span><span class='op'>.</span><span class='nam'>EACCES</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t126' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span 
class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;read only directory: %r&quot;</span> <span class='op'>%</span> <span class='nam'>fn_pypath</span><span class='op'>.</span><span class='nam'>dirname</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t127' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>write</span> <span class='op'>=</span> <span class='nam'>False</span><span class='strut'>&nbsp;</span></p> <p id='t128' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t129' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>raise</span><span class='strut'>&nbsp;</span></p> <p id='t130' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cache_name</span> <span class='op'>=</span> <span class='nam'>fn_pypath</span><span class='op'>.</span><span class='nam'>basename</span><span class='op'>[</span><span class='op'>:</span><span class='op'>-</span><span class='num'>3</span><span class='op'>]</span> <span class='op'>+</span> <span class='nam'>PYC_TAIL</span><span class='strut'>&nbsp;</span></p> <p id='t131' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pyc</span> <span class='op'>=</span> <span class='nam'>os</span><span class='op'>.</span><span class='nam'>path</span><span class='op'>.</span><span class='nam'>join</span><span class='op'>(</span><span class='nam'>cache_dir</span><span class='op'>,</span> <span class='nam'>cache_name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t132' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Notice that even if we&#39;re in a read-only directory, I&#39;m going</span><span class='strut'>&nbsp;</span></p> <p id='t133' class='pln'>&nbsp; &nbsp; &nbsp; 
&nbsp; <span class='com'># to check for a cached pyc. This may not be optimal...</span><span class='strut'>&nbsp;</span></p> <p id='t134' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>co</span> <span class='op'>=</span> <span class='nam'>_read_pyc</span><span class='op'>(</span><span class='nam'>fn_pypath</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t135' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>co</span> <span class='key'>is</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t136' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;rewriting %r&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t137' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>co</span> <span class='op'>=</span> <span class='nam'>_rewrite_test</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>fn_pypath</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t138' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>co</span> <span class='key'>is</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t139' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Probably a SyntaxError in the test.</span><span class='strut'>&nbsp;</span></p> <p id='t140' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t141' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>write</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t142' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>_make_rewritten_pyc</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>fn_pypath</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>,</span> <span class='nam'>co</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t143' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t144' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;found cached rewritten pyc for %r&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t145' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>modules</span><span class='op'>[</span><span class='nam'>name</span><span class='op'>]</span> <span class='op'>=</span> <span class='nam'>co</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='strut'>&nbsp;</span></p> <p id='t146' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>self</span><span class='strut'>&nbsp;</span></p> <p id='t147' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t148' class='stm run hide_run'>&nbsp; &nbsp; <span 
class='key'>def</span> <span class='nam'>load_module</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t149' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>co</span><span class='op'>,</span> <span class='nam'>pyc</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>modules</span><span class='op'>.</span><span class='nam'>pop</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t150' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># I wish I could just call imp.load_compiled here, but __file__ has to</span><span class='strut'>&nbsp;</span></p> <p id='t151' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># be set properly. In Python 3.2+, this all would be handled correctly</span><span class='strut'>&nbsp;</span></p> <p id='t152' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># by load_compiled.</span><span class='strut'>&nbsp;</span></p> <p id='t153' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mod</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>modules</span><span class='op'>[</span><span class='nam'>name</span><span class='op'>]</span> <span class='op'>=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>new_module</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t154' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t155' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mod</span><span 
class='op'>.</span><span class='nam'>__file__</span> <span class='op'>=</span> <span class='nam'>co</span><span class='op'>.</span><span class='nam'>co_filename</span><span class='strut'>&nbsp;</span></p> <p id='t156' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Normally, this attribute is 3.2+.</span><span class='strut'>&nbsp;</span></p> <p id='t157' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>__cached__</span> <span class='op'>=</span> <span class='nam'>pyc</span><span class='strut'>&nbsp;</span></p> <p id='t158' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>__loader__</span> <span class='op'>=</span> <span class='nam'>self</span><span class='strut'>&nbsp;</span></p> <p id='t159' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>py</span><span class='op'>.</span><span class='nam'>builtin</span><span class='op'>.</span><span class='nam'>exec_</span><span class='op'>(</span><span class='nam'>co</span><span class='op'>,</span> <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>__dict__</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t160' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t161' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>del</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>modules</span><span class='op'>[</span><span class='nam'>name</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t162' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>raise</span><span class='strut'>&nbsp;</span></p> <p id='t163' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; 
<span class='key'>return</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>modules</span><span class='op'>[</span><span class='nam'>name</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t164' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t165' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t166' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t167' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>is_package</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t168' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t169' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fd</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>,</span> <span class='nam'>desc</span> <span class='op'>=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>find_module</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t170' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>ImportError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t171' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>False</span><span class='strut'>&nbsp;</span></p> <p id='t172' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>fd</span> <span class='key'>is</span> <span class='key'>not</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t173' class='stm mis'>&nbsp; &nbsp; &nbsp; 
&nbsp; &nbsp; &nbsp; <span class='nam'>fd</span><span class='op'>.</span><span class='nam'>close</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t174' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>tp</span> <span class='op'>=</span> <span class='nam'>desc</span><span class='op'>[</span><span class='num'>2</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t175' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>tp</span> <span class='op'>==</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>PKG_DIRECTORY</span><span class='strut'>&nbsp;</span></p> <p id='t176' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t177' class='stm run hide_run'>&nbsp; &nbsp; <span class='op'>@</span><span class='nam'>classmethod</span><span class='strut'>&nbsp;</span></p> <p id='t178' class='pln'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>_register_with_pkg_resources</span><span class='op'>(</span><span class='nam'>cls</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t179' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t180' class='pln'><span class='str'>&nbsp; &nbsp; &nbsp; &nbsp; Ensure package resources can be loaded from this loader. 
May be called</span><span class='strut'>&nbsp;</span></p> <p id='t181' class='pln'><span class='str'>&nbsp; &nbsp; &nbsp; &nbsp; multiple times, as the operation is idempotent.</span><span class='strut'>&nbsp;</span></p> <p id='t182' class='pln'><span class='str'>&nbsp; &nbsp; &nbsp; &nbsp; &quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t183' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t184' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>import</span> <span class='nam'>pkg_resources</span><span class='strut'>&nbsp;</span></p> <p id='t185' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># access an attribute in case a deferred importer is present</span><span class='strut'>&nbsp;</span></p> <p id='t186' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pkg_resources</span><span class='op'>.</span><span class='nam'>__name__</span><span class='strut'>&nbsp;</span></p> <p id='t187' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>ImportError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t188' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span><span class='strut'>&nbsp;</span></p> <p id='t189' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t190' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Since pytest tests are always located in the file system, the</span><span class='strut'>&nbsp;</span></p> <p id='t191' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'>#&nbsp; DefaultProvider is appropriate.</span><span class='strut'>&nbsp;</span></p> <p id='t192' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pkg_resources</span><span class='op'>.</span><span 
class='nam'>register_loader_type</span><span class='op'>(</span><span class='nam'>cls</span><span class='op'>,</span> <span class='nam'>pkg_resources</span><span class='op'>.</span><span class='nam'>DefaultProvider</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t193' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t194' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t195' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_write_pyc</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>co</span><span class='op'>,</span> <span class='nam'>source_path</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t196' class='pln'>&nbsp; &nbsp; <span class='com'># Technically, we don&#39;t have to have the same pyc format as</span><span class='strut'>&nbsp;</span></p> <p id='t197' class='pln'>&nbsp; &nbsp; <span class='com'># (C)Python, since these &quot;pycs&quot; should never be seen by builtin</span><span class='strut'>&nbsp;</span></p> <p id='t198' class='pln'>&nbsp; &nbsp; <span class='com'># import. However, there&#39;s little reason deviate, and I hope</span><span class='strut'>&nbsp;</span></p> <p id='t199' class='pln'>&nbsp; &nbsp; <span class='com'># sometime to be able to use imp.load_compiled to load them. 
(See</span><span class='strut'>&nbsp;</span></p> <p id='t200' class='pln'>&nbsp; &nbsp; <span class='com'># the comment in load_module above.)</span><span class='strut'>&nbsp;</span></p> <p id='t201' class='stm mis'>&nbsp; &nbsp; <span class='nam'>mtime</span> <span class='op'>=</span> <span class='nam'>int</span><span class='op'>(</span><span class='nam'>source_path</span><span class='op'>.</span><span class='nam'>mtime</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t202' class='stm mis'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t203' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span> <span class='op'>=</span> <span class='nam'>open</span><span class='op'>(</span><span class='nam'>pyc</span><span class='op'>,</span> <span class='str'>&quot;wb&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t204' class='stm mis'>&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>IOError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t205' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>err</span> <span class='op'>=</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>exc_info</span><span class='op'>(</span><span class='op'>)</span><span class='op'>[</span><span class='num'>1</span><span class='op'>]</span><span class='op'>.</span><span class='nam'>errno</span><span class='strut'>&nbsp;</span></p> <p id='t206' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;error writing pyc file at %s: errno=%s&quot;</span> <span class='op'>%</span><span class='op'>(</span><span class='nam'>pyc</span><span class='op'>,</span> <span class='nam'>err</span><span class='op'>)</span><span 
class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t207' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># we ignore any failure to write the cache file</span><span class='strut'>&nbsp;</span></p> <p id='t208' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># there are many reasons, permission-denied, __pycache__ being a</span><span class='strut'>&nbsp;</span></p> <p id='t209' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># file etc.</span><span class='strut'>&nbsp;</span></p> <p id='t210' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>False</span><span class='strut'>&nbsp;</span></p> <p id='t211' class='stm mis'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t212' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span><span class='op'>.</span><span class='nam'>write</span><span class='op'>(</span><span class='nam'>imp</span><span class='op'>.</span><span class='nam'>get_magic</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t213' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span><span class='op'>.</span><span class='nam'>write</span><span class='op'>(</span><span class='nam'>struct</span><span class='op'>.</span><span class='nam'>pack</span><span class='op'>(</span><span class='str'>&quot;&lt;l&quot;</span><span class='op'>,</span> <span class='nam'>mtime</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t214' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>marshal</span><span class='op'>.</span><span class='nam'>dump</span><span class='op'>(</span><span class='nam'>co</span><span class='op'>,</span> <span class='nam'>fp</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t215' class='pln'>&nbsp; &nbsp; <span 
class='key'>finally</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t216' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span><span class='op'>.</span><span class='nam'>close</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t217' class='stm mis'>&nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>True</span><span class='strut'>&nbsp;</span></p> <p id='t218' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t219' class='stm run hide_run'><span class='nam'>RN</span> <span class='op'>=</span> <span class='str'>&quot;\r\n&quot;</span><span class='op'>.</span><span class='nam'>encode</span><span class='op'>(</span><span class='str'>&quot;utf-8&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t220' class='stm run hide_run'><span class='nam'>N</span> <span class='op'>=</span> <span class='str'>&quot;\n&quot;</span><span class='op'>.</span><span class='nam'>encode</span><span class='op'>(</span><span class='str'>&quot;utf-8&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t221' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t222' class='stm run hide_run'><span class='nam'>cookie_re</span> <span class='op'>=</span> <span class='nam'>re</span><span class='op'>.</span><span class='nam'>compile</span><span class='op'>(</span><span class='str'>r&quot;^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t223' class='stm run hide_run'><span class='nam'>BOM_UTF8</span> <span class='op'>=</span> <span class='str'>&#39;\xef\xbb\xbf&#39;</span><span class='strut'>&nbsp;</span></p> <p id='t224' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t225' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_rewrite_test</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> 
<span class='nam'>fn</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t226' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Try to read and rewrite *fn* and return the code object.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t227' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t228' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>source</span> <span class='op'>=</span> <span class='nam'>fn</span><span class='op'>.</span><span class='nam'>read</span><span class='op'>(</span><span class='str'>&quot;rb&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t229' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>EnvironmentError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t230' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t231' class='stm mis'>&nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>ASCII_IS_DEFAULT_ENCODING</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t232' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># ASCII is the default encoding in Python 2. Without a coding</span><span class='strut'>&nbsp;</span></p> <p id='t233' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># declaration, Python 2 will complain about any bytes in the file</span><span class='strut'>&nbsp;</span></p> <p id='t234' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># outside the ASCII range. 
Sadly, this behavior does not extend to</span><span class='strut'>&nbsp;</span></p> <p id='t235' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># compile() or ast.parse(), which prefer to interpret the bytes as</span><span class='strut'>&nbsp;</span></p> <p id='t236' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># latin-1. (At least they properly handle explicit coding cookies.) To</span><span class='strut'>&nbsp;</span></p> <p id='t237' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># preserve this error behavior, we could force ast.parse() to use ASCII</span><span class='strut'>&nbsp;</span></p> <p id='t238' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># as the encoding by inserting a coding cookie. Unfortunately, that</span><span class='strut'>&nbsp;</span></p> <p id='t239' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># messes up line numbers. Thus, we have to check ourselves if anything</span><span class='strut'>&nbsp;</span></p> <p id='t240' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># is outside the ASCII range in the case no encoding is explicitly</span><span class='strut'>&nbsp;</span></p> <p id='t241' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># declared. For more context, see issue #269. 
Yay for Python 3 which</span><span class='strut'>&nbsp;</span></p> <p id='t242' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># gets this right.</span><span class='strut'>&nbsp;</span></p> <p id='t243' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>end1</span> <span class='op'>=</span> <span class='nam'>source</span><span class='op'>.</span><span class='nam'>find</span><span class='op'>(</span><span class='str'>&quot;\n&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t244' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>end2</span> <span class='op'>=</span> <span class='nam'>source</span><span class='op'>.</span><span class='nam'>find</span><span class='op'>(</span><span class='str'>&quot;\n&quot;</span><span class='op'>,</span> <span class='nam'>end1</span> <span class='op'>+</span> <span class='num'>1</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t245' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='op'>(</span><span class='key'>not</span> <span class='nam'>source</span><span class='op'>.</span><span class='nam'>startswith</span><span class='op'>(</span><span class='nam'>BOM_UTF8</span><span class='op'>)</span> <span class='key'>and</span><span class='strut'>&nbsp;</span></p> <p id='t246' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cookie_re</span><span class='op'>.</span><span class='nam'>match</span><span class='op'>(</span><span class='nam'>source</span><span class='op'>[</span><span class='num'>0</span><span class='op'>:</span><span class='nam'>end1</span><span class='op'>]</span><span class='op'>)</span> <span class='key'>is</span> <span class='nam'>None</span> <span class='key'>and</span><span class='strut'>&nbsp;</span></p> <p id='t247' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cookie_re</span><span class='op'>.</span><span class='nam'>match</span><span 
class='op'>(</span><span class='nam'>source</span><span class='op'>[</span><span class='nam'>end1</span> <span class='op'>+</span> <span class='num'>1</span><span class='op'>:</span><span class='nam'>end2</span><span class='op'>]</span><span class='op'>)</span> <span class='key'>is</span> <span class='nam'>None</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t248' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>hasattr</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='str'>&quot;_indecode&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t249' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span>&nbsp; <span class='com'># encodings imported us again, we don&#39;t rewrite</span><span class='strut'>&nbsp;</span></p> <p id='t250' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>_indecode</span> <span class='op'>=</span> <span class='nam'>True</span><span class='strut'>&nbsp;</span></p> <p id='t251' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t252' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t253' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>source</span><span class='op'>.</span><span class='nam'>decode</span><span class='op'>(</span><span class='str'>&quot;ascii&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t254' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>UnicodeDecodeError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t255' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Let it fail in real import.</span><span class='strut'>&nbsp;</span></p> <p id='t256' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t257' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>finally</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t258' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>del</span> <span class='nam'>state</span><span class='op'>.</span><span class='nam'>_indecode</span><span class='strut'>&nbsp;</span></p> <p id='t259' class='pln'>&nbsp; &nbsp; <span class='com'># On Python versions which are not 2.7 and less than or equal to 3.1, the</span><span class='strut'>&nbsp;</span></p> <p id='t260' class='pln'>&nbsp; &nbsp; <span class='com'># parser expects *nix newlines.</span><span class='strut'>&nbsp;</span></p> <p id='t261' class='stm mis'>&nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>REWRITE_NEWLINES</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t262' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>source</span> <span class='op'>=</span> <span class='nam'>source</span><span class='op'>.</span><span class='nam'>replace</span><span class='op'>(</span><span class='nam'>RN</span><span class='op'>,</span> <span class='nam'>N</span><span class='op'>)</span> <span class='op'>+</span> <span class='nam'>N</span><span class='strut'>&nbsp;</span></p> <p id='t263' class='stm mis'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t264' 
class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>tree</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>parse</span><span class='op'>(</span><span class='nam'>source</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t265' class='stm mis'>&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>SyntaxError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t266' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Let this pop up again in the real import.</span><span class='strut'>&nbsp;</span></p> <p id='t267' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;failed to parse: %r&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t268' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t269' class='stm mis'>&nbsp; &nbsp; <span class='nam'>rewrite_asserts</span><span class='op'>(</span><span class='nam'>tree</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t270' class='stm mis'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t271' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>co</span> <span class='op'>=</span> <span class='nam'>compile</span><span class='op'>(</span><span class='nam'>tree</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>.</span><span class='nam'>strpath</span><span class='op'>,</span> <span class='str'>&quot;exec&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t272' 
class='stm mis'>&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>SyntaxError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t273' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># It&#39;s possible that this error is from some bug in the</span><span class='strut'>&nbsp;</span></p> <p id='t274' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># assertion rewriting, but I don&#39;t know of a fast way to tell.</span><span class='strut'>&nbsp;</span></p> <p id='t275' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>state</span><span class='op'>.</span><span class='nam'>trace</span><span class='op'>(</span><span class='str'>&quot;failed to compile: %r&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>fn</span><span class='op'>,</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t276' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t277' class='stm mis'>&nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>co</span><span class='strut'>&nbsp;</span></p> <p id='t278' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t279' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_make_rewritten_pyc</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>,</span> <span class='nam'>co</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t280' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Try to dump rewritten code to *pyc*.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t281' class='stm mis'>&nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>sys</span><span 
class='op'>.</span><span class='nam'>platform</span><span class='op'>.</span><span class='nam'>startswith</span><span class='op'>(</span><span class='str'>&quot;win&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t282' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Windows grants exclusive access to open files and doesn&#39;t have atomic</span><span class='strut'>&nbsp;</span></p> <p id='t283' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># rename, so just write into the final file.</span><span class='strut'>&nbsp;</span></p> <p id='t284' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>_write_pyc</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>co</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t285' class='pln'>&nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t286' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># When not on windows, assume rename is atomic. 
Dump the code object</span><span class='strut'>&nbsp;</span></p> <p id='t287' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># into a file specific to this process and atomically replace it.</span><span class='strut'>&nbsp;</span></p> <p id='t288' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>proc_pyc</span> <span class='op'>=</span> <span class='nam'>pyc</span> <span class='op'>+</span> <span class='str'>&quot;.&quot;</span> <span class='op'>+</span> <span class='nam'>str</span><span class='op'>(</span><span class='nam'>os</span><span class='op'>.</span><span class='nam'>getpid</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t289' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>_write_pyc</span><span class='op'>(</span><span class='nam'>state</span><span class='op'>,</span> <span class='nam'>co</span><span class='op'>,</span> <span class='nam'>fn</span><span class='op'>,</span> <span class='nam'>proc_pyc</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t290' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>os</span><span class='op'>.</span><span class='nam'>rename</span><span class='op'>(</span><span class='nam'>proc_pyc</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t291' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t292' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_read_pyc</span><span class='op'>(</span><span class='nam'>source</span><span class='op'>,</span> <span class='nam'>pyc</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t293' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Possibly read a pytest pyc containing rewritten code.</span><span 
class='strut'>&nbsp;</span></p> <p id='t294' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t295' class='pln'><span class='str'>&nbsp; &nbsp; Return rewritten code if successful or None if not.</span><span class='strut'>&nbsp;</span></p> <p id='t296' class='pln'><span class='str'>&nbsp; &nbsp; &quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t297' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t298' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span> <span class='op'>=</span> <span class='nam'>open</span><span class='op'>(</span><span class='nam'>pyc</span><span class='op'>,</span> <span class='str'>&quot;rb&quot;</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t299' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>IOError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t300' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t301' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t302' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t303' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mtime</span> <span class='op'>=</span> <span class='nam'>int</span><span class='op'>(</span><span class='nam'>source</span><span class='op'>.</span><span class='nam'>mtime</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t304' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>data</span> <span 
class='op'>=</span> <span class='nam'>fp</span><span class='op'>.</span><span class='nam'>read</span><span class='op'>(</span><span class='num'>8</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t305' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>EnvironmentError</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t306' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t307' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Check for invalid or out of date pyc file.</span><span class='strut'>&nbsp;</span></p> <p id='t308' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='op'>(</span><span class='nam'>len</span><span class='op'>(</span><span class='nam'>data</span><span class='op'>)</span> <span class='op'>!=</span> <span class='num'>8</span> <span class='key'>or</span> <span class='nam'>data</span><span class='op'>[</span><span class='op'>:</span><span class='num'>4</span><span class='op'>]</span> <span class='op'>!=</span> <span class='nam'>imp</span><span class='op'>.</span><span class='nam'>get_magic</span><span class='op'>(</span><span class='op'>)</span> <span class='key'>or</span><span class='strut'>&nbsp;</span></p> <p id='t309' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>struct</span><span class='op'>.</span><span class='nam'>unpack</span><span class='op'>(</span><span class='str'>&quot;&lt;l&quot;</span><span class='op'>,</span> <span class='nam'>data</span><span class='op'>[</span><span class='num'>4</span><span class='op'>:</span><span class='op'>]</span><span class='op'>)</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span> <span class='op'>!=</span> <span class='nam'>mtime</span><span class='op'>)</span><span 
class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t310' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t311' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>co</span> <span class='op'>=</span> <span class='nam'>marshal</span><span class='op'>.</span><span class='nam'>load</span><span class='op'>(</span><span class='nam'>fp</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t312' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='key'>not</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>co</span><span class='op'>,</span> <span class='nam'>types</span><span class='op'>.</span><span class='nam'>CodeType</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t313' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># That&#39;s interesting....</span><span class='strut'>&nbsp;</span></p> <p id='t314' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t315' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>co</span><span class='strut'>&nbsp;</span></p> <p id='t316' class='pln'>&nbsp; &nbsp; <span class='key'>finally</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t317' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fp</span><span class='op'>.</span><span class='nam'>close</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t318' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t319' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t320' class='stm run hide_run'><span 
class='key'>def</span> <span class='nam'>rewrite_asserts</span><span class='op'>(</span><span class='nam'>mod</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t321' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Rewrite the assert statements in mod.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t322' class='stm mis'>&nbsp; &nbsp; <span class='nam'>AssertionRewriter</span><span class='op'>(</span><span class='op'>)</span><span class='op'>.</span><span class='nam'>run</span><span class='op'>(</span><span class='nam'>mod</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t323' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t324' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t325' class='stm run hide_run'><span class='nam'>_saferepr</span> <span class='op'>=</span> <span class='nam'>py</span><span class='op'>.</span><span class='nam'>io</span><span class='op'>.</span><span class='nam'>saferepr</span><span class='strut'>&nbsp;</span></p> <p id='t326' class='stm run hide_run'><span class='key'>from</span> <span class='nam'>_pytest</span><span class='op'>.</span><span class='nam'>assertion</span><span class='op'>.</span><span class='nam'>util</span> <span class='key'>import</span> <span class='nam'>format_explanation</span> <span class='key'>as</span> <span class='nam'>_format_explanation</span> <span class='com'># noqa</span><span class='strut'>&nbsp;</span></p> <p id='t327' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t328' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_should_repr_global_name</span><span class='op'>(</span><span class='nam'>obj</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t329' class='stm mis'>&nbsp; &nbsp; <span class='key'>return</span> <span class='key'>not</span> <span class='nam'>hasattr</span><span 
class='op'>(</span><span class='nam'>obj</span><span class='op'>,</span> <span class='str'>&quot;__name__&quot;</span><span class='op'>)</span> <span class='key'>and</span> <span class='key'>not</span> <span class='nam'>py</span><span class='op'>.</span><span class='nam'>builtin</span><span class='op'>.</span><span class='nam'>callable</span><span class='op'>(</span><span class='nam'>obj</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t330' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t331' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_format_boolop</span><span class='op'>(</span><span class='nam'>explanations</span><span class='op'>,</span> <span class='nam'>is_or</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t332' class='stm mis'>&nbsp; &nbsp; <span class='key'>return</span> <span class='str'>&quot;(&quot;</span> <span class='op'>+</span> <span class='op'>(</span><span class='nam'>is_or</span> <span class='key'>and</span> <span class='str'>&quot; or &quot;</span> <span class='key'>or</span> <span class='str'>&quot; and &quot;</span><span class='op'>)</span><span class='op'>.</span><span class='nam'>join</span><span class='op'>(</span><span class='nam'>explanations</span><span class='op'>)</span> <span class='op'>+</span> <span class='str'>&quot;)&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t333' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t334' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>_call_reprcompare</span><span class='op'>(</span><span class='nam'>ops</span><span class='op'>,</span> <span class='nam'>results</span><span class='op'>,</span> <span class='nam'>expls</span><span class='op'>,</span> <span class='nam'>each_obj</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t335' class='stm run hide_run'>&nbsp; &nbsp; <span 
class='key'>for</span> <span class='nam'>i</span><span class='op'>,</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='key'>in</span> <span class='nam'>zip</span><span class='op'>(</span><span class='nam'>range</span><span class='op'>(</span><span class='nam'>len</span><span class='op'>(</span><span class='nam'>ops</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>results</span><span class='op'>,</span> <span class='nam'>expls</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t336' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>try</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t337' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>done</span> <span class='op'>=</span> <span class='key'>not</span> <span class='nam'>res</span><span class='strut'>&nbsp;</span></p> <p id='t338' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>except</span> <span class='nam'>Exception</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t339' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>done</span> <span class='op'>=</span> <span class='nam'>True</span><span class='strut'>&nbsp;</span></p> <p id='t340' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>done</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t341' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>break</span><span class='strut'>&nbsp;</span></p> <p id='t342' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>util</span><span class='op'>.</span><span class='nam'>_reprcompare</span> <span class='key'>is</span> <span class='key'>not</span> <span class='nam'>None</span><span 
class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t343' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>custom</span> <span class='op'>=</span> <span class='nam'>util</span><span class='op'>.</span><span class='nam'>_reprcompare</span><span class='op'>(</span><span class='nam'>ops</span><span class='op'>[</span><span class='nam'>i</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>each_obj</span><span class='op'>[</span><span class='nam'>i</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>each_obj</span><span class='op'>[</span><span class='nam'>i</span> <span class='op'>+</span> <span class='num'>1</span><span class='op'>]</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t344' class='stm run hide_run'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>custom</span> <span class='key'>is</span> <span class='key'>not</span> <span class='nam'>None</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t345' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>custom</span><span class='strut'>&nbsp;</span></p> <p id='t346' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>expl</span><span class='strut'>&nbsp;</span></p> <p id='t347' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t348' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t349' class='stm run hide_run'><span class='nam'>unary_map</span> <span class='op'>=</span> <span class='op'>{</span><span class='strut'>&nbsp;</span></p> <p id='t350' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Not</span><span class='op'>:</span> <span class='str'>&quot;not %s&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t351' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span 
class='op'>.</span><span class='nam'>Invert</span><span class='op'>:</span> <span class='str'>&quot;~%s&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t352' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>USub</span><span class='op'>:</span> <span class='str'>&quot;-%s&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t353' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>UAdd</span><span class='op'>:</span> <span class='str'>&quot;+%s&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t354' class='pln'><span class='op'>}</span><span class='strut'>&nbsp;</span></p> <p id='t355' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t356' class='stm run hide_run'><span class='nam'>binop_map</span> <span class='op'>=</span> <span class='op'>{</span><span class='strut'>&nbsp;</span></p> <p id='t357' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BitOr</span><span class='op'>:</span> <span class='str'>&quot;|&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t358' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BitXor</span><span class='op'>:</span> <span class='str'>&quot;^&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t359' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BitAnd</span><span class='op'>:</span> <span class='str'>&quot;&amp;&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t360' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>LShift</span><span class='op'>:</span> <span class='str'>&quot;&lt;&lt;&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t361' 
class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>RShift</span><span class='op'>:</span> <span class='str'>&quot;&gt;&gt;&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t362' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Add</span><span class='op'>:</span> <span class='str'>&quot;+&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t363' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Sub</span><span class='op'>:</span> <span class='str'>&quot;-&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t364' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Mult</span><span class='op'>:</span> <span class='str'>&quot;*&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t365' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Div</span><span class='op'>:</span> <span class='str'>&quot;/&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t366' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>FloorDiv</span><span class='op'>:</span> <span class='str'>&quot;//&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t367' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Mod</span><span class='op'>:</span> <span class='str'>&quot;%%&quot;</span><span class='op'>,</span> <span class='com'># escaped for string formatting</span><span class='strut'>&nbsp;</span></p> <p id='t368' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Eq</span><span class='op'>:</span> <span class='str'>&quot;==&quot;</span><span 
class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t369' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>NotEq</span><span class='op'>:</span> <span class='str'>&quot;!=&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t370' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Lt</span><span class='op'>:</span> <span class='str'>&quot;&lt;&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t371' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>LtE</span><span class='op'>:</span> <span class='str'>&quot;&lt;=&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t372' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Gt</span><span class='op'>:</span> <span class='str'>&quot;&gt;&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t373' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>GtE</span><span class='op'>:</span> <span class='str'>&quot;&gt;=&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t374' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Pow</span><span class='op'>:</span> <span class='str'>&quot;**&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t375' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Is</span><span class='op'>:</span> <span class='str'>&quot;is&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t376' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>IsNot</span><span class='op'>:</span> <span class='str'>&quot;is not&quot;</span><span 
class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t377' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>In</span><span class='op'>:</span> <span class='str'>&quot;in&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t378' class='pln'>&nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>NotIn</span><span class='op'>:</span> <span class='str'>&quot;not in&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t379' class='pln'><span class='op'>}</span><span class='strut'>&nbsp;</span></p> <p id='t380' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t381' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t382' class='stm run hide_run'><span class='key'>def</span> <span class='nam'>set_location</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>,</span> <span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>col_offset</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t383' class='pln'>&nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Set node location information recursively.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t384' class='stm mis'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>_fix</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>,</span> <span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>col_offset</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t385' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='str'>&quot;lineno&quot;</span> <span class='key'>in</span> <span class='nam'>node</span><span class='op'>.</span><span class='nam'>_attributes</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t386' class='stm mis'>&nbsp; 
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>node</span><span class='op'>.</span><span class='nam'>lineno</span> <span class='op'>=</span> <span class='nam'>lineno</span><span class='strut'>&nbsp;</span></p> <p id='t387' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='str'>&quot;col_offset&quot;</span> <span class='key'>in</span> <span class='nam'>node</span><span class='op'>.</span><span class='nam'>_attributes</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t388' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>node</span><span class='op'>.</span><span class='nam'>col_offset</span> <span class='op'>=</span> <span class='nam'>col_offset</span><span class='strut'>&nbsp;</span></p> <p id='t389' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>child</span> <span class='key'>in</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>iter_child_nodes</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t390' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>_fix</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>,</span> <span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>col_offset</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t391' class='stm mis'>&nbsp; &nbsp; <span class='nam'>_fix</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>,</span> <span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>col_offset</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t392' class='stm mis'>&nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>node</span><span class='strut'>&nbsp;</span></p> <p id='t393' class='pln'><span 
class='strut'>&nbsp;</span></p> <p id='t394' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t395' class='stm run hide_run'><span class='key'>class</span> <span class='nam'>AssertionRewriter</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>NodeVisitor</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t396' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t397' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>run</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>mod</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t398' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Find all assert statements in *mod* and rewrite them.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t399' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='key'>not</span> <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>body</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t400' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Nothing to do.</span><span class='strut'>&nbsp;</span></p> <p id='t401' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span><span class='strut'>&nbsp;</span></p> <p id='t402' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Insert some special imports at the top of the module but after any</span><span class='strut'>&nbsp;</span></p> <p id='t403' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># docstrings and __future__ imports.</span><span class='strut'>&nbsp;</span></p> <p id='t404' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>aliases</span> <span class='op'>=</span> <span 
class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>alias</span><span class='op'>(</span><span class='nam'>py</span><span class='op'>.</span><span class='nam'>builtin</span><span class='op'>.</span><span class='nam'>builtins</span><span class='op'>.</span><span class='nam'>__name__</span><span class='op'>,</span> <span class='str'>&quot;@py_builtins&quot;</span><span class='op'>)</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t405' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>alias</span><span class='op'>(</span><span class='str'>&quot;_pytest.assertion.rewrite&quot;</span><span class='op'>,</span> <span class='str'>&quot;@pytest_ar&quot;</span><span class='op'>)</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t406' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expect_docstring</span> <span class='op'>=</span> <span class='nam'>True</span><span class='strut'>&nbsp;</span></p> <p id='t407' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pos</span> <span class='op'>=</span> <span class='num'>0</span><span class='strut'>&nbsp;</span></p> <p id='t408' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>lineno</span> <span class='op'>=</span> <span class='num'>0</span><span class='strut'>&nbsp;</span></p> <p id='t409' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>item</span> <span class='key'>in</span> <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>body</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t410' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='op'>(</span><span class='nam'>expect_docstring</span> <span class='key'>and</span> <span class='nam'>isinstance</span><span 
class='op'>(</span><span class='nam'>item</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Expr</span><span class='op'>)</span> <span class='key'>and</span><span class='strut'>&nbsp;</span></p> <p id='t411' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>item</span><span class='op'>.</span><span class='nam'>value</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>)</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t412' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>doc</span> <span class='op'>=</span> <span class='nam'>item</span><span class='op'>.</span><span class='nam'>value</span><span class='op'>.</span><span class='nam'>s</span><span class='strut'>&nbsp;</span></p> <p id='t413' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='str'>&quot;PYTEST_DONT_REWRITE&quot;</span> <span class='key'>in</span> <span class='nam'>doc</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t414' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># The module has disabled assertion rewriting.</span><span class='strut'>&nbsp;</span></p> <p id='t415' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span><span class='strut'>&nbsp;</span></p> <p id='t416' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>lineno</span> <span class='op'>+=</span> <span class='nam'>len</span><span class='op'>(</span><span class='nam'>doc</span><span class='op'>)</span> <span class='op'>-</span> <span class='num'>1</span><span 
class='strut'>&nbsp;</span></p> <p id='t417' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expect_docstring</span> <span class='op'>=</span> <span class='nam'>False</span><span class='strut'>&nbsp;</span></p> <p id='t418' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>elif</span> <span class='op'>(</span><span class='key'>not</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>item</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>ImportFrom</span><span class='op'>)</span> <span class='key'>or</span> <span class='nam'>item</span><span class='op'>.</span><span class='nam'>level</span> <span class='op'>&gt;</span> <span class='num'>0</span> <span class='key'>or</span><span class='strut'>&nbsp;</span></p> <p id='t419' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>item</span><span class='op'>.</span><span class='nam'>module</span> <span class='op'>!=</span> <span class='str'>&quot;__future__&quot;</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t420' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>lineno</span> <span class='op'>=</span> <span class='nam'>item</span><span class='op'>.</span><span class='nam'>lineno</span><span class='strut'>&nbsp;</span></p> <p id='t421' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>break</span><span class='strut'>&nbsp;</span></p> <p id='t422' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pos</span> <span class='op'>+=</span> <span class='num'>1</span><span class='strut'>&nbsp;</span></p> <p id='t423' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>imports</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>ast</span><span 
class='op'>.</span><span class='nam'>Import</span><span class='op'>(</span><span class='op'>[</span><span class='nam'>alias</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>lineno</span><span class='op'>=</span><span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>col_offset</span><span class='op'>=</span><span class='num'>0</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t424' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; <span class='key'>for</span> <span class='nam'>alias</span> <span class='key'>in</span> <span class='nam'>aliases</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t425' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>mod</span><span class='op'>.</span><span class='nam'>body</span><span class='op'>[</span><span class='nam'>pos</span><span class='op'>:</span><span class='nam'>pos</span><span class='op'>]</span> <span class='op'>=</span> <span class='nam'>imports</span><span class='strut'>&nbsp;</span></p> <p id='t426' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Collect asserts.</span><span class='strut'>&nbsp;</span></p> <p id='t427' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>nodes</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>mod</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t428' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>while</span> <span class='nam'>nodes</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t429' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>node</span> <span class='op'>=</span> <span class='nam'>nodes</span><span class='op'>.</span><span class='nam'>pop</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t430' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp; <span class='key'>for</span> <span class='nam'>name</span><span class='op'>,</span> <span class='nam'>field</span> <span class='key'>in</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>iter_fields</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t431' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>field</span><span class='op'>,</span> <span class='nam'>list</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t432' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t433' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>i</span><span class='op'>,</span> <span class='nam'>child</span> <span class='key'>in</span> <span class='nam'>enumerate</span><span class='op'>(</span><span class='nam'>field</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t434' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Assert</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t435' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Transform assert.</span><span 
class='strut'>&nbsp;</span></p> <p id='t436' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new</span><span class='op'>.</span><span class='nam'>extend</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t437' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t438' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t439' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>AST</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t440' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>nodes</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>child</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t441' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>setattr</span><span class='op'>(</span><span 
class='nam'>node</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>,</span> <span class='nam'>new</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t442' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>elif</span> <span class='op'>(</span><span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>field</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>AST</span><span class='op'>)</span> <span class='key'>and</span><span class='strut'>&nbsp;</span></p> <p id='t443' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Don&#39;t recurse into expressions as they can&#39;t contain</span><span class='strut'>&nbsp;</span></p> <p id='t444' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># asserts.</span><span class='strut'>&nbsp;</span></p> <p id='t445' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>not</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>field</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>expr</span><span class='op'>)</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t446' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>nodes</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>field</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t447' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t448' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>variable</span><span class='op'>(</span><span 
class='nam'>self</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t449' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Get a new variable.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t450' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Use a character invalid in python identifiers to avoid clashing.</span><span class='strut'>&nbsp;</span></p> <p id='t451' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>name</span> <span class='op'>=</span> <span class='str'>&quot;@py_assert&quot;</span> <span class='op'>+</span> <span class='nam'>str</span><span class='op'>(</span><span class='nam'>next</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable_counter</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t452' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variables</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t453' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>name</span><span class='strut'>&nbsp;</span></p> <p id='t454' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t455' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>assign</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>expr</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t456' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Give *expr* a name.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t457' class='stm mis'>&nbsp; 
&nbsp; &nbsp; &nbsp; <span class='nam'>name</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t458' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Assign</span><span class='op'>(</span><span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Store</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>expr</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t459' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t460' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t461' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>display</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>expr</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t462' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span 
class='str'>&quot;&quot;&quot;Call py.io.saferepr on the expression.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t463' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>helper</span><span class='op'>(</span><span class='str'>&quot;saferepr&quot;</span><span class='op'>,</span> <span class='nam'>expr</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t464' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t465' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>helper</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>,</span> <span class='op'>*</span><span class='nam'>args</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t466' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Call a helper in this module.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t467' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>py_name</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='str'>&quot;@pytest_ar&quot;</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t468' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>attr</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Attribute</span><span class='op'>(</span><span class='nam'>py_name</span><span class='op'>,</span> <span class='str'>&quot;_&quot;</span> <span class='op'>+</span> <span class='nam'>name</span><span 
class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t469' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Call</span><span class='op'>(</span><span class='nam'>attr</span><span class='op'>,</span> <span class='nam'>list</span><span class='op'>(</span><span class='nam'>args</span><span class='op'>)</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t470' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t471' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>builtin</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t472' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Return the builtin called *name*.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t473' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>builtin_name</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='str'>&quot;@py_builtins&quot;</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t474' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span 
class='nam'>ast</span><span class='op'>.</span><span class='nam'>Attribute</span><span class='op'>(</span><span class='nam'>builtin_name</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t475' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t476' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>expr</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t477' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>specifier</span> <span class='op'>=</span> <span class='str'>&quot;py&quot;</span> <span class='op'>+</span> <span class='nam'>str</span><span class='op'>(</span><span class='nam'>next</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable_counter</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t478' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_specifiers</span><span class='op'>[</span><span class='nam'>specifier</span><span class='op'>]</span> <span class='op'>=</span> <span class='nam'>expr</span><span class='strut'>&nbsp;</span></p> <p id='t479' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='str'>&quot;%(&quot;</span> <span class='op'>+</span> <span class='nam'>specifier</span> <span class='op'>+</span> <span class='str'>&quot;)s&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t480' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t481' class='stm run 
hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>push_format_context</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t482' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_specifiers</span> <span class='op'>=</span> <span class='op'>{</span><span class='op'>}</span><span class='strut'>&nbsp;</span></p> <p id='t483' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>stack</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_specifiers</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t484' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t485' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>pop_format_context</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>expl_expr</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t486' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>current</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>stack</span><span class='op'>.</span><span class='nam'>pop</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t487' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>stack</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t488' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span 
class='nam'>explanation_specifiers</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>stack</span><span class='op'>[</span><span class='op'>-</span><span class='num'>1</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t489' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>keys</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>key</span><span class='op'>)</span> <span class='key'>for</span> <span class='nam'>key</span> <span class='key'>in</span> <span class='nam'>current</span><span class='op'>.</span><span class='nam'>keys</span><span class='op'>(</span><span class='op'>)</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t490' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>format_dict</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Dict</span><span class='op'>(</span><span class='nam'>keys</span><span class='op'>,</span> <span class='nam'>list</span><span class='op'>(</span><span class='nam'>current</span><span class='op'>.</span><span class='nam'>values</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t491' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>form</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BinOp</span><span class='op'>(</span><span class='nam'>expl_expr</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Mod</span><span class='op'>(</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>format_dict</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t492' class='stm 
mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>name</span> <span class='op'>=</span> <span class='str'>&quot;@py_format&quot;</span> <span class='op'>+</span> <span class='nam'>str</span><span class='op'>(</span><span class='nam'>next</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable_counter</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t493' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Assign</span><span class='op'>(</span><span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Store</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>form</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t494' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t495' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t496' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>generic_visit</span><span class='op'>(</span><span class='nam'>self</span><span 
class='op'>,</span> <span class='nam'>node</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t497' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='str'>&quot;&quot;&quot;Handle expressions we don&#39;t have custom code for.&quot;&quot;&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t498' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>assert</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>expr</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t499' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>node</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t500' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>display</span><span class='op'>(</span><span class='nam'>res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t501' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t502' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_Assert</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>assert_</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t503' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>assert_</span><span 
class='op'>.</span><span class='nam'>msg</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t504' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># There&#39;s already a message. Don&#39;t mess with it.</span><span class='strut'>&nbsp;</span></p> <p id='t505' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='op'>[</span><span class='nam'>assert_</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t506' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t507' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>cond_chain</span> <span class='op'>=</span> <span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t508' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variables</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t509' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable_counter</span> <span class='op'>=</span> <span class='nam'>itertools</span><span class='op'>.</span><span class='nam'>count</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t510' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>stack</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t511' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span 
class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t512' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>push_format_context</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t513' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Rewrite assert into a bunch of statements.</span><span class='strut'>&nbsp;</span></p> <p id='t514' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>top_condition</span><span class='op'>,</span> <span class='nam'>explanation</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>assert_</span><span class='op'>.</span><span class='nam'>test</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t515' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Create failure message.</span><span class='strut'>&nbsp;</span></p> <p id='t516' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>body</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span><span class='strut'>&nbsp;</span></p> <p id='t517' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>negation</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>UnaryOp</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Not</span><span class='op'>(</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>top_condition</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t518' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span 
class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>If</span><span class='op'>(</span><span class='nam'>negation</span><span class='op'>,</span> <span class='nam'>body</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t519' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>explanation</span> <span class='op'>=</span> <span class='str'>&quot;assert &quot;</span> <span class='op'>+</span> <span class='nam'>explanation</span><span class='strut'>&nbsp;</span></p> <p id='t520' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>template</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>explanation</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t521' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>msg</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>pop_format_context</span><span class='op'>(</span><span class='nam'>template</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t522' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fmt</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>helper</span><span class='op'>(</span><span class='str'>&quot;format_explanation&quot;</span><span class='op'>,</span> <span class='nam'>msg</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t523' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>err_name</span> <span class='op'>=</span> <span class='nam'>ast</span><span 
class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='str'>&quot;AssertionError&quot;</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t524' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>exc</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Call</span><span class='op'>(</span><span class='nam'>err_name</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>fmt</span><span class='op'>]</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t525' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>sys</span><span class='op'>.</span><span class='nam'>version_info</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span> <span class='op'>&gt;=</span> <span class='num'>3</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t526' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>raise_</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Raise</span><span class='op'>(</span><span class='nam'>exc</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t527' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t528' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>raise_</span> <span class='op'>=</span> <span 
class='nam'>ast</span><span class='op'>.</span><span class='nam'>Raise</span><span class='op'>(</span><span class='nam'>exc</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t529' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>body</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>raise_</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t530' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Clear temporary variables by setting them to None.</span><span class='strut'>&nbsp;</span></p> <p id='t531' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variables</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t532' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>variables</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Store</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t533' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; <span class='key'>for</span> <span class='nam'>name</span> <span class='key'>in</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variables</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t534' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>clear</span> <span class='op'>=</span> <span class='nam'>ast</span><span 
class='op'>.</span><span class='nam'>Assign</span><span class='op'>(</span><span class='nam'>variables</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='str'>&quot;None&quot;</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t535' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>clear</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t536' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Fix line numbers.</span><span class='strut'>&nbsp;</span></p> <p id='t537' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>stmt</span> <span class='key'>in</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t538' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>set_location</span><span class='op'>(</span><span class='nam'>stmt</span><span class='op'>,</span> <span class='nam'>assert_</span><span class='op'>.</span><span class='nam'>lineno</span><span class='op'>,</span> <span class='nam'>assert_</span><span class='op'>.</span><span class='nam'>col_offset</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t539' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='strut'>&nbsp;</span></p> <p id='t540' class='pln'><span 
class='strut'>&nbsp;</span></p> <p id='t541' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_Name</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t542' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Display the repr of the name if it&#39;s a local variable or</span><span class='strut'>&nbsp;</span></p> <p id='t543' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># _should_repr_global_name() thinks it&#39;s acceptable.</span><span class='strut'>&nbsp;</span></p> <p id='t544' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>locs</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Call</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>builtin</span><span class='op'>(</span><span class='str'>&quot;locals&quot;</span><span class='op'>)</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t545' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>inlocs</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Compare</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>.</span><span class='nam'>id</span><span class='op'>)</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>In</span><span 
class='op'>(</span><span class='op'>)</span><span class='op'>]</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>locs</span><span class='op'>]</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t546' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>dorepr</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>helper</span><span class='op'>(</span><span class='str'>&quot;should_repr_global_name&quot;</span><span class='op'>,</span> <span class='nam'>name</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t547' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>test</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BoolOp</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Or</span><span class='op'>(</span><span class='op'>)</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>inlocs</span><span class='op'>,</span> <span class='nam'>dorepr</span><span class='op'>]</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t548' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expr</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>IfExp</span><span class='op'>(</span><span class='nam'>test</span><span class='op'>,</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>display</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>name</span><span class='op'>.</span><span class='nam'>id</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t549' class='stm mis'>&nbsp; 
&nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>name</span><span class='op'>,</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>expr</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t550' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t551' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_BoolOp</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>boolop</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t552' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res_var</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t553' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl_list</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>List</span><span class='op'>(</span><span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t554' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>app</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Attribute</span><span class='op'>(</span><span class='nam'>expl_list</span><span class='op'>,</span> <span class='str'>&quot;append&quot;</span><span class='op'>,</span> <span 
class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t555' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>is_or</span> <span class='op'>=</span> <span class='nam'>int</span><span class='op'>(</span><span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>boolop</span><span class='op'>.</span><span class='nam'>op</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Or</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t556' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>body</span> <span class='op'>=</span> <span class='nam'>save</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='strut'>&nbsp;</span></p> <p id='t557' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fail_save</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span><span class='strut'>&nbsp;</span></p> <p id='t558' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>levels</span> <span class='op'>=</span> <span class='nam'>len</span><span class='op'>(</span><span class='nam'>boolop</span><span class='op'>.</span><span class='nam'>values</span><span class='op'>)</span> <span class='op'>-</span> <span class='num'>1</span><span class='strut'>&nbsp;</span></p> <p id='t559' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>push_format_context</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t560' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Process each operand, short-circuiting if needed.</span><span
class='strut'>&nbsp;</span></p> <p id='t561' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>i</span><span class='op'>,</span> <span class='nam'>v</span> <span class='key'>in</span> <span class='nam'>enumerate</span><span class='op'>(</span><span class='nam'>boolop</span><span class='op'>.</span><span class='nam'>values</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t562' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>i</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t563' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>fail_inner</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t564' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># cond is set in a prior loop iteration below</span><span class='strut'>&nbsp;</span></p> <p id='t565' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>If</span><span class='op'>(</span><span class='nam'>cond</span><span class='op'>,</span> <span class='nam'>fail_inner</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>)</span><span class='op'>)</span> <span class='com'># noqa</span><span class='strut'>&nbsp;</span></p> <p id='t566' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span> <span class='op'>=</span> <span class='nam'>fail_inner</span><span class='strut'>&nbsp;</span></p> <p 
id='t567' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>push_format_context</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t568' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>v</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t569' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>body</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Assign</span><span class='op'>(</span><span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>res_var</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Store</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t570' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl_format</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>pop_format_context</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>expl</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t571' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span 
class='nam'>call</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Call</span><span class='op'>(</span><span class='nam'>app</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>expl_format</span><span class='op'>]</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>,</span> <span class='nam'>None</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t572' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Expr</span><span class='op'>(</span><span class='nam'>call</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t573' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>i</span> <span class='op'>&lt;</span> <span class='nam'>levels</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t574' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cond</span> <span class='op'>=</span> <span class='nam'>res</span><span class='strut'>&nbsp;</span></p> <p id='t575' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>is_or</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t576' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>cond</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>UnaryOp</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span 
class='nam'>Not</span><span class='op'>(</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>cond</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t577' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>inner</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t578' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>If</span><span class='op'>(</span><span class='nam'>cond</span><span class='op'>,</span> <span class='nam'>inner</span><span class='op'>,</span> <span class='op'>[</span><span class='op'>]</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t579' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span> <span class='op'>=</span> <span class='nam'>body</span> <span class='op'>=</span> <span class='nam'>inner</span><span class='strut'>&nbsp;</span></p> <p id='t580' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span> <span class='op'>=</span> <span class='nam'>save</span><span class='strut'>&nbsp;</span></p> <p id='t581' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>on_failure</span> <span class='op'>=</span> <span class='nam'>fail_save</span><span class='strut'>&nbsp;</span></p> <p id='t582' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl_template</span> <span class='op'>=</span> <span class='nam'>self</span><span 
class='op'>.</span><span class='nam'>helper</span><span class='op'>(</span><span class='str'>&quot;format_boolop&quot;</span><span class='op'>,</span> <span class='nam'>expl_list</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Num</span><span class='op'>(</span><span class='nam'>is_or</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t583' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>pop_format_context</span><span class='op'>(</span><span class='nam'>expl_template</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t584' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>res_var</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t585' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t586' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_UnaryOp</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>unary</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t587' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pattern</span> <span class='op'>=</span> <span class='nam'>unary_map</span><span class='op'>[</span><span 
class='nam'>unary</span><span class='op'>.</span><span class='nam'>op</span><span class='op'>.</span><span class='nam'>__class__</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t588' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>operand_res</span><span class='op'>,</span> <span class='nam'>operand_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>unary</span><span class='op'>.</span><span class='nam'>operand</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t589' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>UnaryOp</span><span class='op'>(</span><span class='nam'>unary</span><span class='op'>.</span><span class='nam'>op</span><span class='op'>,</span> <span class='nam'>operand_res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t590' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>pattern</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>operand_expl</span><span class='op'>,</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t591' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t592' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_BinOp</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>binop</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t593' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span 
class='nam'>symbol</span> <span class='op'>=</span> <span class='nam'>binop_map</span><span class='op'>[</span><span class='nam'>binop</span><span class='op'>.</span><span class='nam'>op</span><span class='op'>.</span><span class='nam'>__class__</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t594' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>left_expr</span><span class='op'>,</span> <span class='nam'>left_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>binop</span><span class='op'>.</span><span class='nam'>left</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t595' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>right_expr</span><span class='op'>,</span> <span class='nam'>right_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>binop</span><span class='op'>.</span><span class='nam'>right</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t596' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>explanation</span> <span class='op'>=</span> <span class='str'>&quot;(%s %s %s)&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>left_expl</span><span class='op'>,</span> <span class='nam'>symbol</span><span class='op'>,</span> <span class='nam'>right_expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t597' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BinOp</span><span class='op'>(</span><span class='nam'>left_expr</span><span 
class='op'>,</span> <span class='nam'>binop</span><span class='op'>.</span><span class='nam'>op</span><span class='op'>,</span> <span class='nam'>right_expr</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t598' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>explanation</span><span class='strut'>&nbsp;</span></p> <p id='t599' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t600' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_Call</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>call</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t601' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_func</span><span class='op'>,</span> <span class='nam'>func_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>call</span><span class='op'>.</span><span class='nam'>func</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t602' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>arg_expls</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t603' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_args</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t604' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_kwargs</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t605' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_star</span> <span 
class='op'>=</span> <span class='nam'>new_kwarg</span> <span class='op'>=</span> <span class='nam'>None</span><span class='strut'>&nbsp;</span></p> <p id='t606' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>arg</span> <span class='key'>in</span> <span class='nam'>call</span><span class='op'>.</span><span class='nam'>args</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t607' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>arg</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t608' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_args</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>res</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t609' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>arg_expls</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t610' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span class='nam'>keyword</span> <span class='key'>in</span> <span class='nam'>call</span><span class='op'>.</span><span class='nam'>keywords</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t611' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>keyword</span><span 
class='op'>.</span><span class='nam'>value</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t612' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_kwargs</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>keyword</span><span class='op'>(</span><span class='nam'>keyword</span><span class='op'>.</span><span class='nam'>arg</span><span class='op'>,</span> <span class='nam'>res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t613' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>arg_expls</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>keyword</span><span class='op'>.</span><span class='nam'>arg</span> <span class='op'>+</span> <span class='str'>&quot;=&quot;</span> <span class='op'>+</span> <span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t614' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>call</span><span class='op'>.</span><span class='nam'>starargs</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t615' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_star</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>call</span><span class='op'>.</span><span class='nam'>starargs</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t616' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>arg_expls</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='str'>&quot;*&quot;</span> 
<span class='op'>+</span> <span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t617' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>call</span><span class='op'>.</span><span class='nam'>kwargs</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t618' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_kwarg</span><span class='op'>,</span> <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>call</span><span class='op'>.</span><span class='nam'>kwargs</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t619' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>arg_expls</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='str'>&quot;**&quot;</span> <span class='op'>+</span> <span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t620' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl</span> <span class='op'>=</span> <span class='str'>&quot;%s(%s)&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>func_expl</span><span class='op'>,</span> <span class='str'>&#39;, &#39;</span><span class='op'>.</span><span class='nam'>join</span><span class='op'>(</span><span class='nam'>arg_expls</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t621' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_call</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Call</span><span class='op'>(</span><span class='nam'>new_func</span><span class='op'>,</span> <span class='nam'>new_args</span><span class='op'>,</span> <span 
class='nam'>new_kwargs</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t622' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>new_star</span><span class='op'>,</span> <span class='nam'>new_kwarg</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t623' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>new_call</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t624' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>display</span><span class='op'>(</span><span class='nam'>res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t625' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>outer_expl</span> <span class='op'>=</span> <span class='str'>&quot;%s\n{%s = %s\n}&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>res_expl</span><span class='op'>,</span> <span class='nam'>res_expl</span><span class='op'>,</span> <span class='nam'>expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t626' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>outer_expl</span><span class='strut'>&nbsp;</span></p> <p id='t627' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t628' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_Attribute</span><span 
class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>attr</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t629' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='key'>not</span> <span class='nam'>isinstance</span><span class='op'>(</span><span class='nam'>attr</span><span class='op'>.</span><span class='nam'>ctx</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t630' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>generic_visit</span><span class='op'>(</span><span class='nam'>attr</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t631' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>value</span><span class='op'>,</span> <span class='nam'>value_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>attr</span><span class='op'>.</span><span class='nam'>value</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t632' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>assign</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Attribute</span><span class='op'>(</span><span class='nam'>value</span><span class='op'>,</span> <span class='nam'>attr</span><span class='op'>.</span><span class='nam'>attr</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span 
class='op'>)</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t633' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>display</span><span class='op'>(</span><span class='nam'>res</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t634' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>pat</span> <span class='op'>=</span> <span class='str'>&quot;%s\n{%s = %s.%s\n}&quot;</span><span class='strut'>&nbsp;</span></p> <p id='t635' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl</span> <span class='op'>=</span> <span class='nam'>pat</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>res_expl</span><span class='op'>,</span> <span class='nam'>res_expl</span><span class='op'>,</span> <span class='nam'>value_expl</span><span class='op'>,</span> <span class='nam'>attr</span><span class='op'>.</span><span class='nam'>attr</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t636' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>expl</span><span class='strut'>&nbsp;</span></p> <p id='t637' class='pln'><span class='strut'>&nbsp;</span></p> <p id='t638' class='stm run hide_run'>&nbsp; &nbsp; <span class='key'>def</span> <span class='nam'>visit_Compare</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>,</span> <span class='nam'>comp</span><span class='op'>)</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t639' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span 
class='op'>.</span><span class='nam'>push_format_context</span><span class='op'>(</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t640' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>left_res</span><span class='op'>,</span> <span class='nam'>left_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>comp</span><span class='op'>.</span><span class='nam'>left</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t641' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res_variables</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>variable</span><span class='op'>(</span><span class='op'>)</span> <span class='key'>for</span> <span class='nam'>i</span> <span class='key'>in</span> <span class='nam'>range</span><span class='op'>(</span><span class='nam'>len</span><span class='op'>(</span><span class='nam'>comp</span><span class='op'>.</span><span class='nam'>ops</span><span class='op'>)</span><span class='op'>)</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t642' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>load_names</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>v</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span> <span class='key'>for</span> <span class='nam'>v</span> <span class='key'>in</span> <span class='nam'>res_variables</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t643' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>store_names</span> 
<span class='op'>=</span> <span class='op'>[</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Name</span><span class='op'>(</span><span class='nam'>v</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Store</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span> <span class='key'>for</span> <span class='nam'>v</span> <span class='key'>in</span> <span class='nam'>res_variables</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t644' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>it</span> <span class='op'>=</span> <span class='nam'>zip</span><span class='op'>(</span><span class='nam'>range</span><span class='op'>(</span><span class='nam'>len</span><span class='op'>(</span><span class='nam'>comp</span><span class='op'>.</span><span class='nam'>ops</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>comp</span><span class='op'>.</span><span class='nam'>ops</span><span class='op'>,</span> <span class='nam'>comp</span><span class='op'>.</span><span class='nam'>comparators</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t645' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expls</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t646' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>syms</span> <span class='op'>=</span> <span class='op'>[</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t647' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>results</span> <span class='op'>=</span> <span class='op'>[</span><span class='nam'>left_res</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t648' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>for</span> <span 
class='nam'>i</span><span class='op'>,</span> <span class='nam'>op</span><span class='op'>,</span> <span class='nam'>next_operand</span> <span class='key'>in</span> <span class='nam'>it</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t649' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>next_res</span><span class='op'>,</span> <span class='nam'>next_expl</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>visit</span><span class='op'>(</span><span class='nam'>next_operand</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t650' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>results</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>next_res</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t651' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>sym</span> <span class='op'>=</span> <span class='nam'>binop_map</span><span class='op'>[</span><span class='nam'>op</span><span class='op'>.</span><span class='nam'>__class__</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t652' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>syms</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>sym</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t653' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl</span> <span class='op'>=</span> <span class='str'>&quot;%s %s %s&quot;</span> <span class='op'>%</span> <span class='op'>(</span><span class='nam'>left_expl</span><span class='op'>,</span> <span class='nam'>sym</span><span 
class='op'>,</span> <span class='nam'>next_expl</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t654' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expls</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Str</span><span class='op'>(</span><span class='nam'>expl</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t655' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res_expr</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Compare</span><span class='op'>(</span><span class='nam'>left_res</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>op</span><span class='op'>]</span><span class='op'>,</span> <span class='op'>[</span><span class='nam'>next_res</span><span class='op'>]</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t656' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>self</span><span class='op'>.</span><span class='nam'>statements</span><span class='op'>.</span><span class='nam'>append</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Assign</span><span class='op'>(</span><span class='op'>[</span><span class='nam'>store_names</span><span class='op'>[</span><span class='nam'>i</span><span class='op'>]</span><span class='op'>]</span><span class='op'>,</span> <span class='nam'>res_expr</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t657' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>left_res</span><span class='op'>,</span> <span class='nam'>left_expl</span> <span class='op'>=</span> <span class='nam'>next_res</span><span class='op'>,</span> <span 
class='nam'>next_expl</span><span class='strut'>&nbsp;</span></p> <p id='t658' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='com'># Use pytest.assertion.util._reprcompare if that&#39;s available.</span><span class='strut'>&nbsp;</span></p> <p id='t659' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>expl_call</span> <span class='op'>=</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>helper</span><span class='op'>(</span><span class='str'>&quot;call_reprcompare&quot;</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t660' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Tuple</span><span class='op'>(</span><span class='nam'>syms</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t661' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Tuple</span><span class='op'>(</span><span class='nam'>load_names</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t662' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Tuple</span><span class='op'>(</span><span class='nam'>expls</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span 
class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>,</span><span class='strut'>&nbsp;</span></p> <p id='t663' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Tuple</span><span class='op'>(</span><span class='nam'>results</span><span class='op'>,</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>Load</span><span class='op'>(</span><span class='op'>)</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t664' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>if</span> <span class='nam'>len</span><span class='op'>(</span><span class='nam'>comp</span><span class='op'>.</span><span class='nam'>ops</span><span class='op'>)</span> <span class='op'>&gt;</span> <span class='num'>1</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t665' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>ast</span><span class='op'>.</span><span class='nam'>BoolOp</span><span class='op'>(</span><span class='nam'>ast</span><span class='op'>.</span><span class='nam'>And</span><span class='op'>(</span><span class='op'>)</span><span class='op'>,</span> <span class='nam'>load_names</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> <p id='t666' class='pln'>&nbsp; &nbsp; &nbsp; &nbsp; <span class='key'>else</span><span class='op'>:</span><span class='strut'>&nbsp;</span></p> <p id='t667' class='stm mis'>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <span class='nam'>res</span> <span class='op'>=</span> <span class='nam'>load_names</span><span class='op'>[</span><span class='num'>0</span><span class='op'>]</span><span class='strut'>&nbsp;</span></p> <p id='t668' class='stm mis'>&nbsp; 
&nbsp; &nbsp; &nbsp; <span class='key'>return</span> <span class='nam'>res</span><span class='op'>,</span> <span class='nam'>self</span><span class='op'>.</span><span class='nam'>explanation_param</span><span class='op'>(</span><span class='nam'>self</span><span class='op'>.</span><span class='nam'>pop_format_context</span><span class='op'>(</span><span class='nam'>expl_call</span><span class='op'>)</span><span class='op'>)</span><span class='strut'>&nbsp;</span></p> </td> </tr> </table> </div> <div id='footer'> <div class='content'> <p> <a class='nav' href='index.html'>&#xab; index</a> &nbsp; &nbsp; <a class='nav' href='http://nedbatchelder.com/code/coverage'>coverage.py v3.7.1</a> </p> </div> </div> </body> </html>
Juniper/py-space-platform
jnpr/space/test/htmlcov/_Library_Python_2_7_site-packages__pytest_assertion_rewrite.html
HTML
apache-2.0
228,367
# HessianKit v.2.0.0

HessianKit is a framework for Objective-C 2.0 that allows OS X 10.5+ and iOS 4.0+ applications to seamlessly communicate with Hessian web services. Hessian web services can be implemented using Java, .NET and [more](http://hessian.caucho.com/).

HessianKit provides typed proxy objects to forward calls to, and handle responses from, any Hessian web service, just as if they were local objects. HessianKit also provides functionality to automatically generate classes for published value objects. Both web service proxies and value objects are defined as basic Objective-C protocols.

### Installation

The easiest way to get HessianKit installed is to use [CocoaPods](http://cocoapods.org/). Just add this line to your `Podfile`:

```ruby
pod "HessianKit", "~> 2.0.0"
```

### Documentation

Documentation can be found at [CocoaDocs](http://cocoadocs.org/docsets/HessianKit/2.0.0). It can also be generated using [appledoc](http://gentlebytes.com/appledoc/) or the native [headerdoc2html](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/headerdoc2html.1.html) utility.

### Contents

The HessianKit project consists of several targets:

* __Documentation__ generates headerdoc documentation for the OS X and iOS APIs.
* __HessianKit__ generates a framework to be bundled with applications for Mac OS X 10.5 and later, includes documentation.
* __StaticHessianKit__ generates a static library to be linked with applications for iOS 4.0 and later.
* __MacTests__ executes unit tests for the HessianKit target against Mac OS X 10.5 and later.
* __iPhoneTests__ executes unit tests for StaticHessianKit against iOS 4.0 and later.

### License

Copyright 2008-2009 Fredrik Olsson, Cocoway. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
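Since both service proxies and value objects are plain Objective-C protocols, a call site can be sketched roughly as below. This is an illustrative sketch only: the `MathService` protocol and endpoint URL are invented, and the `CWHessianConnection` factory selector shown here is an assumption rather than the verified HessianKit API — consult the generated headerdoc documentation for the exact method names.

```objc
// Hypothetical protocol describing a remote Hessian API.
@protocol MathService
- (int)add:(int)a to:(int)b;
@end

// Obtain a typed proxy and call it as if it were a local object.
// CWHessianConnection and +proxyWithURL:protocol: are assumed names.
NSURL *url = [NSURL URLWithString:@"http://example.com/hessian/math"];
id<MathService> service = [CWHessianConnection proxyWithURL:url
                                                   protocol:@protocol(MathService)];
int sum = [service add:2 to:3];
```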
sxua/HessianKit
README.md
Markdown
apache-2.0
2,340
package pl.zankowski.iextrading4j.hist.tops.message;

import nl.jqno.equalsverifier.EqualsVerifier;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import pl.zankowski.iextrading4j.api.util.ToStringVerifier;
import pl.zankowski.iextrading4j.hist.api.IEXMessageType;
import pl.zankowski.iextrading4j.hist.api.field.IEXPrice;
import pl.zankowski.iextrading4j.hist.api.util.IEXByteTestUtil;
import pl.zankowski.iextrading4j.hist.tops.trading.IEXQuoteUpdateMessage;

import static org.assertj.core.api.Assertions.assertThat;
import static pl.zankowski.iextrading4j.hist.tops.message.builder.IEXQuoteUpdateMessageDataBuilder.defaultQuoteMessage;
import static pl.zankowski.iextrading4j.hist.tops.message.builder.IEXQuoteUpdateMessageDataBuilder.quoteMessage;

class IEXQuoteUpdateMessageTest {

    @Disabled
    @Test
    void shouldSuccessfullyCreateQuoteUpdateInstance() {
        final IEXMessageType messageType = IEXMessageType.QUOTE_UPDATE;
        final byte messageFlag = -64;
        final long timestamp = 123456789L;
        final String symbol = "AAPL";
        final int bidSize = 100;
        final IEXPrice bidPrice = new IEXPrice(1234565L);
        final IEXPrice askPrice = new IEXPrice(1234567L);
        final int askSize = 200;

        final byte[] data = IEXByteTestUtil.prepareBytes(IEXQuoteUpdateMessage.LENGTH, messageType.getCode(),
                messageFlag, timestamp, symbol, bidSize, bidPrice.getNumber(), askPrice.getNumber(), askSize);

        final IEXQuoteUpdateMessage iexQuoteUpdateMessage = IEXQuoteUpdateMessage.createIEXMessage(data);

        assertThat(iexQuoteUpdateMessage.getMessageType()).isEqualTo(messageType);
        assertThat(iexQuoteUpdateMessage.isHalted()).isTrue();
        assertThat(iexQuoteUpdateMessage.isPrePostMarketSession()).isFalse();
        assertThat(iexQuoteUpdateMessage.getTimestamp()).isEqualTo(timestamp);
        assertThat(iexQuoteUpdateMessage.getSymbol()).isEqualTo(symbol);
        assertThat(iexQuoteUpdateMessage.getBidSize()).isEqualTo(bidSize);
        assertThat(iexQuoteUpdateMessage.getBidPrice()).isEqualTo(bidPrice);
        assertThat(iexQuoteUpdateMessage.getAskPrice()).isEqualTo(askPrice);
        assertThat(iexQuoteUpdateMessage.getAskSize()).isEqualTo(askSize);
    }

    @Disabled
    @Test
    void testIsPrePostMarketSession() {
        final IEXQuoteUpdateMessage iexQuoteUpdateMessage = quoteMessage()
                .withFlag((byte) -96)
                .build();

        assertThat(iexQuoteUpdateMessage.isHalted()).isFalse();
        assertThat(iexQuoteUpdateMessage.isPrePostMarketSession()).isTrue();
    }

    @Test
    void equalsContract() {
        EqualsVerifier.forClass(IEXQuoteUpdateMessage.class)
                .usingGetClass()
                .verify();
    }

    @Test
    void toStringVerification() {
        ToStringVerifier.forObject(defaultQuoteMessage())
                .verify();
    }

}
WojciechZankowski/iextrading4j-hist
iextrading4j-hist-tops/src/test/java/pl/zankowski/iextrading4j/hist/tops/message/IEXQuoteUpdateMessageTest.java
Java
apache-2.0
2,942
# TetrisClone
live106/TetrisClone
README.md
Markdown
apache-2.0
14
/**
 * Copyright 2017 Telerik AD
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
(function(f){
    if (typeof define === 'function' && define.amd) {
        define(["kendo.core"], f);
    } else {
        f();
    }
}(function(){
(function( window, undefined ) {
    kendo.cultures["vi"] = {
        name: "vi",
        numberFormat: {
            pattern: ["-n"],
            decimals: 2,
            ",": ".",
            ".": ",",
            groupSize: [3],
            percent: {
                pattern: ["-n %","n %"],
                decimals: 2,
                ",": ".",
                ".": ",",
                groupSize: [3],
                symbol: "%"
            },
            currency: {
                name: "",
                abbr: "",
                pattern: ["-n $","n $"],
                decimals: 2,
                ",": ".",
                ".": ",",
                groupSize: [3],
                symbol: "₫"
            }
        },
        calendars: {
            standard: {
                days: {
                    names: ["Chủ Nhật","Thứ Hai","Thứ Ba","Thứ Tư","Thứ Năm","Thứ Sáu","Thứ Bảy"],
                    namesAbbr: ["CN","T2","T3","Tư","Năm","Sáu","Bảy"],
                    namesShort: ["C","H","B","T","N","S","B"]
                },
                months: {
                    names: ["Tháng Giêng","Tháng Hai","Tháng Ba","Tháng Tư","Tháng Năm","Tháng Sáu","Tháng Bảy","Tháng Tám","Tháng Chín","Tháng Mười","Tháng Mười Một","Tháng Mười Hai"],
                    namesAbbr: ["Thg1","Thg2","Thg3","Thg4","Thg5","Thg6","Thg7","Thg8","Thg9","Thg10","Thg11","Thg12"]
                },
                AM: ["SA","sa","SA"],
                PM: ["CH","ch","CH"],
                patterns: {
                    d: "dd/MM/yyyy",
                    D: "dd MMMM yyyy",
                    F: "dd MMMM yyyy h:mm:ss tt",
                    g: "dd/MM/yyyy h:mm tt",
                    G: "dd/MM/yyyy h:mm:ss tt",
                    m: "dd MMMM",
                    M: "dd MMMM",
                    s: "yyyy'-'MM'-'dd'T'HH':'mm':'ss",
                    t: "h:mm tt",
                    T: "h:mm:ss tt",
                    u: "yyyy'-'MM'-'dd HH':'mm':'ss'Z'",
                    y: "MMMM yyyy",
                    Y: "MMMM yyyy"
                },
                "/": "/",
                ":": ":",
                firstDay: 1
            }
        }
    }
})(this);
}));
xiangxik/castle-platform
castle-themes/castle-theme-uikit/src/main/resources/META-INF/bower_components/kendo-ui/src/js/cultures/kendo.culture.vi.js
JavaScript
apache-2.0
6,655
# Mucotrichidium Foissner, Oleksiv & Müller, 1990 GENUS

#### Status
ACCEPTED

#### According to
The Catalogue of Life, 3rd January 2011

#### Published in
Arch Protistenkd 138 (3): 196.

#### Original name
null

### Remarks
null
mdoering/backbone
life/Protozoa/Ciliophora/Hypotrichea/Stichotrichida/Amphisiellidae/Mucotrichidium/README.md
Markdown
apache-2.0
230
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package config

import (
	"encoding/json"
	"io/ioutil"
	"net/url"
	"reflect"
	"strings"
	"testing"
	"time"

	"github.com/prometheus/common/model"
	"gopkg.in/yaml.v2"
)

var expectedConf = &Config{
	GlobalConfig: GlobalConfig{
		ScrapeInterval:     model.Duration(15 * time.Second),
		ScrapeTimeout:      DefaultGlobalConfig.ScrapeTimeout,
		EvaluationInterval: model.Duration(30 * time.Second),

		ExternalLabels: model.LabelSet{
			"monitor": "codelab",
			"foo":     "bar",
		},
	},

	RuleFiles: []string{
		"testdata/first.rules",
		"/absolute/second.rules",
		"testdata/my/*.rules",
	},

	ScrapeConfigs: []*ScrapeConfig{
		{
			JobName: "prometheus",

			HonorLabels:    true,
			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			BearerTokenFile: "testdata/valid_token_file",

			StaticConfigs: []*TargetGroup{
				{
					Targets: []model.LabelSet{
						{model.AddressLabel: "localhost:9090"},
						{model.AddressLabel: "localhost:9191"},
					},
					Labels: model.LabelSet{
						"my":   "label",
						"your": "label",
					},
				},
			},

			FileSDConfigs: []*FileSDConfig{
				{
					Files:           []string{"foo/*.slow.json", "foo/*.slow.yml", "single/file.yml"},
					RefreshInterval: model.Duration(10 * time.Minute),
				},
				{
					Files:           []string{"bar/*.yaml"},
					RefreshInterval: model.Duration(5 * time.Minute),
				},
			},

			RelabelConfigs: []*RelabelConfig{
				{
					SourceLabels: model.LabelNames{"job", "__meta_dns_name"},
					TargetLabel:  "job",
					Separator:    ";",
					Regex:        MustNewRegexp("(.*)some-[regex]"),
					Replacement:  "foo-${1}",
					Action:       RelabelReplace,
				},
				{
					SourceLabels: model.LabelNames{"abc"},
					TargetLabel:  "cde",
					Separator:    ";",
					Regex:        DefaultRelabelConfig.Regex,
					Replacement:  DefaultRelabelConfig.Replacement,
					Action:       RelabelReplace,
				},
				{
					TargetLabel: "abc",
					Separator:   ";",
					Regex:       DefaultRelabelConfig.Regex,
					Replacement: "static",
					Action:      RelabelReplace,
				},
			},
		},
		{
			JobName: "service-x",

			ScrapeInterval: model.Duration(50 * time.Second),
			ScrapeTimeout:  model.Duration(5 * time.Second),

			BasicAuth: &BasicAuth{
				Username: "admin_name",
				Password: "admin_password",
			},
			MetricsPath: "/my_path",
			Scheme:      "https",

			DNSSDConfigs: []*DNSSDConfig{
				{
					Names: []string{
						"first.dns.address.domain.com",
						"second.dns.address.domain.com",
					},
					RefreshInterval: model.Duration(15 * time.Second),
					Type:            "SRV",
				},
				{
					Names: []string{
						"first.dns.address.domain.com",
					},
					RefreshInterval: model.Duration(30 * time.Second),
					Type:            "SRV",
				},
			},

			RelabelConfigs: []*RelabelConfig{
				{
					SourceLabels: model.LabelNames{"job"},
					Regex:        MustNewRegexp("(.*)some-[regex]"),
					Separator:    ";",
					Replacement:  DefaultRelabelConfig.Replacement,
					Action:       RelabelDrop,
				},
				{
					SourceLabels: model.LabelNames{"__address__"},
					TargetLabel:  "__tmp_hash",
					Regex:        DefaultRelabelConfig.Regex,
					Replacement:  DefaultRelabelConfig.Replacement,
					Modulus:      8,
					Separator:    ";",
					Action:       RelabelHashMod,
				},
				{
					SourceLabels: model.LabelNames{"__tmp_hash"},
					Regex:        MustNewRegexp("1"),
					Separator:    ";",
					Replacement:  DefaultRelabelConfig.Replacement,
					Action:       RelabelKeep,
				},
				{
					Regex:       MustNewRegexp("1"),
					Separator:   ";",
					Replacement: DefaultRelabelConfig.Replacement,
					Action:      RelabelLabelMap,
				},
			},

			MetricRelabelConfigs: []*RelabelConfig{
				{
					SourceLabels: model.LabelNames{"__name__"},
					Regex:        MustNewRegexp("expensive_metric.*"),
					Separator:    ";",
					Replacement:  DefaultRelabelConfig.Replacement,
					Action:       RelabelDrop,
				},
			},
		},
		{
			JobName: "service-y",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			ConsulSDConfigs: []*ConsulSDConfig{
				{
					Server:       "localhost:1234",
					Services:     []string{"nginx", "cache", "mysql"},
					TagSeparator: DefaultConsulSDConfig.TagSeparator,
					Scheme:       DefaultConsulSDConfig.Scheme,
				},
			},
		},
		{
			JobName: "service-z",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  model.Duration(10 * time.Second),

			MetricsPath: "/metrics",
			Scheme:      "http",

			TLSConfig: TLSConfig{
				CertFile: "testdata/valid_cert_file",
				KeyFile:  "testdata/valid_key_file",
			},

			BearerToken: "avalidtoken",
		},
		{
			JobName: "service-kubernetes",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			KubernetesSDConfigs: []*KubernetesSDConfig{
				{
					APIServers: []URL{kubernetesSDHostURL()},
					Role:       KubernetesRoleEndpoint,
					BasicAuth: &BasicAuth{
						Username: "myusername",
						Password: "mypassword",
					},
					RequestTimeout: model.Duration(10 * time.Second),
					RetryInterval:  model.Duration(1 * time.Second),
				},
			},
		},
		{
			JobName: "service-marathon",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			MarathonSDConfigs: []*MarathonSDConfig{
				{
					Servers: []string{
						"http://marathon.example.com:8080",
					},
					RefreshInterval: model.Duration(30 * time.Second),
				},
			},
		},
		{
			JobName: "service-ec2",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			EC2SDConfigs: []*EC2SDConfig{
				{
					Region:          "us-east-1",
					AccessKey:       "access",
					SecretKey:       "secret",
					RefreshInterval: model.Duration(60 * time.Second),
					Port:            80,
				},
			},
		},
		{
			JobName: "service-azure",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			AzureSDConfigs: []*AzureSDConfig{
				{
					SubscriptionID:  "11AAAA11-A11A-111A-A111-1111A1111A11",
					TenantID:        "BBBB222B-B2B2-2B22-B222-2BB2222BB2B2",
					ClientID:        "333333CC-3C33-3333-CCC3-33C3CCCCC33C",
					ClientSecret:    "nAdvAK2oBuVym4IXix",
					RefreshInterval: model.Duration(5 * time.Minute),
					Port:            9100,
				},
			},
		},
		{
			JobName: "service-nerve",

			ScrapeInterval: model.Duration(15 * time.Second),
			ScrapeTimeout:  DefaultGlobalConfig.ScrapeTimeout,

			MetricsPath: DefaultScrapeConfig.MetricsPath,
			Scheme:      DefaultScrapeConfig.Scheme,

			NerveSDConfigs: []*NerveSDConfig{
				{
					Servers: []string{"localhost"},
					Paths:   []string{"/monitoring"},
					Timeout: model.Duration(10 * time.Second),
				},
			},
		},
	},
	original: "",
}

func TestLoadConfig(t *testing.T) {
	// Parse a valid file that sets a global scrape timeout. This tests whether parsing
	// an overwritten default field in the global config permanently changes the default.
	if _, err := LoadFile("testdata/global_timeout.good.yml"); err != nil {
		t.Errorf("Error parsing %s: %s", "testdata/conf.good.yml", err)
	}

	c, err := LoadFile("testdata/conf.good.yml")
	if err != nil {
		t.Fatalf("Error parsing %s: %s", "testdata/conf.good.yml", err)
	}

	bgot, err := yaml.Marshal(c)
	if err != nil {
		t.Fatalf("%s", err)
	}

	bexp, err := yaml.Marshal(expectedConf)
	if err != nil {
		t.Fatalf("%s", err)
	}
	expectedConf.original = c.original

	if !reflect.DeepEqual(c, expectedConf) {
		t.Fatalf("%s: unexpected config result: \n\n%s\n expected\n\n%s", "testdata/conf.good.yml", bgot, bexp)
	}

	// String method must not reveal authentication credentials.
	s := c.String()
	if strings.Contains(s, "admin_password") {
		t.Fatalf("config's String method reveals authentication credentials.")
	}
}

var expectedErrors = []struct {
	filename string
	errMsg   string
}{
	{
		filename: "jobname.bad.yml",
		errMsg:   `"prom^etheus" is not a valid job name`,
	},
	{
		filename: "jobname_dup.bad.yml",
		errMsg:   `found multiple scrape configs with job name "prometheus"`,
	},
	{
		filename: "scrape_interval.bad.yml",
		errMsg:   `scrape timeout greater than scrape interval`,
	},
	{
		filename: "labelname.bad.yml",
		errMsg:   `"not$allowed" is not a valid label name`,
	},
	{
		filename: "labelname2.bad.yml",
		errMsg:   `"not:allowed" is not a valid label name`,
	},
	{
		filename: "regex.bad.yml",
		errMsg:   "error parsing regexp",
	},
	{
		filename: "modulus_missing.bad.yml",
		errMsg:   "relabel configuration for hashmod requires non-zero modulus",
	},
	{
		filename: "rules.bad.yml",
		errMsg:   "invalid rule file path",
	},
	{
		filename: "unknown_attr.bad.yml",
		errMsg:   "unknown fields in scrape_config: consult_sd_configs",
	},
	{
		filename: "bearertoken.bad.yml",
		errMsg:   "at most one of bearer_token & bearer_token_file must be configured",
	},
	{
		filename: "bearertoken_basicauth.bad.yml",
		errMsg:   "at most one of basic_auth, bearer_token & bearer_token_file must be configured",
	},
	{
		filename: "kubernetes_bearertoken.bad.yml",
		errMsg:   "at most one of bearer_token & bearer_token_file must be configured",
	},
	{
		filename: "kubernetes_role.bad.yml",
		errMsg:   "role",
	},
	{
		filename: "kubernetes_bearertoken_basicauth.bad.yml",
		errMsg:   "at most one of basic_auth, bearer_token & bearer_token_file must be configured",
	},
	{
		filename: "marathon_no_servers.bad.yml",
		errMsg:   "Marathon SD config must contain at least one Marathon server",
	},
	{
		filename: "url_in_targetgroup.bad.yml",
		errMsg:   "\"http://bad\" is not a valid hostname",
	},
}

func TestBadConfigs(t *testing.T) {
	for _, ee := range expectedErrors {
		_, err := LoadFile("testdata/" + ee.filename)
		if err == nil {
			t.Errorf("Expected error parsing %s but got none", ee.filename)
			continue
		}
		if !strings.Contains(err.Error(), ee.errMsg) {
			t.Errorf("Expected error for %s to contain %q but got: %s", ee.filename, ee.errMsg, err)
		}
	}
}

func TestBadStaticConfigs(t *testing.T) {
	content, err := ioutil.ReadFile("testdata/static_config.bad.json")
	if err != nil {
		t.Fatal(err)
	}
	var tg TargetGroup
	err = json.Unmarshal(content, &tg)
	if err == nil {
		t.Errorf("Expected unmarshal error but got none.")
	}
}

func TestEmptyConfig(t *testing.T) {
	c, err := Load("")
	if err != nil {
		t.Fatalf("Unexpected error parsing empty config file: %s", err)
	}
	exp := DefaultConfig
	if !reflect.DeepEqual(*c, exp) {
		t.Fatalf("want %v, got %v", exp, c)
	}
}

func TestEmptyGlobalBlock(t *testing.T) {
	c, err := Load("global:\n")
	if err != nil {
		t.Fatalf("Unexpected error parsing empty config file: %s", err)
	}
	exp := DefaultConfig
	exp.original = "global:\n"
	if !reflect.DeepEqual(*c, exp) {
		t.Fatalf("want %v, got %v", exp, c)
	}
}

func kubernetesSDHostURL() URL {
	tURL, _ := url.Parse("https://localhost:1234")
	return URL{URL: tURL}
}
msiebuhr/prometheus
config/config_test.go
GO
apache-2.0
12,202