content_type stringclasses 8 values | main_lang stringclasses 7 values | message stringlengths 1 50 | sha stringlengths 40 40 | patch stringlengths 52 962k | file_count int64 1 300 |
|---|---|---|---|---|---|
Ruby | Ruby | use ohai headers at the top of search results | c7e986d65e21e81a27c01ee4393ef6b897b91c03 | <ide><path>Library/Homebrew/cmd/install.rb
<ide> def install
<ide> else
<ide> ofail e.message
<ide> query = query_regexp(e.name)
<del> puts "Searching formulae..."
<add> ohai "Searching formulae..."
<ide> puts_columns(search_formulae(query))
<del> puts "Searching taps..."
<add> ohai "Searching taps..."
<ide> puts_columns(search_taps(query))
<ide>
<ide> # If they haven't updated in 48 hours (172800 seconds), that
<ide><path>Library/Homebrew/utils.rb
<ide> def issues_for_formula name
<ide>
<ide> def print_pull_requests_matching(query)
<ide> return [] if ENV['HOMEBREW_NO_GITHUB_API']
<del> puts "Searching pull requests..."
<add> ohai "Searching pull requests..."
<ide>
<ide> open_or_closed_prs = issues_matching(query, :type => "pr")
<ide> | 2 |
Text | Text | add @coderigo for thanks! | e98c229e5aa86f0da872583c08f353d974d9cc98 | <ide><path>docs/topics/credits.md
<ide> The following people have helped make REST framework great.
<ide> * Pavel Zinovkin - [pzinovkin]
<ide> * Will Kahn-Greene - [willkg]
<ide> * Kevin Brown - [kevin-brown]
<add>* Rodrigo Martell - [coderigo]
<ide>
<ide> Many thanks to everyone who's contributed to the project.
<ide>
<ide> You can also contact [@_tomchristie][twitter] directly on twitter.
<ide> [gertjanol]: https://github.com/gertjanol
<ide> [cyroxx]: https://github.com/cyroxx
<ide> [pzinovkin]: https://github.com/pzinovkin
<add>[coderigo]: https://github.com/coderigo
<ide> [willkg]: https://github.com/willkg
<ide> [kevin-brown]: https://github.com/kevin-brown | 1 |
Ruby | Ruby | move ar test classes inside the test case | 44e55510547f5aaea78f5f91b82dd3dc5e9bef54 | <ide><path>activerecord/test/cases/associations/eager_singularization_test.rb
<ide> require "cases/helper"
<ide>
<del>class Virus < ActiveRecord::Base
<del> belongs_to :octopus
<del>end
<del>class Octopus < ActiveRecord::Base
<del> has_one :virus
<del>end
<del>class Pass < ActiveRecord::Base
<del> belongs_to :bus
<del>end
<del>class Bus < ActiveRecord::Base
<del> has_many :passes
<del>end
<del>class Mess < ActiveRecord::Base
<del> has_and_belongs_to_many :crises
<del>end
<del>class Crisis < ActiveRecord::Base
<del> has_and_belongs_to_many :messes
<del> has_many :analyses, :dependent => :destroy
<del> has_many :successes, :through => :analyses
<del> has_many :dresses, :dependent => :destroy
<del> has_many :compresses, :through => :dresses
<del>end
<del>class Analysis < ActiveRecord::Base
<del> belongs_to :crisis
<del> belongs_to :success
<del>end
<del>class Success < ActiveRecord::Base
<del> has_many :analyses, :dependent => :destroy
<del> has_many :crises, :through => :analyses
<del>end
<del>class Dress < ActiveRecord::Base
<del> belongs_to :crisis
<del> has_many :compresses
<del>end
<del>class Compress < ActiveRecord::Base
<del> belongs_to :dress
<del>end
<del>
<ide>
<ide> class EagerSingularizationTest < ActiveRecord::TestCase
<add> class Virus < ActiveRecord::Base
<add> belongs_to :octopus
<add> end
<add>
<add> class Octopus < ActiveRecord::Base
<add> has_one :virus
<add> end
<add>
<add> class Pass < ActiveRecord::Base
<add> belongs_to :bus
<add> end
<add>
<add> class Bus < ActiveRecord::Base
<add> has_many :passes
<add> end
<add>
<add> class Mess < ActiveRecord::Base
<add> has_and_belongs_to_many :crises
<add> end
<add>
<add> class Crisis < ActiveRecord::Base
<add> has_and_belongs_to_many :messes
<add> has_many :analyses, :dependent => :destroy
<add> has_many :successes, :through => :analyses
<add> has_many :dresses, :dependent => :destroy
<add> has_many :compresses, :through => :dresses
<add> end
<add>
<add> class Analysis < ActiveRecord::Base
<add> belongs_to :crisis
<add> belongs_to :success
<add> end
<add>
<add> class Success < ActiveRecord::Base
<add> has_many :analyses, :dependent => :destroy
<add> has_many :crises, :through => :analyses
<add> end
<add>
<add> class Dress < ActiveRecord::Base
<add> belongs_to :crisis
<add> has_many :compresses
<add> end
<add>
<add> class Compress < ActiveRecord::Base
<add> belongs_to :dress
<add> end
<ide>
<ide> def setup
<ide> if ActiveRecord::Base.connection.supports_migrations? | 1 |
Text | Text | update docker pull examples | 32eff909b4d3072524041fffc9d43efe87d2116f | <ide><path>docs/reference/commandline/pull.md
<ide> Most of your images will be created on top of a base image from the
<ide> [Docker Hub](https://hub.docker.com) contains many pre-built images that you
<ide> can `pull` and try without needing to define and configure your own.
<ide>
<del>It is also possible to manually specify the path of a registry to pull from.
<del>For example, if you have set up a local registry, you can specify its path to
<del>pull from it. A repository path is similar to a URL, but does not contain
<del>a protocol specifier (`https://`, for example).
<del>
<ide> To download a particular image, or set of images (i.e., a repository),
<del>use `docker pull`:
<del>
<del> $ docker pull debian
<del> # will pull the debian:latest image and its intermediate layers
<del> $ docker pull debian:testing
<del> # will pull the image named debian:testing and any intermediate
<del> # layers it is based on.
<del> $ docker pull debian@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
<del> # will pull the image from the debian repository with the digest
<del> # sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
<del> # and any intermediate layers it is based on.
<del> # (Typically the empty `scratch` image, a MAINTAINER layer,
<del> # and the un-tarred base).
<del> $ docker pull --all-tags centos
<del> # will pull all the images from the centos repository
<del> $ docker pull registry.hub.docker.com/debian
<del> # manually specifies the path to the default Docker registry. This could
<del> # be replaced with the path to a local registry to pull from another source.
<del> # sudo docker pull myhub.com:8080/test-image
<add>use `docker pull`.
<add>
<add>## Examples
<add>
<add>### Pull an image from Docker Hub
<add>
<add>To download a particular image, or set of images (i.e., a repository), use
<add>`docker pull`. If no tag is provided, Docker Engine uses the `:latest` tag as a
<add>default. This command pulls the `debian:latest` image:
<add>
<add>```bash
<add>$ docker pull debian
<add>
<add>Using default tag: latest
<add>latest: Pulling from library/debian
<add>fdd5d7827f33: Pull complete
<add>a3ed95caeb02: Pull complete
<add>Digest: sha256:e7d38b3517548a1c71e41bffe9c8ae6d6d29546ce46bf62159837aad072c90aa
<add>Status: Downloaded newer image for debian:latest
<add>```
<add>
<add>Docker images can consist of multiple layers. In the example above, the image
<add>consists of two layers; `fdd5d7827f33` and `a3ed95caeb02`.
<add>
<add>Layers can be reused by images. For example, the `debian:jessie` image shares
<add>both layers with `debian:latest`. Pulling the `debian:jessie` image therefore
<add>only pulls its metadata, but not its layers, because all layers are already
<add>present locally:
<add>
<add>```bash
<add>$ docker pull debian:jessie
<add>
<add>jessie: Pulling from library/debian
<add>fdd5d7827f33: Already exists
<add>a3ed95caeb02: Already exists
<add>Digest: sha256:a9c958be96d7d40df920e7041608f2f017af81800ca5ad23e327bc402626b58e
<add>Status: Downloaded newer image for debian:jessie
<add>```
<add>
<add>To see which images are present locally, use the [`docker images`](images.md)
<add>command:
<add>
<add>```bash
<add>$ docker images
<add>
<add>REPOSITORY TAG IMAGE ID CREATED SIZE
<add>debian jessie f50f9524513f 5 days ago 125.1 MB
<add>debian latest f50f9524513f 5 days ago 125.1 MB
<add>```
<add>
<add>Docker uses a content-addressable image store, and the image ID is a SHA256
<add>digest covering the image's configuration and layers. In the example above,
<add>`debian:jessie` and `debian:latest` have the same image ID because they are
<add>actually the *same* image tagged with different names. Because they are the
<add>same image, their layers are stored only once and do not consume extra disk
<add>space.
<add>
<add>For more information about images, layers, and the content-addressable store,
<add>refer to [understand images, containers, and storage drivers](../../userguide/storagedriver/imagesandcontainers.md).
<add>
<add>
<add>## Pull an image by digest (immutable identifier)
<add>
<add>So far, you've pulled images by their name (and "tag"). Using names and tags is
<add>a convenient way to work with images. When using tags, you can `docker pull` an
<add>image again to make sure you have the most up-to-date version of that image.
<add>For example, `docker pull ubuntu:14.04` pulls the latest version of the Ubuntu
<add>14.04 image.
<add>
<add>In some cases you don't want images to be updated to newer versions, but prefer
<add>to use a fixed version of an image. Docker enables you to pull an image by its
<add>*digest*. When pulling an image by digest, you specify *exactly* which version
<add>of an image to pull. Doing so, allows you to "pin" an image to that version,
<add>and guarantee that the image you're using is always the same.
<add>
<add>To know the digest of an image, pull the image first. Let's pull the latest
<add>`ubuntu:14.04` image from Docker Hub:
<add>
<add>```bash
<add>$ docker pull ubuntu:14.04
<add>
<add>14.04: Pulling from library/ubuntu
<add>5a132a7e7af1: Pull complete
<add>fd2731e4c50c: Pull complete
<add>28a2f68d1120: Pull complete
<add>a3ed95caeb02: Pull complete
<add>Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>Status: Downloaded newer image for ubuntu:14.04
<add>```
<add>
<add>Docker prints the digest of the image after the pull has finished. In the example
<add>above, the digest of the image is:
<add>
<add> sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>
<add>Docker also prints the digest of an image when *pushing* to a registry. This
<add>may be useful if you want to pin to a version of the image you just pushed.
<add>
<add>A digest takes the place of the tag when pulling an image, for example, to
<add>pull the above image by digest, run the following command:
<add>
<add>```bash
<add>$ docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>
<add>sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2: Pulling from library/ubuntu
<add>5a132a7e7af1: Already exists
<add>fd2731e4c50c: Already exists
<add>28a2f68d1120: Already exists
<add>a3ed95caeb02: Already exists
<add>Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>Status: Downloaded newer image for ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>```
<add>
<add>Digest can also be used in the `FROM` of a Dockerfile, for example:
<add>
<add>```Dockerfile
<add>FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>MAINTAINER some maintainer <maintainer@example.com>
<add>```
<add>
<add>> **Note**: Using this feature "pins" an image to a specific version in time.
<add>> Docker will therefore not pull updated versions of an image, which may include
<add>> security updates. If you want to pull an updated image, you need to change the
<add>> digest accordingly.
<add>
<add>
<add>## Pulling from a different registry
<add>
<add>By default, `docker pull` pulls images from Docker Hub. It is also possible to
<add>manually specify the path of a registry to pull from. For example, if you have
<add>set up a local registry, you can specify its path to pull from it. A registry
<add>path is similar to a URL, but does not contain a protocol specifier (`https://`).
<add>
<add>The following command pulls the `testing/test-image` image from a local registry
<add>listening on port 5000 (`myregistry.local:5000`):
<add>
<add>```bash
<add>$ docker pull myregistry.local:5000/testing/test-image
<add>```
<add>
<add>Docker uses the `https://` protocol to communicate with a registry, unless the
<add>registry is allowed to be accessed over an insecure connection. Refer to the
<add>[insecure registries](daemon.md#insecure-registries) section for more information.
<add>
<add>
<add>## Pull a repository with multiple images
<add>
<add>By default, `docker pull` pulls a *single* image from the registry. A repository
<add>can contain multiple images. To pull all images from a repository, provide the
<add>`-a` (or `--all-tags`) option when using `docker pull`.
<add>
<add>This command pulls all images from the `fedora` repository:
<add>
<add>```bash
<add>$ docker pull --all-tags fedora
<add>
<add>Pulling repository fedora
<add>ad57ef8d78d7: Download complete
<add>105182bb5e8b: Download complete
<add>511136ea3c5a: Download complete
<add>73bd853d2ea5: Download complete
<add>....
<add>
<add>Status: Downloaded newer image for fedora
<add>```
<add>
<add>After the pull has completed use the `docker images` command to see the
<add>images that were pulled. The example below shows all the `fedora` images
<add>that are present locally:
<add>
<add>```bash
<add>$ docker images fedora
<add>
<add>REPOSITORY TAG IMAGE ID CREATED SIZE
<add>fedora rawhide ad57ef8d78d7 5 days ago 359.3 MB
<add>fedora 20 105182bb5e8b 5 days ago 372.7 MB
<add>fedora heisenbug 105182bb5e8b 5 days ago 372.7 MB
<add>fedora latest 105182bb5e8b 5 days ago 372.7 MB
<add>```
<add>
<add>## Canceling a pull
<ide>
<ide> Killing the `docker pull` process, for example by pressing `CTRL-c` while it is
<ide> running in a terminal, will terminate the pull operation.
<add>
<add>```bash
<add>$ docker pull fedora
<add>
<add>Using default tag: latest
<add>latest: Pulling from library/fedora
<add>a3ed95caeb02: Pulling fs layer
<add>236608c7b546: Pulling fs layer
<add>^C
<add>```
<add>
<add>> **Note**: Technically, the Engine terminates a pull operation when the
<add>> connection between the Docker Engine daemon and the Docker Engine client
<add>> initiating the pull is lost. If the connection with the Engine daemon is
<add>> lost for other reasons than a manual interaction, the pull is also aborted.
<ide><path>man/docker-pull.1.md
<ide> This command pulls down an image or a repository from a registry. If
<ide> there is more than one image for a repository (e.g., fedora) then all
<ide> images for that repository name can be pulled down including any tags
<ide> (see the option **-a** or **--all-tags**).
<del>
<add>
<ide> If you do not specify a `REGISTRY_HOST`, the command uses Docker's public
<ide> registry located at `registry-1.docker.io` by default.
<ide>
<ide> registry located at `registry-1.docker.io` by default.
<ide> **--help**
<ide> Print usage statement
<ide>
<del># EXAMPLE
<add># EXAMPLES
<add>
<add>### Pull an image from Docker Hub
<add>
<add>To download a particular image, or set of images (i.e., a repository), use
<add>`docker pull`. If no tag is provided, Docker Engine uses the `:latest` tag as a
<add>default. This command pulls the `debian:latest` image:
<add>
<add> $ docker pull debian
<add>
<add> Using default tag: latest
<add> latest: Pulling from library/debian
<add> fdd5d7827f33: Pull complete
<add> a3ed95caeb02: Pull complete
<add> Digest: sha256:e7d38b3517548a1c71e41bffe9c8ae6d6d29546ce46bf62159837aad072c90aa
<add> Status: Downloaded newer image for debian:latest
<add>
<add>Docker images can consist of multiple layers. In the example above, the image
<add>consists of two layers; `fdd5d7827f33` and `a3ed95caeb02`.
<add>
<add>Layers can be reused by images. For example, the `debian:jessie` image shares
<add>both layers with `debian:latest`. Pulling the `debian:jessie` image therefore
<add>only pulls its metadata, but not its layers, because all layers are already
<add>present locally:
<add>
<add> $ docker pull debian:jessie
<add>
<add> jessie: Pulling from library/debian
<add> fdd5d7827f33: Already exists
<add> a3ed95caeb02: Already exists
<add> Digest: sha256:a9c958be96d7d40df920e7041608f2f017af81800ca5ad23e327bc402626b58e
<add> Status: Downloaded newer image for debian:jessie
<add>
<add>To see which images are present locally, use the **docker-images(1)**
<add>command:
<add>
<add> $ docker images
<add>
<add> REPOSITORY TAG IMAGE ID CREATED SIZE
<add> debian jessie f50f9524513f 5 days ago 125.1 MB
<add> debian latest f50f9524513f 5 days ago 125.1 MB
<add>
<add>Docker uses a content-addressable image store, and the image ID is a SHA256
<add>digest covering the image's configuration and layers. In the example above,
<add>`debian:jessie` and `debian:latest` have the same image ID because they are
<add>actually the *same* image tagged with different names. Because they are the
<add>same image, their layers are stored only once and do not consume extra disk
<add>space.
<add>
<add>For more information about images, layers, and the content-addressable store,
<add>refer to [understand images, containers, and storage drivers](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/)
<add>in the online documentation.
<add>
<add>
<add>## Pull an image by digest (immutable identifier)
<add>
<add>So far, you've pulled images by their name (and "tag"). Using names and tags is
<add>a convenient way to work with images. When using tags, you can `docker pull` an
<add>image again to make sure you have the most up-to-date version of that image.
<add>For example, `docker pull ubuntu:14.04` pulls the latest version of the Ubuntu
<add>14.04 image.
<add>
<add>In some cases you don't want images to be updated to newer versions, but prefer
<add>to use a fixed version of an image. Docker enables you to pull an image by its
<add>*digest*. When pulling an image by digest, you specify *exactly* which version
<add>of an image to pull. Doing so, allows you to "pin" an image to that version,
<add>and guarantee that the image you're using is always the same.
<add>
<add>To know the digest of an image, pull the image first. Let's pull the latest
<add>`ubuntu:14.04` image from Docker Hub:
<add>
<add> $ docker pull ubuntu:14.04
<add>
<add> 14.04: Pulling from library/ubuntu
<add> 5a132a7e7af1: Pull complete
<add> fd2731e4c50c: Pull complete
<add> 28a2f68d1120: Pull complete
<add> a3ed95caeb02: Pull complete
<add> Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add> Status: Downloaded newer image for ubuntu:14.04
<add>
<add>Docker prints the digest of the image after the pull has finished. In the example
<add>above, the digest of the image is:
<add>
<add> sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>
<add>Docker also prints the digest of an image when *pushing* to a registry. This
<add>may be useful if you want to pin to a version of the image you just pushed.
<add>
<add>A digest takes the place of the tag when pulling an image, for example, to
<add>pull the above image by digest, run the following command:
<ide>
<del>## Pull a repository with multiple images with the -a|--all-tags option set to true.
<del>Note that if the image is previously downloaded then the status would be
<del>`Status: Image is up to date for fedora`.
<add> $ docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>
<add> sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2: Pulling from library/ubuntu
<add> 5a132a7e7af1: Already exists
<add> fd2731e4c50c: Already exists
<add> 28a2f68d1120: Already exists
<add> a3ed95caeb02: Already exists
<add> Digest: sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add> Status: Downloaded newer image for ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add>
<add>Digest can also be used in the `FROM` of a Dockerfile, for example:
<add>
<add> FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
<add> MAINTAINER some maintainer <maintainer@example.com>
<add>
<add>> **Note**: Using this feature "pins" an image to a specific version in time.
<add>> Docker will therefore not pull updated versions of an image, which may include
<add>> security updates. If you want to pull an updated image, you need to change the
<add>> digest accordingly.
<add>
<add>## Pulling from a different registry
<add>
<add>By default, `docker pull` pulls images from Docker Hub. It is also possible to
<add>manually specify the path of a registry to pull from. For example, if you have
<add>set up a local registry, you can specify its path to pull from it. A registry
<add>path is similar to a URL, but does not contain a protocol specifier (`https://`).
<add>
<add>The following command pulls the `testing/test-image` image from a local registry
<add>listening on port 5000 (`myregistry.local:5000`):
<add>
<add> $ docker pull myregistry.local:5000/testing/test-image
<add>
<add>Docker uses the `https://` protocol to communicate with a registry, unless the
<add>registry is allowed to be accessed over an insecure connection. Refer to the
<add>[insecure registries](https://docs.docker.com/engine/reference/commandline/daemon/#insecure-registries)
<add>section in the online documentation for more information.
<add>
<add>
<add>## Pull a repository with multiple images
<add>
<add>By default, `docker pull` pulls a *single* image from the registry. A repository
<add>can contain multiple images. To pull all images from a repository, provide the
<add>`-a` (or `--all-tags`) option when using `docker pull`.
<add>
<add>This command pulls all images from the `fedora` repository:
<ide>
<ide> $ docker pull --all-tags fedora
<add>
<ide> Pulling repository fedora
<ide> ad57ef8d78d7: Download complete
<ide> 105182bb5e8b: Download complete
<ide> 511136ea3c5a: Download complete
<ide> 73bd853d2ea5: Download complete
<add> ....
<ide>
<ide> Status: Downloaded newer image for fedora
<ide>
<del> $ docker images
<add>After the pull has completed use the `docker images` command to see the
<add>images that were pulled. The example below shows all the `fedora` images
<add>that are present locally:
<add>
<add> $ docker images fedora
<add>
<ide> REPOSITORY TAG IMAGE ID CREATED SIZE
<ide> fedora rawhide ad57ef8d78d7 5 days ago 359.3 MB
<ide> fedora 20 105182bb5e8b 5 days ago 372.7 MB
<ide> fedora heisenbug 105182bb5e8b 5 days ago 372.7 MB
<ide> fedora latest 105182bb5e8b 5 days ago 372.7 MB
<ide>
<del>## Pull a repository with the -a|--all-tags option set to false (this is the default).
<del>
<del> $ docker pull debian
<del> Using default tag: latest
<del> latest: Pulling from library/debian
<del> 2c49f83e0b13: Pull complete
<del> 4a5e6db8c069: Pull complete
<del>
<del> Status: Downloaded newer image for debian:latest
<del>
<del> $ docker images
<del> REPOSITORY TAG IMAGE ID CREATED SIZE
<del> debian latest 4a5e6db8c069 5 days ago 125.1 MB
<del>
<ide>
<del>## Pull an image, manually specifying path to Docker's public registry and tag
<del>Note that if the image is previously downloaded then the status would be
<del>`Status: Image is up to date for registry.hub.docker.com/fedora:20`
<add>## Canceling a pull
<ide>
<del> $ docker pull registry.hub.docker.com/fedora:20
<del> Pulling repository fedora
<del> 3f2fed40e4b0: Download complete
<del> 511136ea3c5a: Download complete
<del> fd241224e9cf: Download complete
<add>Killing the `docker pull` process, for example by pressing `CTRL-c` while it is
<add>running in a terminal, will terminate the pull operation.
<ide>
<del> Status: Downloaded newer image for registry.hub.docker.com/fedora:20
<add> $ docker pull fedora
<ide>
<del> $ docker images
<del> REPOSITORY TAG IMAGE ID CREATED SIZE
<del> fedora 20 3f2fed40e4b0 4 days ago 372.7 MB
<add> Using default tag: latest
<add> latest: Pulling from library/fedora
<add> a3ed95caeb02: Pulling fs layer
<add> 236608c7b546: Pulling fs layer
<add> ^C
<add>
<add>> **Note**: Technically, the Engine terminates a pull operation when the
<add>> connection between the Docker Engine daemon and the Docker Engine client
<add>> initiating the pull is lost. If the connection with the Engine daemon is
<add>> lost for other reasons than a manual interaction, the pull is also aborted.
<ide>
<ide>
<ide> # HISTORY | 2 |
Python | Python | fix memoryerror on win32 | 8919cb46a4a937781b52c73818b69661bbdb12eb | <ide><path>numpy/core/tests/test_multiarray.py
<ide> def test_zeros_big(self):
<ide> for dt in types:
<ide> d = np.zeros((30 * 1024**2,), dtype=dt)
<ide> assert_(not d.any())
<add> # This test can fail on 32-bit systems due to insufficient
<add> # contiguous memory. Deallocating the previous array increases the
<add> # chance of success.
<add> del(d)
<ide>
<ide> def test_zeros_obj(self):
<ide> # test initialization from PyLong(0) | 1 |
Ruby | Ruby | handle other pk types in postgresql gracefully | 0e00c6b296b48e35fc3997648561f5da7295098a | <ide><path>activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
<ide> class TableDefinition < ActiveRecord::ConnectionAdapters::TableDefinition
<ide> # a record (as primary keys cannot be +nil+). This might be done via the
<ide> # +SecureRandom.uuid+ method and a +before_save+ callback, for instance.
<ide> def primary_key(name, type = :primary_key, options = {})
<del> return super unless type == :uuid
<del> options[:default] = options.fetch(:default, 'uuid_generate_v4()')
<add> return super unless type = :primary_key
<add>
<add> if type == :uuid
<add> options[:default] = options.fetch(:default, 'uuid_generate_v4()')
<add> end
<add>
<ide> options[:primary_key] = true
<ide> column name, type, options
<ide> end
<ide><path>activerecord/test/cases/primary_keys_test.rb
<ide> def test_primary_key_method_with_ansi_quotes
<ide> end
<ide> end
<ide>
<add>if current_adapter?(:PostgreSQLAdapter)
<add> class PrimaryKeyBigSerialTest < ActiveRecord::TestCase
<add> self.use_transactional_fixtures = false
<add>
<add> class Widget < ActiveRecord::Base
<add> end
<add>
<add> def setup
<add> @con = ActiveRecord::Base.connection
<add>
<add> ActiveRecord::Schema.define do
<add> create_table :widgets, id: :bigserial do |t|
<add> end
<add> end
<add> end
<add>
<add> def teardown
<add> ActiveRecord::Schema.define do
<add> drop_table :widgets
<add> end
<add> end
<add>
<add> def test_bigserial_primary_key
<add> widget = Widget.create!
<add>
<add> assert_not_nil widget.id
<add> end
<add> end
<add>end | 2 |
Go | Go | improve consistency in "skip" | a3948d17d330315c832112bfecfc15d5e19511b1 | <ide><path>integration/container/daemon_linux_test.go
<ide> import (
<ide> func TestContainerStartOnDaemonRestart(t *testing.T) {
<ide> skip.If(t, testEnv.IsRemoteDaemon, "cannot start daemon on remote test run")
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon(), "cannot start daemon on remote test run")
<ide> t.Parallel()
<ide>
<ide> d := daemon.New(t)
<ide><path>integration/container/export_test.go
<ide> func TestExportContainerAndImportImage(t *testing.T) {
<ide> // condition, daemon restart is needed after container creation.
<ide> func TestExportContainerAfterDaemonRestart(t *testing.T) {
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> d := daemon.New(t)
<ide> c := d.NewClientT(t)
<ide><path>integration/container/links_linux_test.go
<ide> import (
<ide> )
<ide>
<ide> func TestLinksEtcHostsContentMatch(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> hosts, err := ioutil.ReadFile("/etc/hosts")
<ide> skip.If(t, os.IsNotExist(err))
<ide><path>integration/container/logs_test.go
<ide> import (
<ide> // Makes sure that when following we don't get an EOF error when there are no logs
<ide> func TestLogsFollowTailEmpty(t *testing.T) {
<ide> // FIXME(vdemeester) fails on a e2e run on linux...
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> defer setupTest(t)()
<ide> client := testEnv.APIClient()
<ide> ctx := context.Background()
<ide><path>integration/container/nat_test.go
<ide> import (
<ide>
<ide> func TestNetworkNat(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows", "FIXME")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> defer setupTest(t)()
<ide>
<ide> func TestNetworkNat(t *testing.T) {
<ide>
<ide> func TestNetworkLocalhostTCPNat(t *testing.T) {
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows", "FIXME")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> defer setupTest(t)()
<ide>
<ide> func TestNetworkLocalhostTCPNat(t *testing.T) {
<ide>
<ide> func TestNetworkLoopbackNat(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows", "FIXME")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> defer setupTest(t)()
<ide>
<ide><path>integration/container/remove_test.go
<ide> func getPrefixAndSlashFromDaemonPlatform() (prefix, slash string) {
<ide>
<ide> // Test case for #5244: `docker rm` fails if bind dir doesn't exist anymore
<ide> func TestRemoveContainerWithRemovedVolume(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide>
<ide> defer setupTest(t)()
<ide> ctx := context.Background()
<ide><path>integration/container/rename_test.go
<ide> func TestRenameContainerWithSameName(t *testing.T) {
<ide> // of the linked container should be updated so that the other
<ide> // container could still reference to the container that is renamed.
<ide> func TestRenameContainerWithLinkedContainer(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, testEnv.OSType == "windows", "FIXME")
<ide>
<ide> defer setupTest(t)()
<ide><path>integration/network/ipvlan/ipvlan_test.go
<ide> import (
<ide> func TestDockerNetworkIpvlanPersistance(t *testing.T) {
<ide> // verify the driver automatically provisions the 802.1q link (di-dummy0.70)
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, !ipvlanKernelSupport(), "Kernel doesn't support ipvlan")
<ide>
<ide> d := daemon.New(t, daemon.WithExperimental)
<ide> func TestDockerNetworkIpvlanPersistance(t *testing.T) {
<ide>
<ide> func TestDockerNetworkIpvlan(t *testing.T) {
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, !ipvlanKernelSupport(), "Kernel doesn't support ipvlan")
<ide>
<ide> for _, tc := range []struct {
<ide><path>integration/network/macvlan/macvlan_test.go
<ide> import (
<ide>
<ide> func TestDockerNetworkMacvlanPersistance(t *testing.T) {
<ide> // verify the driver automatically provisions the 802.1q link (dm-dummy0.60)
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, !macvlanKernelSupport(), "Kernel doesn't support macvlan")
<ide>
<ide> d := daemon.New(t)
<ide> func TestDockerNetworkMacvlanPersistance(t *testing.T) {
<ide> }
<ide>
<ide> func TestDockerNetworkMacvlan(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, !macvlanKernelSupport(), "Kernel doesn't support macvlan")
<ide>
<ide> for _, tc := range []struct {
<ide><path>integration/network/service_test.go
<ide> func delInterface(t *testing.T, ifName string) {
<ide>
<ide> func TestDaemonRestartWithLiveRestore(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, versions.LessThan(testEnv.DaemonAPIVersion(), "1.38"), "skip test from new feature")
<ide> d := daemon.New(t)
<ide> defer d.Stop(t)
<ide> func TestDaemonRestartWithLiveRestore(t *testing.T) {
<ide> func TestDaemonDefaultNetworkPools(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows")
<ide> // Remove docker0 bridge and the start daemon defining the predefined address pools
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, versions.LessThan(testEnv.DaemonAPIVersion(), "1.38"), "skip test from new feature")
<ide> defaultNetworkBridge := "docker0"
<ide> delInterface(t, defaultNetworkBridge)
<ide> func TestDaemonDefaultNetworkPools(t *testing.T) {
<ide>
<ide> func TestDaemonRestartWithExistingNetwork(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, versions.LessThan(testEnv.DaemonAPIVersion(), "1.38"), "skip test from new feature")
<ide> defaultNetworkBridge := "docker0"
<ide> d := daemon.New(t)
<ide> func TestDaemonRestartWithExistingNetwork(t *testing.T) {
<ide>
<ide> func TestDaemonRestartWithExistingNetworkWithDefaultPoolRange(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, versions.LessThan(testEnv.DaemonAPIVersion(), "1.38"), "skip test from new feature")
<ide> defaultNetworkBridge := "docker0"
<ide> d := daemon.New(t)
<ide> func TestDaemonRestartWithExistingNetworkWithDefaultPoolRange(t *testing.T) {
<ide>
<ide> func TestDaemonWithBipAndDefaultNetworkPool(t *testing.T) {
<ide> skip.If(t, testEnv.OSType == "windows")
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, versions.LessThan(testEnv.DaemonAPIVersion(), "1.38"), "skip test from new feature")
<ide> defaultNetworkBridge := "docker0"
<ide> d := daemon.New(t)
<ide><path>integration/plugin/logging/logging_linux_test.go
<ide> import (
<ide> )
<ide>
<ide> func TestContinueAfterPluginCrash(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon(), "test requires daemon on the same host")
<add> skip.If(t, testEnv.IsRemoteDaemon, "test requires daemon on the same host")
<ide> t.Parallel()
<ide>
<ide> d := daemon.New(t)
<ide><path>integration/service/inspect_test.go
<ide> import (
<ide> )
<ide>
<ide> func TestInspect(t *testing.T) {
<del> skip.If(t, testEnv.IsRemoteDaemon())
<add> skip.If(t, testEnv.IsRemoteDaemon)
<ide> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<ide> defer setupTest(t)()
<ide> d := swarm.NewSwarm(t, testEnv) | 12 |
Python | Python | use _validate_axis inside _ureduce | e3ed705e5d91b584e9191a20f3a4780d354271ff | <ide><path>numpy/lib/function_base.py
<ide> from numpy.core.numeric import (
<ide> ones, zeros, arange, concatenate, array, asarray, asanyarray, empty,
<ide> empty_like, ndarray, around, floor, ceil, take, dot, where, intp,
<del> integer, isscalar, absolute
<add> integer, isscalar, absolute, AxisError
<ide> )
<ide> from numpy.core.umath import (
<ide> pi, multiply, add, arctan2, frompyfunc, cos, less_equal, sqrt, sin,
<ide> def _ureduce(a, func, **kwargs):
<ide> if axis is not None:
<ide> keepdim = list(a.shape)
<ide> nd = a.ndim
<del> try:
<del> axis = operator.index(axis)
<del> if axis >= nd or axis < -nd:
<del> raise IndexError("axis %d out of bounds (%d)" % (axis, a.ndim))
<del> keepdim[axis] = 1
<del> except TypeError:
<del> sax = set()
<del> for x in axis:
<del> if x >= nd or x < -nd:
<del> raise IndexError("axis %d out of bounds (%d)" % (x, nd))
<del> if x in sax:
<del> raise ValueError("duplicate value in axis")
<del> sax.add(x % nd)
<del> keepdim[x] = 1
<del> keep = sax.symmetric_difference(frozenset(range(nd)))
<add> axis = _nx._validate_axis(axis, nd)
<add>
<add> for ax in axis:
<add> keepdim[ax] = 1
<add>
<add> if len(axis) == 1:
<add> kwargs['axis'] = axis[0]
<add> else:
<add> keep = set(range(nd)) - set(axis)
<ide> nkeep = len(keep)
<ide> # swap axis that should not be reduced to front
<ide> for i, s in enumerate(sorted(keep)):
<ide> def delete(arr, obj, axis=None):
<ide> if ndim != 1:
<ide> arr = arr.ravel()
<ide> ndim = arr.ndim
<del> axis = ndim - 1
<add> axis = -1
<add>
<ide> if ndim == 0:
<ide> # 2013-09-24, 1.9
<ide> warnings.warn(
<ide> def delete(arr, obj, axis=None):
<ide> else:
<ide> return arr.copy(order=arrorder)
<ide>
<add> axis = normalize_axis_index(axis, ndim)
<add>
<ide> slobj = [slice(None)]*ndim
<ide> N = arr.shape[axis]
<ide> newshape = list(arr.shape)
<ide><path>numpy/lib/tests/test_function_base.py
<ide> def test_extended_axis(self):
<ide>
<ide> def test_extended_axis_invalid(self):
<ide> d = np.ones((3, 5, 7, 11))
<del> assert_raises(IndexError, np.percentile, d, axis=-5, q=25)
<del> assert_raises(IndexError, np.percentile, d, axis=(0, -5), q=25)
<del> assert_raises(IndexError, np.percentile, d, axis=4, q=25)
<del> assert_raises(IndexError, np.percentile, d, axis=(0, 4), q=25)
<add> assert_raises(np.AxisError, np.percentile, d, axis=-5, q=25)
<add> assert_raises(np.AxisError, np.percentile, d, axis=(0, -5), q=25)
<add> assert_raises(np.AxisError, np.percentile, d, axis=4, q=25)
<add> assert_raises(np.AxisError, np.percentile, d, axis=(0, 4), q=25)
<add> # each of these refers to the same axis twice
<ide> assert_raises(ValueError, np.percentile, d, axis=(1, 1), q=25)
<add> assert_raises(ValueError, np.percentile, d, axis=(-1, -1), q=25)
<add> assert_raises(ValueError, np.percentile, d, axis=(3, -1), q=25)
<ide>
<ide> def test_keepdims(self):
<ide> d = np.ones((3, 5, 7, 11))
<ide> def test_extended_axis(self):
<ide>
<ide> def test_extended_axis_invalid(self):
<ide> d = np.ones((3, 5, 7, 11))
<del> assert_raises(IndexError, np.median, d, axis=-5)
<del> assert_raises(IndexError, np.median, d, axis=(0, -5))
<del> assert_raises(IndexError, np.median, d, axis=4)
<del> assert_raises(IndexError, np.median, d, axis=(0, 4))
<add> assert_raises(np.AxisError, np.median, d, axis=-5)
<add> assert_raises(np.AxisError, np.median, d, axis=(0, -5))
<add> assert_raises(np.AxisError, np.median, d, axis=4)
<add> assert_raises(np.AxisError, np.median, d, axis=(0, 4))
<ide> assert_raises(ValueError, np.median, d, axis=(1, 1))
<ide>
<ide> def test_keepdims(self):
<ide><path>numpy/lib/tests/test_nanfunctions.py
<ide> def test_scalar(self):
<ide>
<ide> def test_extended_axis_invalid(self):
<ide> d = np.ones((3, 5, 7, 11))
<del> assert_raises(IndexError, np.nanmedian, d, axis=-5)
<del> assert_raises(IndexError, np.nanmedian, d, axis=(0, -5))
<del> assert_raises(IndexError, np.nanmedian, d, axis=4)
<del> assert_raises(IndexError, np.nanmedian, d, axis=(0, 4))
<add> assert_raises(np.AxisError, np.nanmedian, d, axis=-5)
<add> assert_raises(np.AxisError, np.nanmedian, d, axis=(0, -5))
<add> assert_raises(np.AxisError, np.nanmedian, d, axis=4)
<add> assert_raises(np.AxisError, np.nanmedian, d, axis=(0, 4))
<ide> assert_raises(ValueError, np.nanmedian, d, axis=(1, 1))
<ide>
<ide> def test_float_special(self):
<ide> def test_scalar(self):
<ide>
<ide> def test_extended_axis_invalid(self):
<ide> d = np.ones((3, 5, 7, 11))
<del> assert_raises(IndexError, np.nanpercentile, d, q=5, axis=-5)
<del> assert_raises(IndexError, np.nanpercentile, d, q=5, axis=(0, -5))
<del> assert_raises(IndexError, np.nanpercentile, d, q=5, axis=4)
<del> assert_raises(IndexError, np.nanpercentile, d, q=5, axis=(0, 4))
<add> assert_raises(np.AxisError, np.nanpercentile, d, q=5, axis=-5)
<add> assert_raises(np.AxisError, np.nanpercentile, d, q=5, axis=(0, -5))
<add> assert_raises(np.AxisError, np.nanpercentile, d, q=5, axis=4)
<add> assert_raises(np.AxisError, np.nanpercentile, d, q=5, axis=(0, 4))
<ide> assert_raises(ValueError, np.nanpercentile, d, q=5, axis=(1, 1))
<ide>
<ide> def test_multiple_percentiles(self):
<ide><path>numpy/ma/tests/test_extras.py
<ide> def test_axis_argument_errors(self):
<ide> for axis, over in args:
<ide> try:
<ide> np.ma.median(x, axis=axis, overwrite_input=over)
<del> except IndexError:
<add> except np.AxisError:
<ide> pass
<ide> else:
<ide> raise AssertionError(msg % (mask, ndmin, axis, over)) | 4 |
Text | Text | fix links to original converged repo | 51cf983eb3cf47ed31100480d88c2553b24c7dca | <ide><path>doc/tsc-meetings/2015-05-27.md
<ide>
<ide> Extracted from **tsc-agenda** labelled issues and pull requests prior to meeting.
<ide>
<del>### nodejs/node
<add>### nodejs/node-convergence-archive
<ide>
<del>* \[Converge\] timers: Avoid linear scan in `_unrefActive`. [#23](https://github.com/nodejs/node/issues/23)
<del>* \[Converge\] child_process argument type checking [#22](https://github.com/nodejs/node/issues/22)
<del>* \[Converge\] SSLv2/3 disable/enable related commits [#20](https://github.com/nodejs/node/issues/20)
<del>* doc: Add new working groups [#15](https://github.com/nodejs/node/pull/15)
<add>* \[Converge\] timers: Avoid linear scan in `_unrefActive`. [#23](https://github.com/nodejs/node-convergence-archive/issues/23)
<add>* \[Converge\] child_process argument type checking [#22](https://github.com/nodejs/node-convergence-archive/issues/22)
<add>* \[Converge\] SSLv2/3 disable/enable related commits [#20](https://github.com/nodejs/node-convergence-archive/issues/20)
<add>* doc: Add new working groups [#15](https://github.com/nodejs/node-convergence-archive/pull/15)
<ide>
<ide> ### nodejs/io.js
<ide>
<ide> Extracted from **tsc-agenda** labelled issues and pull requests prior to meeting
<ide>
<ide> ## Minutes
<ide>
<del>### \[Converge\] timers: Avoid linear scan in `_unrefActive`. [#23](https://github.com/nodejs/node/issues/23)
<add>### \[Converge\] timers: Avoid linear scan in `_unrefActive`. [#23](https://github.com/nodejs/node-convergence-archive/issues/23)
<ide>
<ide> * James: conflicting approaches in both repos
<ide> * Ben: both are terrible under different workloads - do away with the code and start again
<ide> * Jeremiah: might have a go at it, working off an existing heap impl by Ben (ACTION)
<ide> * Bert: some problems with http - discussion happened about the implementation
<ide> * Chris: would be good to have Julien’s input since he was active on the joyent/node impl
<ide>
<del>### \[Converge\] child_process argument type checking [#22](https://github.com/nodejs/node/issues/22)
<add>### \[Converge\] child_process argument type checking [#22](https://github.com/nodejs/node-convergence-archive/issues/22)
<ide>
<ide> * James: arg checking merged in 0.10 after the fork
<ide> * Discussion about why this wasn’t merged to io.js
<ide> * Defer back to GitHub discussion after no reason for not merging could be found on the call
<ide>
<del>### \[Converge\] SSLv2/3 disable/enable related commits [#20](https://github.com/nodejs/node/issues/20)
<add>### \[Converge\] SSLv2/3 disable/enable related commits [#20](https://github.com/nodejs/node-convergence-archive/issues/20)
<ide>
<ide> * James: SSLv2/3 removed in io.js, merging these commits would involve reverting
<ide> * Jeremiah proposed 0.12 being the LTS for SSLv2/3 support
<ide> * Rod: are we happy killing this off?
<ide> * Michael: we don’t know how extensively it’s being used?
<ide> * James: pending research into that question we’ll leave this alone, come back if there’s a compelling reason to revert
<ide>
<del>### doc: Add new working groups [#15](https://github.com/nodejs/node/pull/15)
<add>### doc: Add new working groups [#15](https://github.com/nodejs/node-convergence-archive/pull/15)
<ide>
<ide> * Michael: Benchmarking and Post Mortem Debugging working groups are ready and have started, i18n group needs a bit more work to get off the ground
<ide> * Group didn’t see any reason not to go forward with these groups, they have repos and can be in an “incubating” state for now
<ide><path>doc/tsc-meetings/2015-06-17.md
<ide> Extracted from **tsc-agenda** labelled issues and pull requests prior to meeting
<ide>
<ide> ### nodejs/node
<ide>
<del>* Create a security team [#48](https://github.com/nodejs/node/issues/48)
<add>* Create a security team [#48](https://github.com/nodejs/node-convergence-archive/issues/48)
<ide>
<ide> ### nodejs/io.js
<ide>
<ide> Extracted from **tsc-agenda** labelled issues and pull requests prior to meeting
<ide> * Steven: getting back on board
<ide> * Bert: libuv work for multi-worker on Windows (https://github.com/libuv/libuv/pull/396), found a potential libuv/Windows contributor at NodeConf, NF board meeting
<ide> * Alexis: Working on build & CI convergence with Rod, CI can now automatically decide what options to use for different node versions, and porting node-accept-pull-request CI job.
<del>* Julien: time off, launching nodejs.org updates for NF launch, working on changes for 0.10/0.12 releases, onboarded two new collaborators for joyent/node - https://github.com/nodejs/node/wiki/Breaking-changes-between-v0.12-and-next-LTS-release
<add>* Julien: time off, launching nodejs.org updates for NF launch, working on changes for 0.10/0.12 releases, onboarded two new collaborators for joyent/node - https://github.com/nodejs/LTS/wiki/Breaking-changes-between-v0.12-and-next-LTS-release
<ide> * Shigeki: Working on upgrading OpenSSL, the upgrade process is becoming much simpler, landed the CINNIC whitelist
<ide> * Jeremiah: NodeConf - brought back good feedback, helping spin up the Diversity WG, integrating timers heap impl, struggling with bugs
<ide> * Brian: not much, triage & PR review | 2 |
Text | Text | add link to the 30 free videos | f3fa414a8682e5ac1a5b683e10772a98e48ce1d8 | <ide><path>README.md
<ide> It is tiny (2kB) and has no dependencies.
<ide> [](https://discord.gg/0ZcbPKXt5bZ6au5t)
<ide> [](https://webchat.freenode.net/)
<ide>
<add>>**New! Learn Redux from its creator:
<add>>[Getting Started with Redux](https://egghead.io/series/getting-started-with-redux) (30 free videos)**
<ide>
<ide> ### Testimonials
<ide>
<ide> If you’re coming from Flux, there is a single important difference you need to
<ide>
<ide> This architecture might seem like an overkill for a counter app, but the beauty of this pattern is how well it scales to large and complex apps. It also enables very powerful developer tools, because it is possible to trace every mutation to the action that caused it. You can record user sessions and reproduce them just by replaying every action.
<ide>
<add>### 30 Free Videos
<add>
<add>[Getting Started with Redux](https://egghead.io/series/getting-started-with-redux) is a video course consisting of 30 videos narrated by Dan Abramov, author of Redux. It is designed to complement the “Basics” part of the docs while bringing additional insights about immutability, testing, Redux best practices, and using Redux with React. **This course is free and will always be.**
<add>
<add>>[“Great course on egghead.io by @dan_abramov - instead of just showing you how to use #redux, it also shows how and why redux was built!”](https://twitter.com/sandrinodm/status/670548531422326785)
<add>>Sandrino Di Mattia
<add>
<add>>[“Plowing through @dan_abramov 'Getting Started with Redux' - its amazing how much simpler concepts get with video.”](https://twitter.com/chrisdhanaraj/status/670328025553219584)
<add>>Chris Dhanaraj
<add>
<add>>[“This video series on Redux by @dan_abramov on @eggheadio is spectacular!”](https://twitter.com/eddiezane/status/670333133242408960)
<add>>Eddie Zaneski
<add>
<add>>[“Come for the name hype. Stay for the rock solid fundamentals. (Thanks, and great job @dan_abramov and @eggheadio!)”](https://twitter.com/danott/status/669909126554607617)
<add>>Dan
<add>
<add>>[“This series of videos on Redux by @dan_abramov is repeatedly blowing my mind - gunna do some serious refactoring”](https://twitter.com/gelatindesign/status/669658358643892224)
<add>>Laurence Roberts
<add>
<add>So, what are you waiting for?
<add>
<add>#### [Watch the 30 Free Videos!](https://egghead.io/series/getting-started-with-redux)
<add>
<add>If you enjoyed my course, consider supporting Egghead by [buying a subscription](https://egghead.io/pricing). Subscribers have access to the source code for the example in every one of my videos, as well as to tons of advanced lessons on other topics, including JavaScript in depth, React, Angular, and more. Many [Egghead instructors](https://egghead.io/instructors) are also open source library authors, so buying a subscription is a nice way to thank them for the work that they’ve done.
<add>
<ide> ### Documentation
<ide>
<ide> * [Introduction](http://rackt.github.io/redux/docs/introduction/index.html) | 1 |
Text | Text | add new details and links to cjm | 274ef504c3f79771efd9d924c378f906b52292bc | <ide><path>guide/english/product-design/customer-journey-maps/index.md
<ide> title: Customer Journey Maps
<ide> ---
<ide> ## Customer Journey Maps
<ide>
<del>This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/product-design/customer-journey-maps/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<add>A customer journey map is a story designed to provide insights into the customer’s journey. It tells the story of the customer’s experience: from initial contact, through the process of engagement, and into a long-term relationship.
<ide>
<del><a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<del>
<del><!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
<add>
<ide>
<ide> #### More Information:
<del><!-- Please add any articles you think might be helpful to read before writing the article -->
<add>- [How to Create an Effective Customer Journey Map](https://blog.hubspot.com/service/customer-journey-map)
<add>- [Customer Journey Mapping: Everything You Need to Know](https://www.sailthru.com/marketing-blog/written-customer-journey-mapping-need-to-know/)
<ide> - [When and How to Create Customer Journey Maps by The Nielsen Norman Group](https://www.nngroup.com/articles/customer-journey-mapping/) | 1 |
Ruby | Ruby | return runtime_dependencies in to_hash | 28ad8a06cc24d2d71400e9c860a52ef046022cf1 | <ide><path>Library/Homebrew/formula.rb
<ide> def to_hash
<ide> "used_options" => tab.used_options.as_flags,
<ide> "built_as_bottle" => tab.built_as_bottle,
<ide> "poured_from_bottle" => tab.poured_from_bottle,
<add> "runtime_dependencies" => tab.runtime_dependencies,
<ide> }
<ide> end
<ide> | 1 |
Go | Go | fix second cachekey for schema1 | 0037da02308b13659ef65f9424c672feb44711f5 | <ide><path>builder/builder-next/adapters/containerimage/pull.go
<ide> func (p *puller) CacheKey(ctx context.Context, index int) (string, bool, error)
<ide> }
<ide>
<ide> if p.config != nil {
<del> return cacheKeyFromConfig(p.config).String(), true, nil
<add> k := cacheKeyFromConfig(p.config).String()
<add> if k == "" {
<add> return digest.FromBytes(p.config).String(), true, nil
<add> }
<add> return k, true, nil
<ide> }
<ide>
<ide> if err := p.resolve(ctx); err != nil {
<ide> func (p *puller) CacheKey(ctx context.Context, index int) (string, bool, error)
<ide> return dgst.String(), false, nil
<ide> }
<ide>
<del> return cacheKeyFromConfig(p.config).String(), true, nil
<add> k := cacheKeyFromConfig(p.config).String()
<add> if k == "" {
<add> dgst, err := p.mainManifestKey(p.desc.Digest, p.platform)
<add> if err != nil {
<add> return "", false, err
<add> }
<add> return dgst.String(), true, nil
<add> }
<add>
<add> return k, true, nil
<ide> }
<ide>
<ide> func (p *puller) Snapshot(ctx context.Context) (cache.ImmutableRef, error) {
<ide> func cacheKeyFromConfig(dt []byte) digest.Digest {
<ide> if err != nil {
<ide> return digest.FromBytes(dt)
<ide> }
<del> if img.RootFS.Type != "layers" {
<del> return digest.FromBytes(dt)
<add> if img.RootFS.Type != "layers" || len(img.RootFS.DiffIDs) == 0 {
<add> return ""
<ide> }
<ide> return identity.ChainID(img.RootFS.DiffIDs)
<ide> } | 1 |
Python | Python | update package import for logging connection | 0f9f53ea3f51a54b0d85d2a378ac2e3ff9f86a3d | <ide><path>libcloud/__init__.py
<ide> def enable_debug(fo):
<ide> :param fo: Where to append debugging information
<ide> :type fo: File like object, only write operations are used.
<ide> """
<del> from libcloud.common.base import (Connection,
<del> LoggingConnection)
<del> LoggingConnection.log = fo
<add> from libcloud.common.base import Connection
<add> from libcloud.utils.loggingconnection import LoggingConnection
<add>
<ide> LoggingConnection.log = fo
<ide> Connection.conn_class = LoggingConnection
<ide> | 1 |
Javascript | Javascript | remove old unused code from pdfview.close() | 4737e1ad8ddb3995d673ae8e3f3f67d5d3020c94 | <ide><path>web/viewer.js
<ide> var PDFView = {
<ide> thumbsView.removeChild(thumbsView.lastChild);
<ide> }
<ide>
<del> if ('_loadingInterval' in thumbsView) {
<del> clearInterval(thumbsView._loadingInterval);
<del> }
<del>
<ide> var container = document.getElementById('viewer');
<ide> while (container.hasChildNodes()) {
<ide> container.removeChild(container.lastChild); | 1 |
Javascript | Javascript | simplify path handling | 1c2546050f91cc93dcd6960a4d317e3176b5b8aa | <ide><path>examples/spline/spline.js
<ide> var line = d3.svg.line()
<ide> .x(function(d) { return d[0]; })
<ide> .y(function(d) { return d[1]; });
<ide>
<del>function update() {
<del> var path = vis.selectAll("path")
<del> .data(points.length ? [points] : []);
<del>
<del> path.enter().append("svg:path")
<del> .attr("class", "line")
<del> .attr("d", line);
<add>var path = vis.append("svg:path")
<add> .data([points])
<add> .attr("class", "line");
<ide>
<add>function update() {
<ide> path.attr("d", line);
<ide>
<del> path.exit().remove();
<del>
<ide> var circle = vis.selectAll("circle")
<ide> .data(points, function(d) { return d; });
<ide> | 1 |
Python | Python | prepare 1.0.5 pypi release | 4d404d1a5472bde5bef703bf8976022505d04ce9 | <ide><path>keras/__init__.py
<ide> from . import optimizers
<ide> from . import regularizers
<ide>
<del>__version__ = '1.0.4'
<add>__version__ = '1.0.5'
<ide><path>setup.py
<ide>
<ide>
<ide> setup(name='Keras',
<del> version='1.0.4',
<add> version='1.0.5',
<ide> description='Deep Learning for Python',
<ide> author='Francois Chollet',
<ide> author_email='francois.chollet@gmail.com',
<ide> url='https://github.com/fchollet/keras',
<del> download_url='https://github.com/fchollet/keras/tarball/1.0.4',
<add> download_url='https://github.com/fchollet/keras/tarball/1.0.5',
<ide> license='MIT',
<ide> install_requires=['theano', 'pyyaml', 'six'],
<ide> extras_require={ | 2 |
Python | Python | dump pod as yaml in logs for kubernetespodoperator | 719ae2bf6227894c3e926f717eb4dc669549d615 | <ide><path>airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py
<ide> import re
<ide> from typing import Dict, List, Optional, Tuple
<ide>
<add>import yaml
<ide> from kubernetes.client import models as k8s
<ide>
<ide> from airflow.exceptions import AirflowException
<ide> def create_new_pod_for_operator(self, labels, launcher) -> Tuple[State, k8s.V1Po
<ide> )
<ide>
<ide> self.pod = pod
<del> self.log.info("Starting pod %s", pod)
<add> self.log.debug("Starting pod:\n%s", yaml.safe_dump(pod.to_dict()))
<ide> try:
<ide> launcher.start_pod(
<ide> pod,
<ide><path>kubernetes_tests/test_kubernetes_pod_operator.py
<ide> # specific language governing permissions and limitations
<ide> # under the License.
<ide> import json
<add>import logging
<ide> import os
<ide> import shutil
<add>import textwrap
<ide> import unittest
<ide> from unittest import mock
<ide> from unittest.mock import ANY
<ide> def test_pod_template_file(
<ide> )
<ide> monitor_mock.return_value = (State.SUCCESS, None)
<ide> context = create_context(k)
<del> k.execute(context)
<add> with self.assertLogs(k.log, level=logging.DEBUG) as cm:
<add> k.execute(context)
<add> expected_line = textwrap.dedent("""\
<add> DEBUG:airflow.task.operators:Starting pod:
<add> api_version: v1
<add> kind: Pod
<add> metadata:
<add> annotations: null
<add> cluster_name: null
<add> creation_timestamp: null
<add> deletion_grace_period_seconds: null\
<add> """).strip()
<add> self.assertTrue(any(line.startswith(expected_line) for line in cm.output))
<add>
<ide> actual_pod = self.api_client.sanitize_for_serialization(k.pod)
<ide> self.assertEqual({
<ide> 'apiVersion': 'v1', | 2 |
Ruby | Ruby | convert checksum test to spec | 3bc0c6bd1a74fb5a77fdaca45543eca752eee64f | <ide><path>Library/Homebrew/test/checksum_spec.rb
<add>require "checksum"
<add>
<add>describe Checksum do
<add> describe "#empty?" do
<add> subject { described_class.new(:sha256, "") }
<add> it { is_expected.to be_empty }
<add> end
<add>
<add> describe "#==" do
<add> subject { described_class.new(:sha256, TEST_SHA256) }
<add> let(:other) { described_class.new(:sha256, TEST_SHA256) }
<add> let(:other_reversed) { described_class.new(:sha256, TEST_SHA256.reverse) }
<add> let(:other_sha1) { described_class.new(:sha1, TEST_SHA1) }
<add>
<add> it { is_expected.to eq(other) }
<add> it { is_expected.not_to eq(other_reversed) }
<add> it { is_expected.not_to eq(other_sha1) }
<add> end
<add>end
<ide><path>Library/Homebrew/test/checksum_test.rb
<del>require "testing_env"
<del>require "checksum"
<del>
<del>class ChecksumTests < Homebrew::TestCase
<del> def test_empty?
<del> assert_empty Checksum.new(:sha256, "")
<del> end
<del>
<del> def test_equality
<del> a = Checksum.new(:sha256, TEST_SHA256)
<del> b = Checksum.new(:sha256, TEST_SHA256)
<del> assert_equal a, b
<del>
<del> a = Checksum.new(:sha256, TEST_SHA256)
<del> b = Checksum.new(:sha256, TEST_SHA256.reverse)
<del> refute_equal a, b
<del>
<del> a = Checksum.new(:sha1, TEST_SHA1)
<del> b = Checksum.new(:sha256, TEST_SHA256)
<del> refute_equal a, b
<del> end
<del>end | 2 |
Javascript | Javascript | improve cache serialization by 30% | 3f378d9ff489b5534862cdebe4ef25b76294265f | <ide><path>lib/serialization/ObjectMiddleware.js
<ide> class ObjectMiddleware extends SerializerMiddleware {
<ide> };
<ide> this.extendContext(ctx);
<ide> const process = item => {
<del> // check if we can emit a reference
<del> const ref = referenceable.get(item);
<del>
<del> if (ref !== undefined) {
<del> result.push(ESCAPE, ref - currentPos);
<del>
<del> return;
<del> }
<del>
<ide> if (Buffer.isBuffer(item)) {
<add> // check if we can emit a reference
<add> const ref = referenceable.get(item);
<add> if (ref !== undefined) {
<add> result.push(ESCAPE, ref - currentPos);
<add> return;
<add> }
<ide> const alreadyUsedBuffer = dedupeBuffer(item);
<ide> if (alreadyUsedBuffer !== item) {
<ide> const ref = referenceable.get(alreadyUsedBuffer);
<ide> class ObjectMiddleware extends SerializerMiddleware {
<ide> addReferenceable(item);
<ide>
<ide> result.push(item);
<del> } else if (typeof item === "object" && item !== null) {
<add> } else if (item === ESCAPE) {
<add> result.push(ESCAPE, ESCAPE_ESCAPE_VALUE);
<add> } else if (
<add> typeof item === "object"
<add> // We don't have to check for null as ESCAPE is null and this has been checked before
<add> ) {
<add> // check if we can emit a reference
<add> const ref = referenceable.get(item);
<add> if (ref !== undefined) {
<add> result.push(ESCAPE, ref - currentPos);
<add> return;
<add> }
<add>
<ide> if (cycleStack.has(item)) {
<ide> throw new Error(`Circular references can't be serialized`);
<ide> }
<ide> class ObjectMiddleware extends SerializerMiddleware {
<ide> } else if (typeof item === "string") {
<ide> if (item.length > 1) {
<ide> // short strings are shorter when not emitting a reference (this saves 1 byte per empty string)
<add> // check if we can emit a reference
<add> const ref = referenceable.get(item);
<add> if (ref !== undefined) {
<add> result.push(ESCAPE, ref - currentPos);
<add> return;
<add> }
<ide> addReferenceable(item);
<ide> }
<ide>
<ide> class ObjectMiddleware extends SerializerMiddleware {
<ide> }
<ide>
<ide> result.push(item);
<del> } else if (item === ESCAPE) {
<del> result.push(ESCAPE, ESCAPE_ESCAPE_VALUE);
<ide> } else if (typeof item === "function") {
<ide> if (!SerializerMiddleware.isLazy(item))
<ide> throw new Error("Unexpected function " + item); | 1 |
Python | Python | add note for the choice of num_levels - 2 | 800a4c00fbb5bd1827fc22978e4bc944960c14f6 | <ide><path>research/object_detection/meta_architectures/faster_rcnn_meta_arch.py
<ide> def _compute_second_stage_input_feature_maps(self, features_to_crop,
<ide> box_levels = None
<ide> if num_levels != 1:
<ide> # If there are mutiple levels to select, get the box levels
<add> # unit_scale_index: num_levels-2 is chosen based on section 4.2 of
<add> # https://arxiv.org/pdf/1612.03144.pdf and works best for Resnet based
<add> # feature extractor.
<ide> box_levels = ops.fpn_feature_levels(
<ide> num_levels, num_levels - 2,
<ide> tf.sqrt(self._resize_shape[1] * self._resize_shape[2] * 1.0) / 224.0, | 1 |
Text | Text | add help on fixing ipv6 test failures | 6b4e413526c4305cd6c604072b8d3066519a549a | <ide><path>BUILDING.md
<ide> $ ./node ./test/parallel/test-stream2-transform.js
<ide> Remember to recompile with `make -j4` in between test runs if you change code in
<ide> the `lib` or `src` directories.
<ide>
<add>The tests attempt to detect support for IPv6 and exclude IPv6 tests if
<add>appropriate. If your main interface has IPv6 addresses, then your
<add>loopback interface must also have '::1' enabled. For some default installations
<add>on Ubuntu that does not seem to be the case. To enable '::1' on the
<add>loopback interface on Ubuntu:
<add>
<add>```bash
<add>sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
<add>```
<add>
<ide> #### Running Coverage
<ide>
<ide> It's good practice to ensure any code you add or change is covered by tests. | 1 |
Text | Text | fix mtu option in documentation | e74a937b00af567b655c93224cc6a514f54e2b38 | <ide><path>docs/reference/commandline/network_create.md
<ide> equivalent docker daemon flags used for docker0 bridge:
<ide> | `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading |
<ide> | `com.docker.network.bridge.enable_icc` | `--icc` | Enable or Disable Inter Container Connectivity |
<ide> | `com.docker.network.bridge.host_binding_ipv4` | `--ip` | Default IP when binding container ports |
<del>| `com.docker.network.mtu` | `--mtu` | Set the containers network MTU |
<add>| `com.docker.network.driver.mtu` | `--mtu` | Set the containers network MTU |
<ide>
<ide> The following arguments can be passed to `docker network create` for any
<ide> network driver, again with their approximate equivalents to `docker daemon`.
<ide><path>docs/userguide/networking/work-with-networks.md
<ide> equivalent docker daemon flags used for docker0 bridge:
<ide> | `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading |
<ide> | `com.docker.network.bridge.enable_icc` | `--icc` | Enable or Disable Inter Container Connectivity |
<ide> | `com.docker.network.bridge.host_binding_ipv4` | `--ip` | Default IP when binding container ports |
<del>| `com.docker.network.mtu` | `--mtu` | Set the containers network MTU |
<add>| `com.docker.network.driver.mtu` | `--mtu` | Set the containers network MTU |
<ide>
<ide> The following arguments can be passed to `docker network create` for any network driver.
<ide> | 2 |
Python | Python | add s3_rgw_outscale provider | 2a450e4072c0468440877ec3decf3b21eaf369fa | <ide><path>libcloud/storage/drivers/s3.py
<ide> S3_AP_NORTHEAST_HOST = S3_AP_NORTHEAST1_HOST
<ide> S3_SA_EAST_HOST = 's3-sa-east-1.amazonaws.com'
<ide>
<add>S3_RGW_OUTSCALE_HOSTS_BY_REGION =\
<add> {'eu-west-1': 'osu.eu-west-1.outscale.com',
<add> 'eu-west-2': 'osu.eu-west-2.outscale.com',
<add> 'us-west-1': 'osu.us-west-1.outscale.com',
<add> 'us-east-2': 'osu.us-east-2.outscale.com',
<add> 'cn-southeast-1': 'osu.cn-southeast-1.outscale.hk'}
<add>
<add>S3_RGW_OUTSCALE_DEFAULT_REGION = 'eu-west-2'
<add>
<ide> API_VERSION = '2006-03-01'
<ide> NAMESPACE = 'http://s3.amazonaws.com/doc/%s/' % (API_VERSION)
<ide>
<ide> class S3SAEastStorageDriver(S3StorageDriver):
<ide> name = 'Amazon S3 (sa-east-1)'
<ide> connectionCls = S3SAEastConnection
<ide> ex_location_name = 'sa-east-1'
<add>
<add>
<add>class S3RGWOutscaleConnection(S3Connection):
<add> pass
<add>
<add>
<add>class S3RGWOutscaleStorageDriver(S3StorageDriver):
<add>
<add> def __init__(self, key, secret=None, secure=True, host=None, port=None,
<add> api_version=None, region=S3_RGW_OUTSCALE_DEFAULT_REGION,
<add> **kwargs):
<add> if region not in S3_RGW_OUTSCALE_HOSTS_BY_REGION:
<add> raise LibcloudError('Unknown region (%s)' % (region), driver=self)
<add> self.name = 'OUTSCALE Ceph RGW S3 (%s)' % (region)
<add> self.ex_location_name = region
<add> self.region_name = region
<add> self.connectionCls = S3RGWOutscaleConnection
<add> self.connectionCls.host = S3_RGW_OUTSCALE_HOSTS_BY_REGION[region]
<add> super(S3RGWOutscaleStorageDriver, self).__init__(key, secret,
<add> secure, host, port,
<add> api_version, region,
<add> **kwargs)
<ide><path>libcloud/storage/providers.py
<ide> ('libcloud.storage.drivers.s3', 'S3APNE2StorageDriver'),
<ide> Provider.S3_SA_EAST:
<ide> ('libcloud.storage.drivers.s3', 'S3SAEastStorageDriver'),
<add> Provider.S3_RGW_OUTSCALE:
<add> ('libcloud.storage.drivers.s3', 'S3RGWOutscaleStorageDriver'),
<ide> Provider.NINEFOLD:
<ide> ('libcloud.storage.drivers.ninefold', 'NinefoldStorageDriver'),
<ide> Provider.GOOGLE_STORAGE:
<ide><path>libcloud/storage/types.py
<ide> class Provider(object):
<ide> :cvar S3_EU_WEST: Amazon S3 EU West (Ireland)
<ide> :cvar S3_AP_SOUTHEAST_HOST: Amazon S3 Asia South East (Singapore)
<ide> :cvar S3_AP_NORTHEAST_HOST: Amazon S3 Asia South East (Tokyo)
<add> :cvar S3_RGW_OUTSCALE: OUTSCALE RGW S3
<ide> :cvar NINEFOLD: Ninefold
<ide> :cvar GOOGLE_STORAGE Google Storage
<ide> :cvar S3_US_WEST_OREGON: Amazon S3 US West 2 (Oregon)
<ide> class Provider(object):
<ide> S3_AP_NORTHEAST1 = 's3_ap_northeast_1'
<ide> S3_AP_NORTHEAST2 = 's3_ap_northeast_2'
<ide> S3_SA_EAST = 's3_sa_east'
<add> S3_RGW_OUTSCALE = 's3_rgw_outscale'
<ide> NINEFOLD = 'ninefold'
<ide> GOOGLE_STORAGE = 'google_storage'
<ide> S3_US_WEST_OREGON = 's3_us_west_oregon' | 3 |
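The pattern this patch adds — a module-level region→endpoint map plus an explicit error on unknown regions — can be sketched independently of the driver machinery. A minimal stand-in (names and `ValueError` are illustrative; libcloud raises its own `LibcloudError`):

```python
# Sketch of the region -> endpoint lookup used by the Outscale RGW driver
# above. The dict values mirror the patch; the error type is a stand-in.
RGW_HOSTS_BY_REGION = {
    'eu-west-1': 'osu.eu-west-1.outscale.com',
    'eu-west-2': 'osu.eu-west-2.outscale.com',
    'us-west-1': 'osu.us-west-1.outscale.com',
    'us-east-2': 'osu.us-east-2.outscale.com',
    'cn-southeast-1': 'osu.cn-southeast-1.outscale.hk',
}

DEFAULT_REGION = 'eu-west-2'


def endpoint_for(region=DEFAULT_REGION):
    """Return the storage endpoint for a region, failing fast on typos."""
    if region not in RGW_HOSTS_BY_REGION:
        raise ValueError('Unknown region (%s)' % region)
    return RGW_HOSTS_BY_REGION[region]
```

Failing fast in the constructor-equivalent (rather than at request time) is the design choice the patch makes with its `LibcloudError`.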
Javascript | Javascript | match scottish gaelic months - | e19671f15adc57b1b060310e4bace8badc74d9e8 | <ide><path>src/lib/parse/regex.js
<ide> export var matchShortOffset = /Z|[+-]\d\d(?::?\d\d)?/gi; // +00 -00 +00:00 -00:0
<ide> export var matchTimestamp = /[+-]?\d+(\.\d{1,3})?/; // 123456789 123456789.123
<ide>
<ide> // any word (or two) characters or numbers including two/three word month in arabic.
<del>export var matchWord = /[0-9]*['a-z\u00A0-\u05FF\u0700-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]+|[\u0600-\u06FF\/]+(\s*?[\u0600-\u06FF]+){1,2}/i;
<add>// includes scottish gaelic two word and hyphenated months
<add>export var matchWord = /[0-9]*(a[mn]\s?)?['a-z\u00A0-\u05FF\u0700-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF\-]+|[\u0600-\u06FF\/]+(\s*?[\u0600-\u06FF]+){1,2}/i;
<add>
<ide>
<ide> import hasOwnProp from '../utils/has-own-prop';
<ide> import isFunction from '../utils/is-function'; | 1 |
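The widened `matchWord` can be exercised outside moment. A Python transliteration of the character-class change (hedged — the ranges are copied from the patch and Python's `re` stands in for the JS regex engine) shows why the optional `a[mn]\s?` prefix and the hyphen matter for Scottish Gaelic month names:

```python
import re

# Old and new word matchers, transliterated from the patch. Python's re
# accepts \uXXXX escapes inside character classes (3.3+); the trailing
# hyphen in the new class is literal.
OLD_WORD = re.compile(
    r"[0-9]*['a-z\u00A0-\u05FF\u0700-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]+",
    re.IGNORECASE)
NEW_WORD = re.compile(
    r"[0-9]*(?:a[mn]\s?)?"
    r"['a-z\u00A0-\u05FF\u0700-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF-]+",
    re.IGNORECASE)


def first_token(pattern, text):
    """Return the leading token a pattern would consume, or None."""
    match = pattern.match(text)
    return match.group(0) if match else None
```

The old pattern stops at the space or hyphen, so two-word or hyphenated months were only partially consumed.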
PHP | PHP | remove ambiguity in exception message | 352095c0209948c83decffdba63eb723c706e91a | <ide><path>src/Illuminate/View/Factory.php
<ide> public function startPush($section, $content = '')
<ide> public function stopPush()
<ide> {
<ide> if (empty($this->pushStack)) {
<del> throw new InvalidArgumentException('Cannot end a section without first starting one.');
<add> throw new InvalidArgumentException('Cannot end a push without first starting one.');
<ide> }
<ide>
<ide> $last = array_pop($this->pushStack); | 1 |
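The guard whose message is being reworded is a generic pattern — refuse to pop from an empty push stack and name the operation in the error. A compact sketch (hypothetical names, not Laravel's API):

```python
class ViewSections:
    """Tracks nested push sections; stop without a matching start is an error."""

    def __init__(self):
        self._push_stack = []

    def start_push(self, section):
        self._push_stack.append(section)

    def stop_push(self):
        # Mirrors the patch: the message names the push operation, not a
        # generic "section", so the error points at the right directive.
        if not self._push_stack:
            raise ValueError('Cannot end a push without first starting one.')
        return self._push_stack.pop()
```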
Ruby | Ruby | remove unneeded requires | 576fb33ba3c1cf0a0c3e83ffac56c46a83d7a57f | <ide><path>railties/lib/rails/application.rb
<ide> require "active_support/key_generator"
<ide> require "active_support/message_verifier"
<ide> require "active_support/encrypted_configuration"
<del>require "active_support/deprecation"
<ide> require "active_support/hash_with_indifferent_access"
<ide> require "active_support/configuration_file"
<ide> require "rails/engine"
<ide><path>railties/lib/rails/commands/dbconsole/dbconsole_command.rb
<ide> # frozen_string_literal: true
<ide>
<del>require "active_support/deprecation"
<ide> require "active_support/core_ext/string/filters"
<ide> require "rails/command/environment_argument"
<ide>
<ide><path>railties/lib/rails/commands/server/server_command.rb
<ide> require "fileutils"
<ide> require "action_dispatch"
<ide> require "rails"
<del>require "active_support/deprecation"
<ide> require "active_support/core_ext/string/filters"
<ide> require "active_support/core_ext/symbol/starts_ends_with"
<ide> require "rails/dev_caching"
<ide><path>railties/lib/rails/source_annotation_extractor.rb
<ide> # frozen_string_literal: true
<ide>
<del>require "active_support/deprecation"
<del>
<ide> module Rails
<ide> # Implements the logic behind <tt>Rails::Command::NotesCommand</tt>. See <tt>rails notes --help</tt> for usage information.
<ide> # | 4 |
Text | Text | add missing verb | 980cb01a845d92656776fc95ff60a1225f6b3406 | <ide><path>docs/tutorial/tutorial.md
<ide> Square no longer keeps its own state; it receives its value from its parent `Boa
<ide>
<ide> ## Why Immutability Is Important
<ide>
<del>In the previous code example, I suggest using the `.slice()` operator to copy the `squares` array prior to making changes and to prevent mutating the existing array. Let's talk about what this means and why it an important concept to learn.
<add>In the previous code example, I suggest using the `.slice()` operator to copy the `squares` array prior to making changes and to prevent mutating the existing array. Let's talk about what this means and why it is an important concept to learn.
<ide>
<ide> There are generally two ways to change data. The first, and most common method in the past, has been to *mutate* the data by directly changing the values of a variable. The second method is to replace the data with a new copy of the object that also includes the desired changes.
<ide> | 1 |
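The tutorial's `.slice()` advice translates directly to other languages. In Python, copying with `squares[:]` before assigning keeps every previous board snapshot intact (illustrative only — not part of the tutorial's code):

```python
def with_move(squares, index, player):
    """Return a new board with the move applied, leaving `squares` untouched."""
    next_squares = squares[:]          # shallow copy, like .slice() in JS
    next_squares[index] = player
    return next_squares


# Board snapshots survive because we never mutate a board in place.
history = [[None] * 9]
history.append(with_move(history[-1], 0, 'X'))
```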
Mixed | Ruby | add heif image types to variable content types | 87f37419c463e03cdfb73ac70ce87197c5f829f3 | <ide><path>activestorage/lib/active_storage/engine.rb
<ide> class Engine < Rails::Engine # :nodoc:
<ide> image/vnd.adobe.photoshop
<ide> image/vnd.microsoft.icon
<ide> image/webp
<add> image/avif
<add> image/heic
<add> image/heif
<ide> )
<ide>
<ide> config.active_storage.web_image_content_types = %w(
<ide><path>guides/source/configuring.md
<ide> You can find more detailed configuration options in the
<ide> config.active_storage.paths[:ffprobe] = '/usr/local/bin/ffprobe'
<ide> ```
<ide>
<del>* `config.active_storage.variable_content_types` accepts an array of strings indicating the content types that Active Storage can transform through ImageMagick. The default is `%w(image/png image/gif image/jpg image/jpeg image/pjpeg image/tiff image/bmp image/vnd.adobe.photoshop image/vnd.microsoft.icon image/webp)`.
<add>* `config.active_storage.variable_content_types` accepts an array of strings indicating the content types that Active Storage can transform through ImageMagick. The default is `%w(image/png image/gif image/jpg image/jpeg image/pjpeg image/tiff image/bmp image/vnd.adobe.photoshop image/vnd.microsoft.icon image/webp image/avif image/heic image/heif)`.
<ide>
<del>* `config.active_storage.web_image_content_types` accepts an array of strings regarded as web image content types in which variants can be processed without being converted to the fallback PNG format. If you want to use `WebP` variants in your application you can add `image/webp` to this array. The default is `%w(image/png image/jpeg image/jpg image/gif)`.
<add>* `config.active_storage.web_image_content_types` accepts an array of strings regarded as web image content types in which variants can be processed without being converted to the fallback PNG format. If you want to use `WebP` or `AVIF` variants in your application you can add `image/webp` or `image/avif` to this array. The default is `%w(image/png image/jpeg image/jpg image/gif)`.
<ide>
<ide> * `config.active_storage.content_types_to_serve_as_binary` accepts an array of strings indicating the content types that Active Storage will always serve as an attachment, rather than inline. The default is `%w(text/html
<ide> text/javascript image/svg+xml application/postscript application/x-shockwave-flash text/xml application/xml application/xhtml+xml application/mathml+xml text/cache-manifest)`. | 2 |
Javascript | Javascript | remove listeners on bind error | 115792dfde436354c82f38361cbff75ad226b8d0 | <ide><path>lib/dgram.js
<ide> Socket.prototype.bind = function(port_, address_ /* , callback */) {
<ide>
<ide> state.bindState = BIND_STATE_BINDING;
<ide>
<del> if (arguments.length && typeof arguments[arguments.length - 1] === 'function')
<del> this.once('listening', arguments[arguments.length - 1]);
<add> const cb = arguments.length && arguments[arguments.length - 1];
<add> if (typeof cb === 'function') {
<add> function removeListeners() {
<add> this.removeListener('error', removeListeners);
<add> this.removeListener('listening', onListening);
<add> }
<add>
<add> function onListening() {
<add> removeListeners.call(this);
<add> cb.call(this);
<add> }
<add>
<add> this.on('error', removeListeners);
<add> this.on('listening', onListening);
<add> }
<ide>
<ide> if (port instanceof UDP) {
<ide> replaceHandle(this, port);
<ide><path>test/parallel/test-dgram-bind-error-repeat.js
<add>'use strict';
<add>const common = require('../common');
<add>const dgram = require('dgram');
<add>
<add>// Regression test for https://github.com/nodejs/node/issues/30209
<add>// No warning should be emitted when re-trying `.bind()` on UDP sockets
<add>// repeatedly.
<add>
<add>process.on('warning', common.mustNotCall());
<add>
<add>const reservePortSocket = dgram.createSocket('udp4');
<add>reservePortSocket.bind(() => {
<add> const { port } = reservePortSocket.address();
<add>
<add> const newSocket = dgram.createSocket('udp4');
<add>
<add> let errors = 0;
<add> newSocket.on('error', common.mustCall(() => {
<add> if (++errors < 20) {
<add> newSocket.bind(port, common.mustNotCall());
<add> } else {
<add> newSocket.close();
<add> reservePortSocket.close();
<add> }
<add> }, 20));
<add> newSocket.bind(port, common.mustNotCall());
<add>}); | 2 |
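The fix above exists because each failed `bind()` retry would otherwise stack another `'listening'` listener that never fires. The shape of the fix — pair the two listeners and remove both on whichever fires first — is easy to model with a toy emitter (a sketch, not Node's API):

```python
class Emitter:
    """Tiny on/emit emitter to model the listener leak the patch avoids."""

    def __init__(self):
        self.listeners = {'error': [], 'listening': []}

    def on(self, event, fn):
        self.listeners[event].append(fn)

    def emit(self, event, *args):
        for fn in list(self.listeners[event]):  # copy: handlers may remove
            fn(*args)


def bind_with_callback(sock, cb):
    # Mirror of the patch: both handlers tear down the pair, so a retry
    # after 'error' never accumulates stale 'listening' callbacks.
    def remove_listeners():
        sock.listeners['error'].remove(on_error)
        sock.listeners['listening'].remove(on_listening)

    def on_error(*_):
        remove_listeners()

    def on_listening(*_):
        remove_listeners()
        cb()

    sock.on('error', on_error)
    sock.on('listening', on_listening)
```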
PHP | PHP | fix comment typo | 874f02c7590e9e299515fc4157cef364b5cae111 | <ide><path>laravel/uri.php
<ide> public static function current()
<ide> // and use the first one we encounter for the URI.
<ide> static::$uri = static::detect();
<ide>
<del> // If you ever encounter this error, please information the Laravel
<add> // If you ever encounter this error, please inform the nerdy Laravel
<ide> // dev team with information about your server. We want to support
<ide> // Laravel an as many server environments as possible!
<ide> if (is_null(static::$uri)) | 1 |
Python | Python | remove redundant exception | 32a1f7ff2cfa88ca4c188656f970b5a9fc10a529 | <ide><path>airflow/models.py
<ide> def signal_handler(signum, frame):
<ide> else:
<ide> task_copy.execute(context=context)
<ide> task_copy.post_execute(context=context)
<del> except (Exception, StandardError, KeyboardInterrupt) as e:
<add> except (Exception, KeyboardInterrupt) as e:
<ide> self.handle_failure(e, test_mode, context)
<ide> raise
<ide> | 1 |
PHP | PHP | remove leading slash and extra brackets | 6cac4df0f66a8624eb553169291c2f09842120e1 | <ide><path>tests/Notifications/NotificationMailChannelTest.php
<ide> public function toMail($notifiable)
<ide> $mock = Mockery::mock(Illuminate\Contracts\Mail\Mailable::class);
<ide>
<ide> $mock->shouldReceive('send')->once()->with(Mockery::on(function ($mailer) {
<del> if (! ($mailer instanceof \Illuminate\Contracts\Mail\Mailer)) {
<add> if (! $mailer instanceof Illuminate\Contracts\Mail\Mailer) {
<ide> return false;
<ide> }
<ide> | 1 |
Ruby | Ruby | use a single thread for all connectionpool reapers | 3e2e8eeb9ea552bd4782538cf9348455f3d0e14a | <ide><path>activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb
<ide> def initialize(pool, frequency)
<ide> @frequency = frequency
<ide> end
<ide>
<add> @@mutex = Mutex.new
<add> @@pools = {}
<add>
<add> def self.register_pool(pool, frequency) # :nodoc:
<add> @@mutex.synchronize do
<add> if @@pools.key?(frequency)
<add> @@pools[frequency] << pool
<add> else
<add> @@pools[frequency] = [pool]
<add> Thread.new(frequency) do |t|
<add> loop do
<add> sleep t
<add> @@mutex.synchronize do
<add> @@pools[frequency].each do |p|
<add> p.reap
<add> p.flush
<add> end
<add> end
<add> end
<add> end
<add> end
<add> end
<add> end
<add>
<ide> def run
<ide> return unless frequency && frequency > 0
<del> Thread.new(frequency, pool) { |t, p|
<del> loop do
<del> sleep t
<del> p.reap
<del> p.flush
<del> end
<del> }
<add> self.class.register_pool(pool, frequency)
<ide> end
<ide> end
<ide> | 1 |
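The core of this patch is a registry that groups pools by reap frequency and spawns one worker per distinct frequency instead of one per pool. The grouping logic can be sketched with an injectable spawn hook so it stays testable without real threads (Rails uses `Thread.new` directly; names here are illustrative):

```python
import threading


class ReaperRegistry:
    """Group pools by reap frequency; spawn one worker per distinct frequency."""

    def __init__(self, spawn=None):
        self._mutex = threading.Lock()
        self._pools = {}
        # Injectable so tests can observe spawns without starting threads.
        self._spawn = spawn or (lambda frequency: None)

    def register_pool(self, pool, frequency):
        with self._mutex:
            if frequency in self._pools:
                self._pools[frequency].append(pool)
            else:
                self._pools[frequency] = [pool]
                self._spawn(frequency)  # first pool at this cadence

    def reap_all(self, frequency):
        # Body of the worker loop: reap and flush every registered pool.
        with self._mutex:
            for pool in self._pools.get(frequency, []):
                pool.reap()
                pool.flush()
```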
Python | Python | fix a bunch of tyops | b52e27263f46c5afb326e6eb4ec78a7cb3d99cfd | <ide><path>libcloud/compute/drivers/gce.py
<ide> def __init__(self, id, name, cidr, driver, extra=None):
<ide>
<ide> def destroy(self):
<ide> """
<del> Destroy this newtwork
<add> Destroy this network
<ide>
<ide> :return: True if successful
<ide> :rtype: ``bool``
<ide> def _get_next_maint(self):
<ide> :return: A dictionary containing maintenance window info (or None if
<ide> no maintenance windows are scheduled)
<ide> The dictionary contains 4 keys with values of type ``str``
<del> - name: The name of the maintence window
<add> - name: The name of the maintenance window
<ide> - description: Description of the maintenance window
<ide> - beginTime: RFC3339 Timestamp
<ide> - endTime: RFC3339 Timestamp
<ide> def ex_create_healthcheck(self, name, host=None, path=None, port=None,
<ide> :param name: Name of health check
<ide> :type name: ``str``
<ide>
<del> :keyword host: Hostname of health check requst. Defaults to empty and
<del> public IP is used instead.
<add> :keyword host: Hostname of health check request. Defaults to empty
<add> and public IP is used instead.
<ide> :type host: ``str``
<ide>
<ide> :keyword path: The request path for the check. Defaults to /.
<ide> def create_node(self, name, size, image, location=None,
<ide> :keyword ex_network: The network to associate with the node.
<ide> :type ex_network: ``str`` or :class:`GCENetwork`
<ide>
<del> :keyword ex_tags: A list of tags to assiciate with the node.
<add> :keyword ex_tags: A list of tags to associate with the node.
<ide> :type ex_tags: ``list`` of ``str`` or ``None``
<ide>
<ide> :keyword ex_metadata: Metadata dictionary for instance.
<ide> def deploy_node(self, name, size, image, script, location=None,
<ide> :keyword ex_network: The network to associate with the node.
<ide> :type ex_network: ``str`` or :class:`GCENetwork`
<ide>
<del> :keyword ex_tags: A list of tags to assiciate with the node.
<add> :keyword ex_tags: A list of tags to associate with the node.
<ide> :type ex_tags: ``list`` of ``str`` or ``None``
<ide>
<ide> :return: A Node object for the new node.
<ide> def destroy_volume_snapshot(self, snapshot):
<ide> :param snapshot: Snapshot object to destroy
<ide> :type snapshot: :class:`GCESnapshot`
<ide>
<del> :return: True if successfull
<add> :return: True if successful
<ide> :rtype: ``bool``
<ide> """
<ide> request = '/global/snapshots/%s' % (snapshot.name)
<ide> def _match_images(self, project, partial_name):
<ide> image.
<ide> :type partial_name: ``str``
<ide>
<del> :return: The latest image object that maches the partial name or None
<add> :return: The latest image object that matches the partial name or None
<ide> if no matching image is found.
<ide> :rtype: :class:`NodeImage` or ``None``
<ide> """
<ide> def _create_node_req(self, name, size, image, location, network,
<ide> external_ip='ephemeral'):
<ide> """
<ide> Returns a request and body to create a new node. This is a helper
<del> method to suppor both :class:`create_node` and
<add> method to support both :class:`create_node` and
<ide> :class:`ex_create_multiple_nodes`.
<ide>
<ide> :param name: The name of the node to create.
<ide> def _create_node_req(self, name, size, image, location, network,
<ide> :param network: The network to associate with the node.
<ide> :type network: :class:`GCENetwork`
<ide>
<del> :keyword tags: A list of tags to assiciate with the node.
<add> :keyword tags: A list of tags to associate with the node.
<ide> :type tags: ``list`` of ``str``
<ide>
<ide> :keyword metadata: Metadata dictionary for instance.
<ide> def _create_vol_req(self, size, name, location=None, snapshot=None,
<ide> :keyword image: Image to create disk from.
<ide> :type image: :class:`NodeImage` or ``str`` or ``None``
<ide>
<del> :return: Tuple containg the request string, the data dictionary and
<add> :return: Tuple containing the request string, the data dictionary and
<ide> the URL parameters
<ide> :rtype: ``tuple``
<ide> """ | 1 |
Javascript | Javascript | add trailing dot to l and l | 1e96d877aa61aecd5fd85da4c00ff84e0ebe6df5 | <ide><path>src/locale/ko.js
<ide> export default moment.defineLocale('ko', {
<ide> longDateFormat : {
<ide> LT : 'A h:mm',
<ide> LTS : 'A h:mm:ss',
<del> L : 'YYYY.MM.DD',
<add> L : 'YYYY.MM.DD.',
<ide> LL : 'YYYY년 MMMM D일',
<ide> LLL : 'YYYY년 MMMM D일 A h:mm',
<ide> LLLL : 'YYYY년 MMMM D일 dddd A h:mm',
<del> l : 'YYYY.MM.DD',
<add> l : 'YYYY.MM.DD.',
<ide> ll : 'YYYY년 MMMM D일',
<ide> lll : 'YYYY년 MMMM D일 A h:mm',
<ide> llll : 'YYYY년 MMMM D일 dddd A h:mm'
<ide><path>src/test/locale/ko.js
<ide> test('format', function (assert) {
<ide> ['a A', '오후 오후'],
<ide> ['일년 중 DDDo째 되는 날', '일년 중 45일째 되는 날'],
<ide> ['LTS', '오후 3:25:50'],
<del> ['L', '2010.02.14'],
<add> ['L', '2010.02.14.'],
<ide> ['LL', '2010년 2월 14일'],
<ide> ['LLL', '2010년 2월 14일 오후 3:25'],
<ide> ['LLLL', '2010년 2월 14일 일요일 오후 3:25'],
<del> ['l', '2010.02.14'],
<add> ['l', '2010.02.14.'],
<ide> ['ll', '2010년 2월 14일'],
<ide> ['lll', '2010년 2월 14일 오후 3:25'],
<ide> ['llll', '2010년 2월 14일 일요일 오후 3:25'] | 2 |
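The `L`/`l` change adds the trailing period Korean numeric dates use. The target string is easy to reproduce with `strftime` (just to show the expected output; moment's tokens differ from strftime's):

```python
from datetime import datetime


def short_korean_date(dt):
    # 'YYYY.MM.DD.' with the trailing dot the locale patch adds.
    return dt.strftime('%Y.%m.%d.')
```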
Text | Text | fix typo in 'integer' word | efa4e09efa9cd95c9737f4fccee60606e1820002 | <ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-course/data-analysis-example-b.english.md
<ide> question:
<ide>
<ide> answers:
<ide> - |
<del> Retrieve a subset of rows and columns by supplying interger-location arguments.
<add> Retrieve a subset of rows and columns by supplying integer-location arguments.
<ide> - |
<ide> Access a group of rows and columns by supplying label(s) arguments.
<ide> - | | 1 |
Javascript | Javascript | update pane to use async showsavedialog | 95a994a1f856dbf01992bffc472e19fd926d8e91 | <ide><path>src/pane.js
<ide> class Pane {
<ide> // after the item is successfully saved, or with the error if it failed.
<ide> // The return value will be that of `nextAction` or `undefined` if it was not
<ide> // provided
<del> saveItemAs (item, nextAction) {
<add> async saveItemAs (item, nextAction) {
<ide> if (!item) return
<ide> if (typeof item.saveAs !== 'function') return
<ide>
<ide> class Pane {
<ide> const itemPath = item.getPath()
<ide> if (itemPath && !saveOptions.defaultPath) saveOptions.defaultPath = itemPath
<ide>
<del> const newItemPath = this.applicationDelegate.showSaveDialog(saveOptions)
<del> if (newItemPath) {
<del> return promisify(() => item.saveAs(newItemPath))
<del> .then(() => {
<del> if (nextAction) nextAction()
<del> })
<del> .catch(error => {
<del> if (nextAction) {
<del> nextAction(error)
<del> } else {
<del> this.handleSaveError(error, item)
<del> }
<del> })
<del> } else if (nextAction) {
<del> return nextAction(new SaveCancelledError('Save Cancelled'))
<del> }
<add> let resolveSaveDialogPromise = null
<add> const saveDialogPromise = new Promise(resolve => { resolveSaveDialogPromise = resolve })
<add> this.applicationDelegate.showSaveDialog(saveOptions, newItemPath => {
<add> if (newItemPath) {
<add> promisify(() => item.saveAs(newItemPath))
<add> .then(() => {
<add> if (nextAction) {
<add> resolveSaveDialogPromise(nextAction())
<add> } else {
<add> resolveSaveDialogPromise()
<add> }
<add> })
<add> .catch(error => {
<add> if (nextAction) {
<add> resolveSaveDialogPromise(nextAction(error))
<add> } else {
<add> this.handleSaveError(error, item)
<add> resolveSaveDialogPromise()
<add> }
<add> })
<add> } else if (nextAction) {
<add> resolveSaveDialogPromise(nextAction(new SaveCancelledError('Save Cancelled')))
<add> } else {
<add> resolveSaveDialogPromise()
<add> }
<add> })
<add>
<add> return await saveDialogPromise
<ide> }
<ide>
<ide> // Public: Save all items. | 1 |
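The rewrite wraps Electron's callback-style save dialog in a promise so `saveItemAs` can remain awaitable. The same callback-to-awaitable bridge in Python's asyncio (a sketch — `show_save_dialog` is a stand-in for the callback API, not Atom's):

```python
import asyncio


def show_save_dialog(options, callback):
    # Stand-in for a callback-style native dialog; immediately "chooses"
    # the default path (or None to simulate the user cancelling).
    callback(options.get('default_path'))


async def save_item_as(item, options):
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    # Bridge: resolve the future from inside the callback, mirroring
    # resolveSaveDialogPromise in the patch.
    show_save_dialog(options, lambda path: future.set_result(path))
    new_path = await future
    if new_path is None:
        raise RuntimeError('Save Cancelled')
    item['path'] = new_path
    return new_path
```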
Ruby | Ruby | remove unused `buffer` class | cb5684831e30e133ea9d771d6354adaa7e0af8cc | <ide><path>Library/Homebrew/cask/lib/hbc/utils.rb
<ide>
<ide> BUG_REPORTS_URL = "https://github.com/caskroom/homebrew-cask#reporting-bugs".freeze
<ide>
<del>class Buffer < StringIO
<del> extend Predicable
<del>
<del> attr_predicate :tty?
<del>
<del> def initialize(tty = false)
<del> super()
<del> @tty = tty
<del> end
<del>end
<del>
<ide> # global methods
<ide>
<ide> def odebug(title, *sput) | 1 |
Javascript | Javascript | decrease expiration time of input updates | 708fa77a783bbe729cfcebdd513d23eafc455b8b | <ide><path>packages/react-reconciler/src/ReactFiberLane.js
<ide> function computeExpirationTime(lane: Lane, currentTime: number) {
<ide> const priority = return_highestLanePriority;
<ide> if (priority >= InputContinuousLanePriority) {
<ide> // User interactions should expire slightly more quickly.
<del> return currentTime + 1000;
<add> //
<add> // NOTE: This is set to the corresponding constant as in Scheduler.js. When
<add> // we made it larger, a product metric in www regressed, suggesting there's
<add> // a user interaction that's being starved by a series of synchronous
<add> // updates. If that theory is correct, the proper solution is to fix the
<add> // starvation. However, this scenario supports the idea that expiration
<add> // times are an important safeguard when starvation does happen.
<add> //
<add> // Also note that, in the case of user input specifically, this will soon no
<add> // longer be an issue because we plan to make user input synchronous by
<add> // default (until you enter `startTransition`, of course.)
<add> //
<add> // If weren't planning to make these updates synchronous soon anyway, I
<add> // would probably make this number a configurable parameter.
<add> return currentTime + 250;
<ide> } else if (priority >= TransitionPriority) {
<ide> return currentTime + 5000;
<ide> } else { | 1 |
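The constants above form a priority→expiration ladder: the higher the lane priority, the sooner starvation protection kicks in. The shape of that computation, with the patch's numbers (a sketch — lane bitmask plumbing omitted, priority values illustrative, and lower tiers simply never expire here):

```python
# Illustrative priority orderings, highest first; not React's internals.
INPUT_CONTINUOUS = 10
TRANSITION = 6


def compute_expiration_time(priority, current_time):
    if priority >= INPUT_CONTINUOUS:
        return current_time + 250    # user input: expire quickly
    if priority >= TRANSITION:
        return current_time + 5000   # transitions: generous budget
    return None                      # e.g. idle/offscreen: no expiration
```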
Python | Python | change all features_to_crop into a list of tensors | 6ebbfe13afda1d38aaf8183cdd3558620ce8ef6b | <ide><path>research/object_detection/meta_architectures/faster_rcnn_meta_arch.py
<ide> def __init__(self,
<ide> else:
<ide> self._first_stage_box_predictor_arg_scope_fn = (
<ide> first_stage_box_predictor_arg_scope_fn)
<del> def rpn_box_predictor_feature_extractor(rpn_features_to_crop):
<add> def rpn_box_predictor_feature_extractor(single_rpn_features_to_crop):
<ide> with slim.arg_scope(self._first_stage_box_predictor_arg_scope_fn()):
<ide> reuse = tf.get_variable_scope().reuse
<ide> return slim.conv2d(
<del> rpn_features_to_crop,
<add> single_rpn_features_to_crop,
<ide> self._first_stage_box_predictor_depth,
<ide> kernel_size=[
<ide> self._first_stage_box_predictor_kernel_size,
<ide> def predict(self, preprocessed_inputs, true_image_shapes, **side_inputs):
<ide> 1) rpn_box_predictor_features: A 4-D float32 tensor with shape
<ide> [batch_size, height, width, depth] to be used for predicting proposal
<ide> boxes and corresponding objectness scores.
<del> 2) rpn_features_to_crop: A 4-D float32 tensor with shape
<add>        2) rpn_features_to_crop: A list of 4-D float32 tensors with shape
<ide> [batch_size, height, width, depth] representing image features to crop
<ide> using the proposal boxes predicted by the RPN.
<ide> 3) image_shape: a 1-D tensor of shape [4] representing the input
<ide> def _predict_first_stage(self, preprocessed_inputs):
<ide> 1) rpn_box_predictor_features: A 4-D float32/bfloat16 tensor with shape
<ide> [batch_size, height, width, depth] to be used for predicting proposal
<ide> boxes and corresponding objectness scores.
<del> 2) rpn_features_to_crop: A 4-D float32/bfloat16 tensor with shape
<add>        2) rpn_features_to_crop: A list of 4-D float32/bfloat16 tensors with shape
<ide> [batch_size, height, width, depth] representing image features to crop
<ide> using the proposal boxes predicted by the RPN.
<ide> 3) image_shape: a 1-D tensor of shape [4] representing the input
<ide> def _predict_first_stage(self, preprocessed_inputs):
<ide> dtype=tf.float32),
<ide> 'anchors':
<ide> anchors_boxlist.data['boxes'],
<del> fields.PredictionFields.feature_maps: [rpn_features_to_crop]
<add> fields.PredictionFields.feature_maps: rpn_features_to_crop
<ide> }
<ide> return prediction_dict
<ide>
<ide> def _predict_second_stage(self, rpn_box_encodings,
<ide> [batch_size, num_valid_anchors, 2] containing class
<ide> predictions (logits) for each of the anchors. Note that this
<ide> tensor *includes* background class predictions (at class index 0).
<del> rpn_features_to_crop: A 4-D float32 or bfloat16 tensor with shape
<add>      rpn_features_to_crop: A list of 4-D float32 or bfloat16 tensors with shape
<ide> [batch_size, height, width, depth] representing image features to crop
<ide> using the proposal boxes predicted by the RPN.
<ide> anchors: 2-D float tensor of shape
<ide> def _box_prediction(self, rpn_features_to_crop, proposal_boxes_normalized,
<ide> """Predicts the output tensors from second stage of Faster R-CNN.
<ide>
<ide> Args:
<del> rpn_features_to_crop: A 4-D float32 or bfloat16 tensor with shape
<add>      rpn_features_to_crop: A list of 4-D float32 or bfloat16 tensors with shape
<ide> [batch_size, height, width, depth] representing image features to crop
<ide> using the proposal boxes predicted by the RPN.
<ide> proposal_boxes_normalized: A float tensor with shape [batch_size,
<ide> def _extract_rpn_feature_maps(self, preprocessed_inputs):
<ide> preprocessed_inputs: a [batch, height, width, channels] image tensor.
<ide>
<ide> Returns:
<del> rpn_box_predictor_features: A 4-D float32 tensor with shape
<add>      rpn_box_predictor_features: A list of 4-D float32 tensors with shape
<ide> [batch, height, width, depth] to be used for predicting proposal boxes
<ide> and corresponding objectness scores.
<del> rpn_features_to_crop: A 4-D float32 tensor with shape
<add>      rpn_features_to_crop: A list of 4-D float32 tensors with shape
<ide> [batch, height, width, depth] representing image features to crop using
<ide> the proposals boxes.
<ide> anchors: A BoxList representing anchors (for the RPN) in
<ide> def _extract_rpn_feature_maps(self, preprocessed_inputs):
<ide>
<ide> rpn_features_to_crop, self.endpoints = self._extract_proposal_features(
<ide> preprocessed_inputs)
<del>
<del> feature_map_shape = tf.shape(rpn_features_to_crop)
<del> anchors = box_list_ops.concatenate(
<del> self._first_stage_anchor_generator.generate([(feature_map_shape[1],
<del> feature_map_shape[2])]))
<del> rpn_box_predictor_features = (
<del> self._first_stage_box_predictor_first_conv(rpn_features_to_crop))
<add>
<add> # decide if rpn_features_to_crop is a list. If not make it a list
<add> if not isinstance(rpn_features_to_crop, list):
<add> rpn_features_to_crop = [rpn_features_to_crop]
<add>
<add> rpn_box_predictor_features = []
<add> for single_rpn_features_to_crop in rpn_features_to_crop:
<add> feature_map_shape = tf.shape(single_rpn_features_to_crop)
<add> anchors = box_list_ops.concatenate(
<add> self._first_stage_anchor_generator.generate([(feature_map_shape[1],
<add> feature_map_shape[2])]))
<add> single_rpn_box_predictor_features = (
<add> self._first_stage_box_predictor_first_conv(single_rpn_features_to_crop))
<add> rpn_box_predictor_features.append(single_rpn_box_predictor_features)
<ide> return (rpn_box_predictor_features, rpn_features_to_crop,
<ide> anchors, image_shape)
<ide>
<ide> def _add_detection_features_output_node(self, detection_boxes,
<ide> Args:
<ide> detection_boxes: a 3-D float32 tensor of shape
<ide> [batch_size, max_detections, 4] which represents the bounding boxes.
<del> rpn_features_to_crop: A 4-D float32 tensor with shape
<add>      rpn_features_to_crop: A list of 4-D float32 tensors with shape
<ide> [batch, height, width, depth] representing image features to crop using
<ide> the proposals boxes.
<ide> | 1 |
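The core refactor here is "accept a single feature map or a list, normalize to a list, then map over it". Stripped of the TensorFlow graph code, the pattern is (illustrative names):

```python
def ensure_list(value):
    """Wrap a bare value so downstream code can always iterate."""
    return value if isinstance(value, list) else [value]


def predict_per_level(features_to_crop, predictor):
    # Mirrors the loop added in _extract_rpn_feature_maps: one predictor
    # pass per feature level, outputs collected in order.
    return [predictor(level) for level in ensure_list(features_to_crop)]
```

Normalizing at the boundary keeps every caller working whether the extractor returns one map or a pyramid of them.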
Ruby | Ruby | extend basic rendering, test it in railties | a2ca04bb3ac178bbe1503ac65dc88f5f3f8cb37f | <ide><path>actionpack/lib/action_controller/metal/rendering.rb
<ide> module BasicRendering
<ide> # :api: public
<ide> def render(*args, &block)
<ide> super(*args, &block)
<del> text = args.first[:text]
<del> if text.present?
<del> self.response_body = text
<add> opts = args.first
<add> if (opts.keys & [:text, :nothing]).present?
<add> self.response_body = if opts.has_key?(:text) && opts[:text].present?
<add> opts[:text]
<add> elsif opts.has_key?(:nothing) && opts[:nothing]
<add> " "
<add> end
<add> else
<add> raise UnsupportedOperationError
<ide> end
<ide> end
<ide>
<ide> def rendered_format
<ide> Mime::TEXT
<ide> end
<add>
<add> class UnsupportedOperationError < StandardError
<add> def initialize
<add> super "Unsupported render operation. BasicRendering supports only :text
<add> and :nothing options. For more, you need to include ActionView."
<add> end
<add> end
<ide> end
<ide>
<ide> module Rendering
<ide><path>actionpack/test/controller/basic_rendering_test.rb
<del>require 'abstract_unit'
<del>
<del>class BasicRenderingController < ActionController::Base
<del> def render_hello_world
<del> render text: "Hello World!"
<del> end
<del>end
<del>
<del>class BasicRenderingTest < ActionController::TestCase
<del> tests BasicRenderingController
<del>
<del> def test_render_hello_world
<del> get :render_hello_world
<del>
<del> assert_equal "Hello World!", @response.body
<del> assert_equal "text/plain", @response.content_type
<del> end
<del>end
<del>
<ide>\ No newline at end of file
<ide><path>railties/test/application/basic_rendering_test.rb
<add>require 'isolation/abstract_unit'
<add>require 'rack/test'
<add>
<add>module ApplicationTests
<add> class BasicRenderingTest < ActiveSupport::TestCase
<add> include ActiveSupport::Testing::Isolation
<add> include Rack::Test::Methods
<add>
<add> def setup
<add> build_app
<add> end
<add>
<add> def teardown
<add> teardown_app
<add> end
<add>
<add> test "Rendering without ActionView" do
<add> gsub_app_file 'config/application.rb', "require 'rails/all'", <<-RUBY
<add> require "active_model/railtie"
<add> require "action_controller/railtie"
<add> RUBY
<add>
<add> # Turn off ActionView and jquery-rails (it depends on AV)
<add> $:.reject! {|path| path =~ /(actionview|jquery\-rails)/ }
<add> boot_rails
<add>
<add> app_file 'app/controllers/pages_controller.rb', <<-RUBY
<add> class PagesController < ApplicationController
<add> def render_hello_world
<add> render text: "Hello World!"
<add> end
<add>
<add> def render_nothing
<add> render nothing: true
<add> end
<add>
<add> def no_render; end
<add>
<add> def raise_error
<add> render foo: "bar"
<add> end
<add> end
<add> RUBY
<add>
<add> get '/pages/render_hello_world'
<add> assert_equal 200, last_response.status
<add> assert_equal "Hello World!", last_response.body
<add> assert_equal "text/plain; charset=utf-8", last_response.content_type
<add>
<add> get '/pages/render_nothing'
<add> assert_equal 200, last_response.status
<add> assert_equal " ", last_response.body
<add> assert_equal "text/plain; charset=utf-8", last_response.content_type
<add>
<add> get '/pages/no_render'
<add> assert_equal 500, last_response.status
<add>
<add> get '/pages/raise_error'
<add> assert_equal 500, last_response.status
<add> end
<add> end
<add>end
<ide><path>railties/test/isolation/abstract_unit.rb
<ide> def app_file(path, contents)
<ide> end
<ide> end
<ide>
<add> def gsub_app_file(path, regexp, *args, &block)
<add> path = "#{app_path}/#{path}"
<add> content = File.read(path).gsub(regexp, *args, &block)
<add> File.open(path, 'wb') { |f| f.write(content) }
<add> end
<add>
<ide> def remove_file(path)
<ide> FileUtils.rm_rf "#{app_path}/#{path}"
<ide> end | 4 |
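The `BasicRendering` contract added above — handle `:text` and `:nothing`, raise on anything else — is small enough to restate as a sketch (a Python stand-in that collapses Rails' nil-body edge cases, not Rails' API):

```python
class UnsupportedOperationError(Exception):
    pass


def basic_render(**options):
    """Return a response body for text/nothing options, else raise."""
    if options.get('text'):
        return options['text']
    if options.get('nothing'):
        return ' '   # the patch answers a single space for `nothing: true`
    raise UnsupportedOperationError(
        'BasicRendering supports only :text and :nothing options.')
```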
Ruby | Ruby | improve human_attribute_name performance | acbc39b66307870480422ff0d5d0279de42ac728 | <ide><path>activemodel/lib/active_model/translation.rb
<ide> def lookup_ancestors
<ide> ancestors.select { |x| x.respond_to?(:model_name) }
<ide> end
<ide>
<add> MISSING_TRANSLATION = Object.new # :nodoc:
<add>
<ide> # Transforms attribute names into a more human format, such as "First name"
<ide> # instead of "first_name".
<ide> #
<ide> # Person.human_attribute_name("first_name") # => "First name"
<ide> #
<ide> # Specify +options+ with additional translating options.
<ide> def human_attribute_name(attribute, options = {})
<del> options = { count: 1 }.merge!(options)
<del> parts = attribute.to_s.split(".")
<del> attribute = parts.pop
<del> namespace = parts.join("/") unless parts.empty?
<del> attributes_scope = "#{i18n_scope}.attributes"
<add> attribute = attribute.to_s
<add>
<add> if attribute.include?(".")
<add> namespace, _, attribute = attribute.rpartition(".")
<add> namespace.tr!(".", "/")
<ide>
<del> if namespace
<ide> defaults = lookup_ancestors.map do |klass|
<del> :"#{attributes_scope}.#{klass.model_name.i18n_key}/#{namespace}.#{attribute}"
<add> :"#{i18n_scope}.attributes.#{klass.model_name.i18n_key}/#{namespace}.#{attribute}"
<ide> end
<del> defaults << :"#{attributes_scope}.#{namespace}.#{attribute}"
<add> defaults << :"#{i18n_scope}.attributes.#{namespace}.#{attribute}"
<ide> else
<ide> defaults = lookup_ancestors.map do |klass|
<del> :"#{attributes_scope}.#{klass.model_name.i18n_key}.#{attribute}"
<add> :"#{i18n_scope}.attributes.#{klass.model_name.i18n_key}.#{attribute}"
<ide> end
<ide> end
<ide>
<ide> defaults << :"attributes.#{attribute}"
<del> defaults << options.delete(:default) if options[:default]
<del> defaults << attribute.humanize
<add> defaults << options[:default] if options[:default]
<add> defaults << MISSING_TRANSLATION
<ide>
<del> options[:default] = defaults
<del> I18n.translate(defaults.shift, **options)
<add> translation = I18n.translate(defaults.shift, count: 1, **options, default: defaults)
<add> translation = attribute.humanize if translation == MISSING_TRANSLATION
<add> translation
<ide> end
<ide> end
<ide> end | 1 |
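The rewrite above avoids computing `attribute.humanize` up front by appending a unique `MISSING_TRANSLATION` sentinel as the last default, then humanizing only when the lookup falls all the way through. The same sentinel-object trick, sketched in Python (a plain dictionary stands in for `I18n.translate`; names are illustrative):

```python
_MISSING = object()  # unique sentinel; checked by identity, never equal to real data

def human_attribute_name(translations, keys, attribute):
    for key in keys:
        value = translations.get(key, _MISSING)
        if value is not _MISSING:
            return value
    # Build the humanized fallback only when every translation key missed,
    # instead of eagerly constructing it for each call.
    return attribute.replace("_", " ").capitalize()
```

The win is that the common case (a translation exists) never pays for building the fallback string.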
Python | Python | fix ui redirect | 56e7555c42f013f789a4b718676ff09b4a9d5135 | <ide><path>airflow/www/views.py
<ide> def confirm(self):
<ide> task_id = args.get('task_id')
<ide> dag_run_id = args.get('dag_run_id')
<ide> state = args.get('state')
<del> origin = args.get('origin')
<add> origin = get_safe_url(args.get('origin'))
<ide>
<ide> if 'map_index' not in args:
<ide> map_indexes: list[int] | None = None | 1 |
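The fix above routes the user-supplied `origin` parameter through `get_safe_url` before it is used, closing an open-redirect hole. A minimal sketch of that kind of check in Python (the allowed hosts, fallback path, and function name are assumptions for illustration, not Airflow's implementation):

```python
from urllib.parse import urlparse

def get_safe_url(url, allowed_hosts=("example.com",), fallback="/home"):
    """Return `url` only if it is relative or points at an allowed host."""
    if not url:
        return fallback
    parsed = urlparse(url)
    # Relative URLs have an empty netloc; a production check should also
    # reject unexpected schemes (e.g. javascript:).
    if not parsed.netloc or parsed.netloc in allowed_hosts:
        return url
    return fallback
```

Anything pointing off-site falls back to a known-safe location instead of being echoed into the redirect.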
Javascript | Javascript | add default option for platform.select | f30ab35e9278f120247f49d6a355e263cc357946 | <ide><path>Libraries/Utilities/Platform.android.js
<ide> const Platform = {
<ide> const constants = require('NativeModules').AndroidConstants;
<ide> return constants && constants.isTesting;
<ide> },
<del> select: (obj: Object) => obj.android,
<add> select: (obj: Object) => 'android' in obj ? obj.android : obj.default,
<ide> };
<ide>
<ide> module.exports = Platform;
<ide><path>Libraries/Utilities/Platform.ios.js
<ide> const Platform = {
<ide> const constants = require('NativeModules').IOSConstants;
<ide> return constants && constants.isTesting;
<ide> },
<del> select: (obj: Object) => obj.ios,
<add> select: (obj: Object) => 'ios' in obj ? obj.ios : obj.default,
<ide> };
<ide>
<ide> module.exports = Platform;
<ide><path>packager/src/JSTransformer/worker/__tests__/inline-test.js
<ide> describe('inline constants', () => {
<ide> expect(toString(ast)).toEqual(normalize(code.replace(/Platform\.select[^;]+/, '1')));
<ide> });
<ide>
<add> it('inlines Platform.select in the code if Platform is a global and the argument doesn\'t have target platform in its keys', () => {
<add> const code = `function a() {
<add> var a = Platform.select({ios: 1, default: 2});
<add> var b = a.Platform.select({ios: 1, default: 2});
<add> }`;
<add> const {ast} = inline('arbitrary.js', {code}, {platform: 'android'});
<add> expect(toString(ast)).toEqual(normalize(code.replace(/Platform\.select[^;]+/, '2')));
<add> });
<add>
<ide> it('replaces Platform.select in the code if Platform is a top level import', () => {
<ide> const code = `
<ide> var Platform = require('Platform');
<ide><path>packager/src/JSTransformer/worker/inline.js
<ide> const isDev = (node, parent, scope) =>
<ide> isGlobal(scope.getBinding(dev.name)) &&
<ide> !(t.isMemberExpression(parent));
<ide>
<del>function findProperty(objectExpression, key) {
<add>function findProperty(objectExpression, key, fallback) {
<ide> const property = objectExpression.properties.find(p => p.key.name === key);
<del> return property ? property.value : t.identifier('undefined');
<add> return property ? property.value : fallback();
<ide> }
<ide>
<ide> const inlinePlugin = {
<ide> const inlinePlugin = {
<ide> isPlatformSelect(node, scope, opts.isWrapped) ||
<ide> isReactPlatformSelect(node, scope, opts.isWrapped)
<ide> ) {
<add> const fallback = () =>
<add> findProperty(arg, 'default', () => t.identifier('undefined'));
<ide> const replacement = t.isObjectExpression(arg)
<del> ? findProperty(arg, opts.platform)
<add> ? findProperty(arg, opts.platform, fallback)
<ide> : node;
<ide>
<ide> path.replaceWith(replacement); | 4 |
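The change above makes `Platform.select` fall back to a `default` key when the current platform is absent, both at runtime and in the Babel inliner. The lookup rule itself is easy to state in Python (a sketch of the semantics, not the React Native code):

```python
def platform_select(spec, platform):
    # Prefer the exact platform key; otherwise fall back to 'default';
    # otherwise there is nothing to select.
    return spec[platform] if platform in spec else spec.get("default")
```

This mirrors the JS ternary `'android' in obj ? obj.android : obj.default`, including the case where neither key exists.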
Java | Java | move state inside subscriber | c54b3220701735d3db7fb2f42559117413678c88 | <ide><path>rxjava-core/src/main/java/rx/operators/OperatorRetry.java
<ide> private static final int INFINITE_RETRY = -1;
<ide>
<ide> private final int retryCount;
<del> private final AtomicInteger attempts = new AtomicInteger(0);
<ide>
<ide> public OperatorRetry(int retryCount) {
<ide> this.retryCount = retryCount;
<ide> public OperatorRetry() {
<ide> @Override
<ide> public Subscriber<? super Observable<T>> call(final Subscriber<? super T> s) {
<ide> return new Subscriber<Observable<T>>(s) {
<del>
<add> final AtomicInteger attempts = new AtomicInteger(0);
<add>
<ide> @Override
<ide> public void onCompleted() {
<ide> // ignore as we expect a single nested Observable<T> | 1 |
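Moving `attempts` from the operator into the subscriber gives each subscription its own retry counter instead of one shared across every subscription to the same operator instance. The difference is visible in this Python sketch, where the counter lives in a per-subscription closure (names are illustrative):

```python
def make_subscribe():
    """Each call to subscribe() gets an independent attempt counter."""
    def subscribe():
        attempts = 0  # fresh per subscription, not stored on the operator
        def on_error():
            nonlocal attempts
            attempts += 1
            return attempts
        return on_error
    return subscribe

subscribe = make_subscribe()
```

Had `attempts` lived one level up (on the operator), a second subscription would have started counting from wherever the first one left off.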
Python | Python | use correct connection id | 2a80e73d0038c7210966db4a2329874eab528820 | <ide><path>tests/core.py
<ide> def test_mysql_to_hive_partition(self):
<ide> sql = "SELECT * FROM baby_names LIMIT 1000;"
<ide> t = MySqlToHiveTransfer(
<ide> task_id='test_m2h',
<del> mysql_conn_id='airflow_db',
<add> mysql_conn_id='airflow_ci',
<ide> hive_cli_conn_id='beeline_default',
<ide> sql=sql,
<ide> hive_table='test_mysql_to_hive_part', | 1 |
PHP | PHP | fix another test | e2b8b9d2d7ff78297aae4c928d9cf1c60604408b | <ide><path>tests/TestCase/ORM/QueryTest.php
<ide> public function testSelectLargeNumbers()
<ide> ->first();
<ide> $this->assertNotEmpty($out, 'Should get a record');
<ide> // There will be loss of precision if too large/small value is set as float instead of string.
<del> $this->assertSame('0.1234567890123500000', $out->fraction);
<add> $this->assertRegExp('/^0?\.123456789012350+$/', $out->fraction);
<ide> }
<ide>
<ide> /** | 1 |
Ruby | Ruby | use a url instead of an url everywhere | 6d133a482af2428cfb1b67714ce7ffcb5bbd5f29 | <ide><path>actionpack/lib/action_dispatch/routing/route_set.rb
<ide> def define_url_helper(mod, route, name, opts, route_key, url_strategy)
<ide> if last.permitted?
<ide> args.pop.to_h
<ide> else
<del> raise ArgumentError, "Generating an URL from non sanitized request parameters is insecure!"
<add> raise ArgumentError, "Generating a URL from non sanitized request parameters is insecure!"
<ide> end
<ide> end
<ide> helper.call self, args, options
<ide><path>actionpack/lib/action_dispatch/routing/url_for.rb
<ide> def url_for(options = nil)
<ide> route_name)
<ide> when ActionController::Parameters
<ide> unless options.permitted?
<del> raise ArgumentError.new("Generating an URL from non sanitized request parameters is insecure!")
<add> raise ArgumentError.new("Generating a URL from non sanitized request parameters is insecure!")
<ide> end
<ide> route_name = options.delete :use_route
<ide> _routes.url_for(options.to_h.symbolize_keys.
<ide><path>actionpack/test/controller/redirect_test.rb
<ide> def test_redirect_to_params
<ide> error = assert_raise(ArgumentError) do
<ide> get :redirect_to_params
<ide> end
<del> assert_equal "Generating an URL from non sanitized request parameters is insecure!", error.message
<add> assert_equal "Generating a URL from non sanitized request parameters is insecure!", error.message
<ide> end
<ide>
<ide> def test_redirect_to_with_block | 3 |
Java | Java | update javadoc in extendedbeaninfo | fc859ffd6e0d5ca8e69b736bc93baf8081b89932 | <ide><path>spring-beans/src/main/java/org/springframework/beans/ExtendedBeanInfo.java
<ide> * {@link Introspector#getBeanInfo(Class)}) by including non-void returning setter
<ide> * methods in the collection of {@link #getPropertyDescriptors() property descriptors}.
<ide> * Both regular and
<del> * <a href="http://download.oracle.com/javase/tutorial/javabeans/properties/indexed.html">
<add> * <a href="http://docs.oracle.com/javase/tutorial/javabeans/writing/properties.html">
<ide> * indexed properties</a> are fully supported.
<ide> *
<ide> * <p>The wrapped {@code BeanInfo} object is not modified in any way.
<ide> public PropertyDescriptor[] getPropertyDescriptors() {
<ide>
<ide>
<ide> /**
<del> * Sorts PropertyDescriptor instances alphanumerically to emulate the behavior of {@link java.beans.BeanInfo#getPropertyDescriptors()}.
<add> * Sorts PropertyDescriptor instances alpha-numerically to emulate the behavior of
<add> * {@link java.beans.BeanInfo#getPropertyDescriptors()}.
<ide> *
<ide> * @see ExtendedBeanInfo#propertyDescriptors
<ide> */
<ide><path>spring-beans/src/main/java/org/springframework/beans/ExtendedBeanInfoFactory.java
<ide> class ExtendedBeanInfoFactory implements Ordered, BeanInfoFactory {
<ide>
<ide> /**
<ide> * Return whether the given bean class declares or inherits any non-void returning
<del> * JavaBeans or <em>indexed</em> setter methods.
<add> * JavaBeans or <em>indexed property</em> setter methods.
<ide> */
<ide> public boolean supports(Class<?> beanClass) {
<ide> for (Method method : beanClass.getMethods()) { | 2 |
Ruby | Ruby | require unzip to be installed | ddeadaefce845256be3371c0d914f73274586376 | <ide><path>Library/Homebrew/dev-cmd/pr-pull.rb
<ide> def download_artifact(url, dir, pr)
<ide> def pr_pull
<ide> args = pr_pull_args.parse
<ide>
<add> # Needed when extracting the CI artifact.
<add> ensure_executable!("unzip", reason: "extracting CI artifacts")
<add>
<ide> workflows = args.workflows.presence || ["tests.yml"]
<ide> artifact = args.artifact || "bottles"
<ide> tap = Tap.fetch(args.tap || CoreTap.instance.name) | 1 |
Ruby | Ruby | drop unnecessary string conversion in skip_clean | 676f29d7578c7abf1493ee5c3edc4692f9315006 | <ide><path>Library/Homebrew/formula.rb
<ide> def skip_clean *paths
<ide> return
<ide> end
<ide>
<del> paths.each do |p|
<del> p = p.to_s unless p == :la # Keep :la in paths as a symbol
<del> skip_clean_paths << p
<del> end
<add> skip_clean_paths.merge(paths)
<ide> end
<ide>
<ide> def skip_clean_all? | 1 |
Python | Python | handle capitalised extensions in list_pictures | 794f814343b655e01de73578d71e8b8a53523687 | <ide><path>keras/preprocessing/image.py
<ide> def load_img(path, grayscale=False, target_size=None,
<ide> def list_pictures(directory, ext='jpg|jpeg|bmp|png|ppm'):
<ide> return [os.path.join(root, f)
<ide> for root, _, files in os.walk(directory) for f in files
<del> if re.match(r'([\w]+\.(?:' + ext + '))', f)]
<add> if re.match(r'([\w]+\.(?:' + ext + '))', f.lower())]
<ide>
<ide>
<ide> class ImageDataGenerator(object): | 1 |
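The fix above lowercases the filename before matching, so `PHOTO.JPG` is picked up alongside `photo.jpg`. The same check, isolated into a small Python predicate:

```python
import re

def is_picture(filename, ext="jpg|jpeg|bmp|png|ppm"):
    # Lowercasing the name makes the extension match case-insensitive
    # while keeping the pattern itself unchanged.
    return re.match(r"([\w]+\.(?:" + ext + r"))", filename.lower()) is not None
```

An alternative would be compiling the pattern with `re.IGNORECASE`; lowercasing the input achieves the same effect with a one-character diff to the original code.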
Go | Go | add flusher check to utils.writeflusher | 3b05005a1262e53d042512e88c52a6dae0f2e93d | <ide><path>api/server/server.go
<ide> func (s *Server) postBuild(eng *engine.Engine, version version.Version, w http.R
<ide> }
<ide> }
<ide>
<del> stdout := engine.NewOutput()
<del> stdout.Set(utils.NewWriteFlusher(w))
<del>
<ide> if version.GreaterThanOrEqualTo("1.8") {
<ide> w.Header().Set("Content-Type", "application/json")
<ide> buildConfig.JSONFormat = true
<ide> func (s *Server) postBuild(eng *engine.Engine, version version.Version, w http.R
<ide> buildConfig.Pull = true
<ide> }
<ide>
<del> buildConfig.Stdout = stdout
<add> output := utils.NewWriteFlusher(w)
<add> buildConfig.Stdout = output
<ide> buildConfig.Context = r.Body
<ide>
<ide> buildConfig.RemoteURL = r.FormValue("remote")
<ide> func (s *Server) postBuild(eng *engine.Engine, version version.Version, w http.R
<ide> }
<ide>
<ide> if err := builder.Build(s.daemon, eng, buildConfig); err != nil {
<del> if !stdout.Used() {
<add> // Do not write the error in the http output if it's still empty.
<add> // This prevents from writing a 200(OK) when there is an interal error.
<add> if !output.Flushed() {
<ide> return err
<ide> }
<ide> sf := streamformatter.NewStreamFormatter(version.GreaterThanOrEqualTo("1.8"))
<ide><path>utils/utils.go
<ide> type WriteFlusher struct {
<ide> sync.Mutex
<ide> w io.Writer
<ide> flusher http.Flusher
<add> flushed bool
<ide> }
<ide>
<ide> func (wf *WriteFlusher) Write(b []byte) (n int, err error) {
<ide> wf.Lock()
<ide> defer wf.Unlock()
<ide> n, err = wf.w.Write(b)
<add> wf.flushed = true
<ide> wf.flusher.Flush()
<ide> return n, err
<ide> }
<ide> func (wf *WriteFlusher) Write(b []byte) (n int, err error) {
<ide> func (wf *WriteFlusher) Flush() {
<ide> wf.Lock()
<ide> defer wf.Unlock()
<add> wf.flushed = true
<ide> wf.flusher.Flush()
<ide> }
<ide>
<add>func (wf *WriteFlusher) Flushed() bool {
<add> wf.Lock()
<add> defer wf.Unlock()
<add> return wf.flushed
<add>}
<add>
<ide> func NewWriteFlusher(w io.Writer) *WriteFlusher {
<ide> var flusher http.Flusher
<ide> if f, ok := w.(http.Flusher); ok { | 2 |
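The Go change above records whether anything has been written yet, so the HTTP handler can choose between returning a plain error (nothing sent, still able to emit a 500) and streaming a formatted error (headers already flushed). A thread-safe Python sketch of that wrapper (class and attribute names are illustrative):

```python
import threading

class WriteFlusher:
    """Wrap a writer, flush after every write, and remember if we ever did."""

    def __init__(self, writer):
        self._writer = writer
        self._lock = threading.Lock()
        self._flushed = False

    def write(self, data):
        with self._lock:
            n = self._writer.write(data)
            self._flushed = True
            self._writer.flush()
            return n

    @property
    def flushed(self):
        with self._lock:
            return self._flushed
```

Callers check `flushed` after a failure: if it is still `False`, no bytes reached the client and a clean error response is still possible.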
Ruby | Ruby | adjust docs of create_or_find_by | 562972caa86feea056065d071ddf13ab9003effa | <ide><path>activerecord/lib/active_record/relation.rb
<ide> def find_or_create_by!(attributes, &block)
<ide> find_by(attributes) || create!(attributes, &block)
<ide> end
<ide>
<del> # Attempts to create a record with the given attributes in a table that has a unique constraint
<add> # Attempts to create a record with the given attributes in a table that has a unique database constraint
<ide> # on one or several of its columns. If a row already exists with one or several of these
<ide> # unique constraints, the exception such an insertion would normally raise is caught,
<ide> # and the existing record with those attributes is found using #find_by!.
<ide> def find_or_create_by!(attributes, &block)
<ide> #
<ide> # There are several drawbacks to #create_or_find_by, though:
<ide> #
<del> # * The underlying table must have the relevant columns defined with unique constraints.
<add> # * The underlying table must have the relevant columns defined with unique database constraints.
<ide> # * A unique constraint violation may be triggered by only one, or at least less than all,
<ide> # of the given attributes. This means that the subsequent #find_by! may fail to find a
<ide> # matching record, which will then raise an <tt>ActiveRecord::RecordNotFound</tt> exception, | 1 |
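The documented pattern relies on a unique database constraint: attempt the INSERT, and on a uniqueness violation fall back to a SELECT. A minimal sketch of that flow with sqlite3 (table and column names are made up for illustration; this is not the Rails implementation):

```python
import sqlite3

def create_or_find_by(conn, number):
    """Insert the row; if a unique constraint fires, fetch the existing one."""
    try:
        conn.execute("INSERT INTO subscriptions (number) VALUES (?)", (number,))
        conn.commit()
    except sqlite3.IntegrityError:
        pass  # another writer got there first; fall through to the lookup
    return conn.execute(
        "SELECT id, number FROM subscriptions WHERE number = ?", (number,)
    ).fetchone()
```

As the docs note, this only works if the relevant columns actually carry a unique constraint; without one, the INSERT simply succeeds twice and you get duplicates.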
Ruby | Ruby | use conditional instead of try | b08bf9979232f40309a826947f8ff85d8ccbaa8a | <ide><path>activerecord/lib/active_record/dynamic_finder_match.rb
<ide> def self.match(method)
<ide> klass = [FindBy, FindByBang, FindOrInitializeCreateBy].find do |klass|
<ide> klass.matches?(method)
<ide> end
<del> klass.try(:new, method)
<add> klass.new(method) if klass
<ide> end
<ide>
<ide> def self.matches?(method) | 1 |
Python | Python | remove print statement | 2883ebfca266f7df9c15542f9b042479c5b94e5e | <ide><path>spacy/tests/regression/test_issue636.py
<ide> def test_issue636(EN, text):
<ide> doc1 = EN(text)
<ide> doc2 = Doc(EN.vocab)
<ide> doc2.from_bytes(doc1.to_bytes())
<del> print([t.lemma_ for t in doc1], [t.lemma_ for t in doc2])
<ide> assert [t.lemma_ for t in doc1] == [t.lemma_ for t in doc2] | 1 |
Go | Go | fix non-tty run issue | 2eaa0a1dd7354e429c056d68515f903351e3eeb4 | <ide><path>commands.go
<ide> func (cli *DockerCli) CmdAttach(args ...string) error {
<ide> connections += 1
<ide> }
<ide> chErrors := make(chan error, connections)
<del> cli.monitorTtySize(cmd.Arg(0))
<add> if container.Config.Tty {
<add> cli.monitorTtySize(cmd.Arg(0))
<add> }
<ide> if splitStderr {
<ide> go func() {
<ide> chErrors <- cli.hijack("POST", "/containers/"+cmd.Arg(0)+"/attach?stream=1&stderr=1", false, nil, os.Stderr)
<ide> func (cli *DockerCli) CmdRun(args ...string) error {
<ide> }
<ide> if connections > 0 {
<ide> chErrors := make(chan error, connections)
<del> cli.monitorTtySize(out.ID)
<add> if config.Tty {
<add> cli.monitorTtySize(out.ID)
<add> }
<ide>
<ide> if splitStderr && config.AttachStderr {
<ide> go func() {
<ide> func (cli *DockerCli) CmdRun(args ...string) error {
<ide> for connections > 0 {
<ide> err := <-chErrors
<ide> if err != nil {
<add> utils.Debugf("Error hijack: %s", err)
<ide> return err
<ide> }
<ide> connections -= 1
<ide> func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in *os.Fi
<ide> defer term.RestoreTerminal(oldState)
<ide> }
<ide> sendStdin := utils.Go(func() error {
<del> _, err := io.Copy(rwc, in)
<add> io.Copy(rwc, in)
<ide> if err := rwc.(*net.TCPConn).CloseWrite(); err != nil {
<del> fmt.Fprintf(os.Stderr, "Couldn't send EOF: %s\n", err)
<add> utils.Debugf("Couldn't send EOF: %s\n", err)
<ide> }
<del> return err
<add> // Discard errors due to pipe interruption
<add> return nil
<ide> })
<ide>
<ide> if err := <-receiveStdout; err != nil {
<add> utils.Debugf("Error receiveStdout: %s", err)
<ide> return err
<ide> }
<ide>
<ide> if !term.IsTerminal(in.Fd()) {
<ide> if err := <-sendStdin; err != nil {
<add> utils.Debugf("Error sendStdin: %s", err)
<ide> return err
<ide> }
<ide> } | 1 |
PHP | PHP | add additional test case for named parameters | cd68002246824b176112458c462f20b552b17ee8 | <ide><path>lib/Cake/Test/Case/View/Helper/FormHelperTest.php
<ide> public function testSecuredFormUrlIgnoresHost() {
<ide> $this->assertNotContains($expected, $result, 'URL is different');
<ide> }
<ide>
<add>/**
<add> * Ensure named parameters work correctly with hash generation.
<add> *
<add> * @return void
<add> */
<add> public function testSecuredFormUrlWorksWithNamedParameter() {
<add> $this->Form->request['_Token'] = array('key' => 'testKey');
<add>
<add> $expected = 'c890c5f041b1d83d1610dee8f52cd257df7ce618%3A';
<add> $this->Form->create('Address', array(
<add> 'url' => array('controller' => 'articles', 'action' => 'view', 1, 'type' => 'red')
<add> ));
<add> $result = $this->Form->secure();
<add> $this->assertContains($expected, $result);
<add> }
<add>
<ide> /**
<ide> * Test that URL, HTML and identifer show up in their hashs.
<ide> * | 1 |
PHP | PHP | fix chunk test when negative value | 57ba218e1dd4a543f24623b1b4e3e89b5e690469 | <ide><path>tests/Support/SupportCollectionTest.php
<ide> public function testChunkWhenGivenLessThanZero()
<ide>
<ide> $this->assertEquals(
<ide> [],
<del> $collection->chunk(0)->toArray()
<add> $collection->chunk(-1)->toArray()
<ide> );
<ide> }
<ide> | 1 |
Javascript | Javascript | move deflate logic in convertimgdatatopng | 94f1dde07d1a72232b699fcd38ffbf4efb7d2514 | <ide><path>src/display/svg.js
<ide> var convertImgDataToPng = (function convertImgDataToPngClosure() {
<ide> return (b << 16) | a;
<ide> }
<ide>
<add> /**
<add> * @param {Uint8Array} literals The input data.
<add> * @returns {Uint8Array} The DEFLATE-compressed data stream in zlib format.
<add> * This is the required format for compressed streams in the PNG format:
<add> * http://www.libpng.org/pub/png/spec/1.2/PNG-Compression.html
<add> */
<add> function deflateSync(literals) {
<add> return deflateSyncUncompressed(literals);
<add> }
<add>
<add> // An implementation of DEFLATE with compression level 0 (Z_NO_COMPRESSION).
<add> function deflateSyncUncompressed(literals) {
<add> var len = literals.length;
<add> var maxBlockLength = 0xFFFF;
<add>
<add> var deflateBlocks = Math.ceil(len / maxBlockLength);
<add> var idat = new Uint8Array(2 + len + deflateBlocks * 5 + 4);
<add> var pi = 0;
<add> idat[pi++] = 0x78; // compression method and flags
<add> idat[pi++] = 0x9c; // flags
<add>
<add> var pos = 0;
<add> while (len > maxBlockLength) {
<add> // writing non-final DEFLATE blocks type 0 and length of 65535
<add> idat[pi++] = 0x00;
<add> idat[pi++] = 0xff;
<add> idat[pi++] = 0xff;
<add> idat[pi++] = 0x00;
<add> idat[pi++] = 0x00;
<add> idat.set(literals.subarray(pos, pos + maxBlockLength), pi);
<add> pi += maxBlockLength;
<add> pos += maxBlockLength;
<add> len -= maxBlockLength;
<add> }
<add>
<add> // writing non-final DEFLATE blocks type 0
<add> idat[pi++] = 0x01;
<add> idat[pi++] = len & 0xff;
<add> idat[pi++] = len >> 8 & 0xff;
<add> idat[pi++] = (~len & 0xffff) & 0xff;
<add> idat[pi++] = (~len & 0xffff) >> 8 & 0xff;
<add> idat.set(literals.subarray(pos), pi);
<add> pi += literals.length - pos;
<add>
<add> var adler = adler32(literals, 0, literals.length); // checksum
<add> idat[pi++] = adler >> 24 & 0xff;
<add> idat[pi++] = adler >> 16 & 0xff;
<add> idat[pi++] = adler >> 8 & 0xff;
<add> idat[pi++] = adler & 0xff;
<add> return idat;
<add> }
<add>
<ide> function encode(imgData, kind, forceDataSchema) {
<ide> var width = imgData.width;
<ide> var height = imgData.height;
<ide> var convertImgDataToPng = (function convertImgDataToPngClosure() {
<ide> 0x00 // interlace method
<ide> ]);
<ide>
<del> var len = literals.length;
<del> var maxBlockLength = 0xFFFF;
<del>
<del> var deflateBlocks = Math.ceil(len / maxBlockLength);
<del> var idat = new Uint8Array(2 + len + deflateBlocks * 5 + 4);
<del> var pi = 0;
<del> idat[pi++] = 0x78; // compression method and flags
<del> idat[pi++] = 0x9c; // flags
<del>
<del> var pos = 0;
<del> while (len > maxBlockLength) {
<del> // writing non-final DEFLATE blocks type 0 and length of 65535
<del> idat[pi++] = 0x00;
<del> idat[pi++] = 0xff;
<del> idat[pi++] = 0xff;
<del> idat[pi++] = 0x00;
<del> idat[pi++] = 0x00;
<del> idat.set(literals.subarray(pos, pos + maxBlockLength), pi);
<del> pi += maxBlockLength;
<del> pos += maxBlockLength;
<del> len -= maxBlockLength;
<del> }
<del>
<del> // writing non-final DEFLATE blocks type 0
<del> idat[pi++] = 0x01;
<del> idat[pi++] = len & 0xff;
<del> idat[pi++] = len >> 8 & 0xff;
<del> idat[pi++] = (~len & 0xffff) & 0xff;
<del> idat[pi++] = (~len & 0xffff) >> 8 & 0xff;
<del> idat.set(literals.subarray(pos), pi);
<del> pi += literals.length - pos;
<del>
<del> var adler = adler32(literals, 0, literals.length); // checksum
<del> idat[pi++] = adler >> 24 & 0xff;
<del> idat[pi++] = adler >> 16 & 0xff;
<del> idat[pi++] = adler >> 8 & 0xff;
<del> idat[pi++] = adler & 0xff;
<add> var idat = deflateSync(literals);
<ide>
<ide> // PNG will consists: header, IHDR+data, IDAT+data, and IEND.
<ide> var pngLength = PNG_HEADER.length + (CHUNK_WRAPPER_SIZE * 3) + | 1 |
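The extracted `deflateSyncUncompressed` emits a zlib stream built from stored (type 0, uncompressed) DEFLATE blocks plus an Adler-32 trailer, which is the format a PNG `IDAT` chunk expects. The same byte layout in Python, checked against `zlib.decompress` (a sketch of the format, not the pdf.js code):

```python
import zlib

def deflate_stored(data):
    """Wrap `data` in stored DEFLATE blocks inside a zlib container."""
    out = bytearray(b"\x78\x9c")  # zlib header: deflate, 32K window
    max_block = 0xFFFF            # stored blocks carry a 16-bit length
    pos = 0
    while True:
        chunk = data[pos:pos + max_block]
        pos += len(chunk)
        final = 1 if pos >= len(data) else 0  # BFINAL bit, BTYPE = 00
        n = len(chunk)
        nlen = n ^ 0xFFFF                     # one's complement of LEN
        out += bytes([final, n & 0xFF, n >> 8, nlen & 0xFF, nlen >> 8])
        out += chunk
        if final:
            break
    out += zlib.adler32(data).to_bytes(4, "big")  # checksum trailer
    return bytes(out)
```

Because the blocks are stored verbatim, any inflate implementation can read the result, which is exactly why the JS code can get away with "compression level 0".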
Python | Python | fix mappings of get_data_disks | 9c31df46bdc3aec4e7b300b0fd5db7dce712140d | <ide><path>libcloud/compute/drivers/ecs.py
<ide> def _get_data_disks(self, ex_data_disks):
<ide> mappings = {'size': 'Size',
<ide> 'category': 'Category',
<ide> 'snapshot_id': 'SnapshotId',
<del> 'disk_name': 'DiskName',
<add> 'name': 'DiskName',
<ide> 'description': 'Description',
<ide> 'device': 'Device',
<del> 'delete_with_instance': 'DeleteWithInstance'}
<add> 'delete_on_termination': 'DeleteWithInstance'}
<ide> params = {}
<ide> for idx, disk in enumerate(data_disks):
<ide> key_base = 'DataDisk.{0}.'.format(idx + 1) | 1 |
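The fix above renames the mapping keys (`name`, `delete_on_termination`) so they match the driver's public argument names. The expansion itself — turning a list of disk dicts into numbered API parameters — looks like this (a simplified sketch with a trimmed mapping, not the libcloud code):

```python
MAPPINGS = {
    "size": "Size",
    "category": "Category",
    "name": "DiskName",
    "delete_on_termination": "DeleteWithInstance",
}

def data_disk_params(data_disks):
    """Flatten disk dicts into 'DataDisk.N.<ApiName>' request parameters."""
    params = {}
    for idx, disk in enumerate(data_disks):
        key_base = "DataDisk.{0}.".format(idx + 1)
        for attr, api_name in MAPPINGS.items():
            if attr in disk:
                params[key_base + api_name] = disk[attr]
    return params
```

A wrong key in `MAPPINGS` fails silently here — the attribute is simply never sent — which is why the original bug was easy to miss.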
Text | Text | add missing entry in v6 changelog table | 289e53265a6657e42b9057c4ac9b90b5754db40a | <ide><path>doc/changelogs/CHANGELOG_V6.md
<ide> </tr>
<ide> <tr>
<ide> <td valign="top">
<add><a href="#6.10.0">6.10.0</a><br/>
<ide> <a href="#6.9.5">6.9.5</a><br/>
<ide> <a href="#6.9.4">6.9.4</a><br/>
<ide> <a href="#6.9.3">6.9.3</a><br/> | 1 |
Text | Text | correct vcbuild options for windows testing | ed9b6c1264a07084e44f1d269bfe82e0f1b37c74 | <ide><path>BUILDING.md
<ide> Prerequisites:
<ide> To run the tests:
<ide>
<ide> ```console
<del>> .\vcbuild test
<add>> .\vcbuild nosign test
<ide> ```
<ide>
<ide> To test if Node.js was built correctly:
<ide> $ ./configure --with-intl=full-icu --download=all
<ide> ##### Windows:
<ide>
<ide> ```console
<del>> .\vcbuild full-icu download-all
<add>> .\vcbuild nosign full-icu download-all
<ide> ```
<ide>
<ide> #### Building without Intl support
<ide> $ ./configure --without-intl
<ide> ##### Windows:
<ide>
<ide> ```console
<del>> .\vcbuild without-intl
<add>> .\vcbuild nosign without-intl
<ide> ```
<ide>
<ide> #### Use existing installed ICU (Unix / OS X only):
<ide> First unpack latest ICU to `deps/icu`
<ide> as `deps/icu` (You'll have: `deps/icu/source/...`)
<ide>
<ide> ```console
<del>> .\vcbuild full-icu
<add>> .\vcbuild nosign full-icu
<ide> ```
<ide>
<ide> ## Building Node.js with FIPS-compliant OpenSSL
<ide><path>CONTRIBUTING.md
<ide> $ ./configure && make -j4 test
<ide> Windows:
<ide>
<ide> ```text
<del>> vcbuild test
<add> .\vcbuild nosign test
<ide> ```
<ide>
<ide> (See the [BUILDING.md](./BUILDING.md) for more details.)
<ide>
<ide> Make sure the linter is happy and that all tests pass. Please, do not submit
<ide> patches that fail either check.
<ide>
<del>Running `make test`/`vcbuild test` will run the linter as well unless one or
<add>Running `make test`/`.\vcbuild nosign test` will run the linter as well unless one or
<ide> more tests fail.
<ide>
<ide> If you want to run the linter without running tests, use
<del>`make lint`/`vcbuild jslint`.
<add>`make lint`/`.\vcbuild nosign jslint`.
<ide>
<ide> If you are updating tests and just want to run a single test to check it, you
<ide> can use this syntax to run it exactly as the test harness would: | 2 |
Javascript | Javascript | add buffering to randomint | 5dae7d67589c908a1fe672b084838c1397d00e54 | <ide><path>benchmark/crypto/randomInt.js
<add>'use strict';
<add>
<add>const common = require('../common.js');
<add>const { randomInt } = require('crypto');
<add>
<add>const bench = common.createBenchmark(main, {
<add> mode: ['sync', 'async-sequential', 'async-parallel'],
<add> min: [-(2 ** 47) + 1, -10_000, -100],
<add> max: [100, 10_000, 2 ** 47],
<add> n: [1e3, 1e5]
<add>});
<add>
<add>function main({ mode, min, max, n }) {
<add> if (mode === 'sync') {
<add> bench.start();
<add> for (let i = 0; i < n; i++)
<add> randomInt(min, max);
<add> bench.end(n);
<add> } else if (mode === 'async-sequential') {
<add> bench.start();
<add> (function next(i) {
<add> if (i === n)
<add> return bench.end(n);
<add> randomInt(min, max, () => {
<add> next(i + 1);
<add> });
<add> })(0);
<add> } else {
<add> bench.start();
<add> let done = 0;
<add> for (let i = 0; i < n; i++) {
<add> randomInt(min, max, () => {
<add> if (++done === n)
<add> bench.end(n);
<add> });
<add> }
<add> }
<add>}
<ide><path>lib/internal/crypto/random.js
<ide>
<ide> const {
<ide> Array,
<add> ArrayPrototypeForEach,
<add> ArrayPrototypePush,
<add> ArrayPrototypeShift,
<add> ArrayPrototypeSplice,
<ide> BigInt,
<ide> FunctionPrototypeBind,
<ide> FunctionPrototypeCall,
<ide> function randomFill(buf, offset, size, callback) {
<ide> // e.g.: Buffer.from("ff".repeat(6), "hex").readUIntBE(0, 6);
<ide> const RAND_MAX = 0xFFFF_FFFF_FFFF;
<ide>
<add>// Cache random data to use in randomInt. The cache size must be evenly
<add>// divisible by 6 because each attempt to obtain a random int uses 6 bytes.
<add>const randomCache = new FastBuffer(6 * 1024);
<add>let randomCacheOffset = randomCache.length;
<add>let asyncCacheFillInProgress = false;
<add>const asyncCachePendingTasks = [];
<add>
<ide> // Generates an integer in [min, max) range where min is inclusive and max is
<ide> // exclusive.
<ide> function randomInt(min, max, callback) {
<ide> function randomInt(min, max, callback) {
<ide> // than or equal to 0 and less than randLimit.
<ide> const randLimit = RAND_MAX - (RAND_MAX % range);
<ide>
<del> if (isSync) {
<del> // Sync API
<del> while (true) {
<del> const x = randomBytes(6).readUIntBE(0, 6);
<del> if (x >= randLimit) {
<del> // Try again.
<del> continue;
<del> }
<del> return (x % range) + min;
<add> // If we don't have a callback, or if there is still data in the cache, we can
<add> // do this synchronously, which is super fast.
<add> while (isSync || (randomCacheOffset < randomCache.length)) {
<add> if (randomCacheOffset === randomCache.length) {
<add> // This might block the thread for a bit, but we are in sync mode.
<add> randomFillSync(randomCache);
<add> randomCacheOffset = 0;
<add> }
<add>
<add> const x = randomCache.readUIntBE(randomCacheOffset, 6);
<add> randomCacheOffset += 6;
<add>
<add> if (x < randLimit) {
<add> const n = (x % range) + min;
<add> if (isSync) return n;
<add> process.nextTick(callback, undefined, n);
<add> return;
<ide> }
<del> } else {
<del> // Async API
<del> const pickAttempt = () => {
<del> randomBytes(6, (err, bytes) => {
<del> if (err) return callback(err);
<del> const x = bytes.readUIntBE(0, 6);
<del> if (x >= randLimit) {
<del> // Try again.
<del> return pickAttempt();
<del> }
<del> const n = (x % range) + min;
<del> callback(null, n);
<del> });
<del> };
<del>
<del> pickAttempt();
<ide> }
<add>
<add> // At this point, we are in async mode with no data in the cache. We cannot
<add> // simply refill the cache, because another async call to randomInt might
<add> // already be doing that. Instead, queue this call for when the cache has
<add> // been refilled.
<add> ArrayPrototypePush(asyncCachePendingTasks, { min, max, callback });
<add> asyncRefillRandomIntCache();
<add>}
<add>
<add>function asyncRefillRandomIntCache() {
<add> if (asyncCacheFillInProgress)
<add> return;
<add>
<add> asyncCacheFillInProgress = true;
<add> randomFill(randomCache, (err) => {
<add> asyncCacheFillInProgress = false;
<add>
<add> const tasks = asyncCachePendingTasks;
<add> const errorReceiver = err && ArrayPrototypeShift(tasks);
<add> if (!err)
<add> randomCacheOffset = 0;
<add>
<add> // Restart all pending tasks. If an error occurred, we only notify a single
<add> // callback (errorReceiver) about it. This way, every async call to
<add> // randomInt has a chance of being successful, and it avoids complex
<add> // exception handling here.
<add> ArrayPrototypeForEach(ArrayPrototypeSplice(tasks, 0), (task) => {
<add> randomInt(task.min, task.max, task.callback);
<add> });
<add>
<add> // This is the only call that might throw, and is therefore done at the end.
<add> if (errorReceiver)
<add> errorReceiver.callback(err);
<add> });
<ide> }
<ide>
<ide> | 2 |
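The buffering above keeps the same rejection-sampling scheme but serves each 6-byte draw from a pre-filled cache instead of issuing one `randomBytes` call per attempt. A synchronous Python sketch of the idea (the cache size and names are illustrative, not Node's implementation):

```python
import os

RAND_MAX = 0xFFFF_FFFF_FFFF  # largest 48-bit value (6 random bytes)
_cache = bytearray()

def random_int(lo, hi):
    """Uniform integer in [lo, hi) via rejection sampling over cached bytes."""
    global _cache
    rng = hi - lo
    # Accept only draws below the largest multiple of rng, so the final
    # modulo introduces no bias.
    limit = RAND_MAX - (RAND_MAX % rng)
    while True:
        if len(_cache) < 6:
            _cache = bytearray(os.urandom(6 * 1024))  # refill in bulk
        x = int.from_bytes(_cache[:6], "big")
        del _cache[:6]
        if x < limit:
            return lo + (x % rng)
```

The bulk refill amortizes the cost of the system RNG across many calls, which is the whole point of the Node change.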
Python | Python | fix error in send_file helper | 92ce20eeacf5de24803abaf70a3658806fa4d74f | <ide><path>flask/helpers.py
<ide> def send_file(filename_or_fp, mimetype=None, as_attachment=False,
<ide> if mimetype is None:
<ide> if attachment_filename is not None:
<ide> raise ValueError(
<del> 'Unable to infer MIME-type from filename {!r}, please '
<del> 'pass one explicitly.'.format(mimetype_filename)
<add> 'Unable to infer MIME-type from filename {0!r}, please '
<add> 'pass one explicitly.'.format(attachment_filename)
<ide> )
<ide> raise ValueError(
<ide> 'Unable to infer MIME-type because no filename is available. '
<ide><path>tests/test_helpers.py
<ide> def test_send_file_object_without_mimetype(self):
<ide> with app.test_request_context():
<ide> with pytest.raises(ValueError) as excinfo:
<ide> flask.send_file(StringIO("LOL"))
<del>
<ide> assert 'Unable to infer MIME-type' in str(excinfo)
<ide> assert 'no filename is available' in str(excinfo)
<ide>
<add> with app.test_request_context():
<add> with pytest.raises(ValueError) as excinfo:
<add> flask.send_file(StringIO("LOL"), attachment_filename='filename')
<add> assert "Unable to infer MIME-type from filename 'filename'" in str(excinfo)
<add>
<ide> def test_send_file_object(self):
<ide> app = flask.Flask(__name__)
<ide> | 2 |
Text | Text | add 1.0.1 to changelog | 7cf753f428ad70c04acd2648e7f329ee578f4fdd | <ide><path>CHANGELOG.md
<ide> All notable changes to this project will be documented in this file.
<ide> This project adheres to [Semantic Versioning](http://semver.org/).
<ide>
<add>## [1.0.1](https://github.com/rackt/redux/compare/v1.0.0...v1.0.1) - 2015/08/15
<add>
<add>* Fixes “process is not defined” on React Native ([#525](https://github.com/rackt/redux/issues/525), [#526](https://github.com/rackt/redux/pull/526))
<add>* Removes dependencies on `invariant` and `warning` ([#528](https://github.com/rackt/redux/pull/528))
<add>* Fixes TodoMVC example ([#524](https://github.com/rackt/redux/issues/524), [#529](https://github.com/rackt/redux/pull/529))
<add>
<ide> ## [1.0.0](https://github.com/rackt/redux/compare/v1.0.0-rc...v1.0.0) - 2015/08/14
<ide>
<ide> ### Breaking Changes | 1 |
PHP | PHP | fix boot on unserialize | 909ddcb8dd5a412b6f71ab45b8e784ee41188cb0 | <ide><path>src/Illuminate/Database/Eloquent/Model.php
<ide> abstract class Model implements ArrayAccess, ArrayableInterface, JsonableInterfa
<ide> * @return void
<ide> */
<ide> public function __construct(array $attributes = array())
<add> {
<add> $this->bootIfNotBooted();
<add>
<add> $this->syncOriginal();
<add>
<add> $this->fill($attributes);
<add> }
<add>
<add> /**
<add> * Check if the model needs to be booted and if so, do it.
<add> *
<add> * @return void
<add> */
<add> protected function bootIfNotBooted()
<ide> {
<ide> if ( ! isset(static::$booted[get_class($this)]))
<ide> {
<ide> public function __construct(array $attributes = array())
<ide>
<ide> $this->fireModelEvent('booted', false);
<ide> }
<del>
<del> $this->syncOriginal();
<del>
<del> $this->fill($attributes);
<ide> }
<ide>
<ide> /**
<ide> public function __toString()
<ide> return $this->toJson();
<ide> }
<ide>
<add> /**
<add> * When a model is being unserialized, check if it needs to be booted.
<add> *
<add> * @return void
<add> */
<add> public function __wakeup()
<add> {
<add> $this->bootIfNotBooted();
<add> }
<add>
<ide> }
<ide><path>tests/Database/DatabaseEloquentModelTest.php
<ide> public function testGetModelAttributeMethodThrowsExceptionIfNotRelation()
<ide> }
<ide>
<ide>
<add> public function testModelIsBootedOnUnserialize()
<add> {
<add> $model = new EloquentModelBootingTestStub;
<add> $this->assertTrue(EloquentModelBootingTestStub::isBooted());
<add> $model->foo = 'bar';
<add> $string = serialize($model);
<add> $model = null;
<add> EloquentModelBootingTestStub::unboot();
<add> $this->assertFalse(EloquentModelBootingTestStub::isBooted());
<add> $model = unserialize($string);
<add> $this->assertTrue(EloquentModelBootingTestStub::isBooted());
<add> }
<add>
<add>
<ide> protected function addMockConnection($model)
<ide> {
<ide> $model->setConnectionResolver($resolver = m::mock('Illuminate\Database\ConnectionResolverInterface'));
<ide> public function newQuery($excludeDeleted = true)
<ide> }
<ide>
<ide> class EloquentModelWithoutTableStub extends Illuminate\Database\Eloquent\Model {}
<add>
<add>class EloquentModelBootingTestStub extends Illuminate\Database\Eloquent\Model {
<add> public static function unboot()
<add> {
<add> unset(static::$booted[get_called_class()]);
<add> }
<add> public static function isBooted()
<add> {
<add> return array_key_exists(get_called_class(), static::$booted);
<add> }
<add>} | 2 |
Text | Text | add referecnes for translations | 1854725650012e8675882af33b836bcba41bf969 | <ide><path>threejs/lessons/threejs-fundamentals.md
<ide> see what you need to change. It would be too much work to maintain both an es6 m
<ide> version of this site so going forward this site will only show es6 module style. As stated elsewhere,
<ide> to support legacy browsers look into a <a href="https://babeljs.io">transpiler</a>.</p>
<ide> </div>
<add>
<add><!-- needed for out of date translations -->
<add><a href="threejs-geometry.html"></a>
<ide>\ No newline at end of file | 1 |
Javascript | Javascript | fix lint errors | 880328bf6e1f709373e59e03549da562d4f5f197 | <ide><path>src/state-store.js
<ide> class StateStore {
<ide> }
<ide>
<ide> connect () {
<del> return this.dbPromise.then(db => !!db)
<add> return this.dbPromise.then((db) => !!db)
<ide> }
<ide>
<ide> save (key, value) {
<ide> return new Promise((resolve, reject) => {
<del> this.dbPromise.then(db => {
<add> this.dbPromise.then((db) => {
<ide> if (db == null) return resolve()
<ide>
<ide> var request = db.transaction(['states'], 'readwrite')
<ide> class StateStore {
<ide> }
<ide>
<ide> load (key) {
<del> return this.dbPromise.then(db => {
<add> return this.dbPromise.then((db) => {
<ide> if (!db) return
<ide>
<ide> return new Promise((resolve, reject) => {
<ide> class StateStore {
<ide> }
<ide>
<ide> clear () {
<del> return this.dbPromise.then(db => {
<add> return this.dbPromise.then((db) => {
<ide> if (!db) return
<ide>
<ide> return new Promise((resolve, reject) => {
<ide> class StateStore {
<ide> }
<ide>
<ide> count () {
<del> return this.dbPromise.then(db => {
<add> return this.dbPromise.then((db) => {
<ide> if (!db) return
<ide>
<ide> return new Promise((resolve, reject) => {
<ide><path>src/text-editor-registry.js
<ide> export default class TextEditorRegistry {
<ide> // Private
<ide>
<ide> grammarAddedOrUpdated (grammar) {
<del> this.editorsWithMaintainedGrammar.forEach(editor => {
<add> this.editorsWithMaintainedGrammar.forEach((editor) => {
<ide> if (grammar.injectionSelector) {
<ide> if (editor.tokenizedBuffer.hasTokenForSelector(grammar.injectionSelector)) {
<ide> editor.tokenizedBuffer.retokenizeLines()
<ide> export default class TextEditorRegistry {
<ide> for (const [settingKey, paramName] of EDITOR_PARAMS_BY_SETTING_KEY) {
<ide> this.subscriptions.add(
<ide> this.config.onDidChange(settingKey, configOptions, ({newValue}) => {
<del> this.editorsWithMaintainedConfig.forEach(editor => {
<add> this.editorsWithMaintainedConfig.forEach((editor) => {
<ide> if (editor.getRootScopeDescriptor().isEqual(scopeDescriptor)) {
<ide> editor.update({[paramName]: newValue})
<ide> }
<ide> export default class TextEditorRegistry {
<ide> const updateTabTypes = () => {
<ide> const tabType = this.config.get('editor.tabType', configOptions)
<ide> const softTabs = this.config.get('editor.softTabs', configOptions)
<del> this.editorsWithMaintainedConfig.forEach(editor => {
<add> this.editorsWithMaintainedConfig.forEach((editor) => {
<ide> if (editor.getRootScopeDescriptor().isEqual(scopeDescriptor)) {
<ide> editor.setSoftTabs(shouldEditorUseSoftTabs(editor, tabType, softTabs))
<ide> }
<ide> class ScopedSettingsDelegate {
<ide> const commentStartEntries = this.config.getAll('editor.commentStart', {scope})
<ide> const commentEndEntries = this.config.getAll('editor.commentEnd', {scope})
<ide> const commentStartEntry = commentStartEntries[0]
<del> const commentEndEntry = commentEndEntries.find(entry => {
<add> const commentEndEntry = commentEndEntries.find((entry) => {
<ide> return entry.scopeSelector === commentStartEntry.scopeSelector
<ide> })
<ide> return { | 2 |
PHP | PHP | use string cast | f774693e4e57d74b316ab426e8a17142ff270468 | <ide><path>tests/TestCase/Error/Middleware/ErrorHandlerMiddlewareTest.php
<ide> public function testHandleRedirectException()
<ide> $result = $middleware->process($request, $handler);
<ide> $this->assertInstanceOf(ResponseInterface::class, $result);
<ide> $this->assertEquals(302, $result->getStatusCode());
<del> $this->assertEmpty('' . $result->getBody());
<add> $this->assertEmpty((string)$result->getBody());
<ide> $expected = [
<ide> 'location' => ['http://example.org/login'],
<ide> ]; | 1 |
Text | Text | add note regarding pushing release tags | 0d31293d4aa2c5a8e9de4e7ef446f2566def1908 | <ide><path>doc/releases.md
<ide> following command:
<ide> $ git push <remote> <vx.y.z>
<ide> ```
<ide>
<add>*Note*: Please do not push the tag unless you are ready to complete the
<add>remainder of the release steps.
<add>
<ide> ### 12. Set Up For the Next Release
<ide>
<ide> On release proposal branch, edit `src/node_version.h` again and: | 1 |
Python | Python | remove outdated comment | 7f93747602352758fb7df7be451c2cb69d8e7dad | <ide><path>keras/backend/theano_backend.py
<ide> def random_binomial(shape, p=0.0, dtype=_FLOATX, seed=None):
<ide> seed = np.random.randint(10e6)
<ide> rng = RandomStreams(seed=seed)
<ide> return rng.binomial(shape, p=p, dtype=dtype)
<del>
<del>'''
<del>more TODO:
<del>
<del>tensordot -> soon to be introduced in TF
<del>batched_tensordot -> reimplement
<del>''' | 1 |
Text | Text | fix version history for loaders api | 0d58c0be3e1c3013959c02d42a2a2f21dd31c5f8 | <ide><path>doc/api/cli.md
<ide> Enable experimental `import.meta.resolve()` support.
<ide> ### `--experimental-loader=module`
<ide>
<ide> <!-- YAML
<del>added: v9.0.0
<add>added: v8.8.0
<add>changes:
<add> - version: v12.11.1
<add> pr-url: https://github.com/nodejs/node/pull/29752
<add> description: This flag was renamed from `--loader` to
<add> `--experimental-loader`.
<ide> -->
<ide>
<ide> Specify the `module` of a custom experimental [ECMAScript module loader][].
<ide><path>doc/api/esm.md
<ide> of Node.js applications.
<ide>
<ide> ## Loaders
<ide>
<add><!-- YAML
<add>added: v8.8.0
<add>changes:
<add> - version: v16.12.0
<add> pr-url: https://github.com/nodejs/node/pull/37468
<add> description: Removed `getFormat`, `getSource`, `transformSource`, and
<add> `globalPreload`; added `load` hook and `getGlobalPreload` hook.
<add>-->
<add>
<ide> > Stability: 1 - Experimental
<ide>
<ide> > This API is currently being redesigned and will still change. | 2 |
Python | Python | fix border mode = same in conv2d | 6a4aab453f42bda2368e51bc707c22c40c384b34 | <ide><path>keras/layers/convolutional.py
<ide> def get_output(self, train):
<ide>
<ide> conv_out = theano.tensor.nnet.conv.conv2d(X, self.W,
<ide> border_mode=border_mode, subsample=self.subsample)
<del> output = self.activation(conv_out + self.b.dimshuffle('x', 0, 'x', 'x'))
<ide>
<ide> if self.border_mode == 'same':
<del> clip_row = (self.nb_row - 1) // 2
<del> clip_col = (self.nb_col - 1) // 2
<del> output = output[:, :, clip_row:-clip_row, clip_col:-clip_col]
<del> return output
<add> shift_x = (self.nb_row - 1) // 2
<add> shift_y = (self.nb_col - 1) // 2
<add> conv_out = conv_out[:, :, shift_x:X.shape[2] + shift_x, shift_y:X.shape[3] + shift_y]
<add>
<add> return self.activation(conv_out + self.b.dimshuffle('x', 0, 'x', 'x'))
<add>
<ide>
<ide> def get_config(self):
<ide> return {"name":self.__class__.__name__, | 1 |
Text | Text | add v3.10.2 to changelog.md | 8ead9d08387c460c9d70079b5fbe8fc2bee2ef38 | <ide><path>CHANGELOG.md
<ide> - [#17940](https://github.com/emberjs/ember.js/pull/17940) [CLEANUP] Remove `sync` queue from @ember/runloop.
<ide> - [#18026](https://github.com/emberjs/ember.js/pull/18026) Enabling featured discussed in 2019-05-03 core team meeting.
<ide>
<add>### v3.10.2 (June 17, 2019)
<add>
<add>- [#17971](https://github.com/emberjs/ember.js/pull/17971) [BUGFIX] Ensure query param only link-to's work in error states.
<add>
<ide> ### v3.10.1 (June 4, 2019)
<ide>
<ide> - [#18071](https://github.com/emberjs/ember.js/pull/18071) [BUGFIX] Ensure modifiers do not run in FastBoot modes. (#18071) | 1 |
PHP | PHP | add default parameters to route (fixes ) | 33c16b0c866b32911a1f6cc5cb6916d6d037e813 | <ide><path>src/Illuminate/Routing/Route.php
<ide> public function bindParameters(Request $request)
<ide> );
<ide> }
<ide>
<del> return $this->parameters = $this->replaceDefaults($params);
<add> return $this->parameters = $this->fillDefaults($this->replaceDefaults($params));
<ide> }
<ide>
<ide> /**
<ide> protected function replaceDefaults(array $parameters)
<ide> return $parameters;
<ide> }
<ide>
<add> /**
<add> * Fill missing parameters with their defaults.
<add> *
<add> * @param array $parameters
<add> * @return array
<add> */
<add> protected function fillDefaults(array $parameters)
<add> {
<add> foreach ($this->defaults as $key => $value) {
<add> if (! isset($parameters[$key])) {
<add> $parameters[$key] = $value;
<add> }
<add> }
<add>
<add> return $parameters;
<add> }
<add>
<ide> /**
<ide> * Parse the route action into a standard array.
<ide> * | 1 |
Ruby | Ruby | add missing require | 9143032a108bd41121337c82416ba90f460d8214 | <ide><path>activerecord/test/cases/associations/inner_join_association_test.rb
<ide> require 'models/author'
<ide> require 'models/category'
<ide> require 'models/categorization'
<add>require 'models/person'
<ide> require 'models/tagging'
<ide> require 'models/tag'
<ide> | 1 |
Javascript | Javascript | replace internal use of deprecated api | e10525f76e314369d3d3e2f943d2cc57de35736f | <ide><path>lib/internal/inspector/inspect_repl.js
<ide> function extractFunctionName(description) {
<ide> }
<ide>
<ide> const PUBLIC_BUILTINS = require('module').builtinModules;
<del>const NATIVES = PUBLIC_BUILTINS ? process.binding('natives') : {};
<add>const NATIVES = PUBLIC_BUILTINS ? internalBinding('natives') : {};
<ide> function isNativeUrl(url) {
<ide> url = url.replace(/\.js$/, '');
<ide> if (PUBLIC_BUILTINS) { | 1 |
Python | Python | add docstrings, error messages and fix consistency | 1694c24e5248d98befb323d6e9569213f4c86b2c | <ide><path>spacy/util.py
<ide> def get_lang_class(name):
<ide>
<ide>
<ide> def load_lang_class(lang):
<add> """Import and load a Language class.
<add>
<add> Args:
<add> lang (unicode): Two-letter language code, e.g. 'en'.
<add> Returns:
<add> Language: Language class.
<add> """
<ide> module = importlib.import_module('.lang.%s' % lang, 'spacy')
<ide> return getattr(module, module.__all__[0])
<ide>
<ide>
<ide> def get_data_path(require_exists=True):
<add> """Get path to spaCy data directory.
<add>
<add> Args:
<add> require_exists (bool): Only return path if it exists, otherwise None.
<add> Returns:
<add> Path or None: Data path or None.
<add> """
<ide> if not require_exists:
<ide> return _data_path
<ide> else:
<ide> return _data_path if _data_path.exists() else None
<ide>
<ide>
<ide> def set_data_path(path):
<add> """Set path to spaCy data directory.
<add>
<add> Args:
<add> path (unicode or Path): Path to new data directory.
<add> """
<ide> global _data_path
<ide> _data_path = ensure_path(path)
<ide>
<ide> def ensure_path(path):
<ide>
<ide>
<ide> def resolve_model_path(name):
<add> """Resolve a model name or string to a model path.
<add>
<add> Args:
<add> name (unicode): Package name, shortcut link or model path.
<add> Returns:
<add> Path: Path to model data directory.
<add> """
<ide> data_path = get_data_path()
<ide> if not data_path or not data_path.exists():
<ide> raise IOError("Can't find spaCy data path: %s" % path2str(data_path))
<ide> def resolve_model_path(name):
<ide> raise IOError("Can't find model '%s'" % name)
<ide>
<ide>
<del>def is_package(origin):
<del> """
<del> Check if string maps to a package installed via pip.
<add>def is_package(name):
<add> """Check if string maps to a package installed via pip.
<add>
<add> Args:
<add> name (unicode): Name of package.
<add> Returns:
<add> bool: True if installed package, False if not.
<add>
<ide> """
<ide> packages = pip.get_installed_distributions()
<ide> for package in packages:
<del> if package.project_name.replace('-', '_') == origin:
<add> if package.project_name.replace('-', '_') == name:
<ide> return True
<ide> return False
<ide>
<ide>
<ide> def get_model_package_path(package_name):
<add> """Get path to a model package installed via pip.
<add>
<add> Args:
<add> package_name (unicode): Name of installed package.
<add> Returns:
<add> Path: Path to model data directory.
<add> """
<ide> # Here we're importing the module just to find it. This is worryingly
<ide> # indirect, but it's otherwise very difficult to find the package.
<ide> # Python's installation and import rules are very complicated.
<ide> def get_model_package_path(package_name):
<ide>
<ide>
<ide> def parse_package_meta(package_path, require=True):
<del> """
<del> Check if a meta.json exists in a package and return its contents as a
<del> dictionary. If require is set to True, raise an error if no meta.json found.
<add> """Check if a meta.json exists in a package and return its contents.
<add>
<add> Args:
<add> package_path (Path): Path to model package directory.
<add> require (bool): If True, raise error if no meta.json is found.
<add> Returns:
<add> dict or None: Model meta.json data or None.
<ide> """
<ide> location = package_path / 'meta.json'
<ide> if location.is_file():
<ide> def compile_infix_regex(entries):
<ide>
<ide>
<ide> def update_exc(base_exceptions, *addition_dicts):
<add> """Update and validate tokenizer exceptions. Will overwrite exceptions.
<add>
<add> Args:
<add> base_exceptions (dict): Base exceptions.
<add> *addition_dicts (dict): Exceptions to add to the base dict, in order.
<add> Returns:
<add> dict: Combined tokenizer exceptions.
<add> """
<ide> exc = dict(base_exceptions)
<ide> for additions in addition_dicts:
<ide> for orth, token_attrs in additions.items():
<ide> def update_exc(base_exceptions, *addition_dicts):
<ide> raise ValueError(msg % (orth, token_attrs))
<ide> described_orth = ''.join(attr[ORTH] for attr in token_attrs)
<ide> if orth != described_orth:
<del> # TODO: Better error
<del> msg = "Invalid tokenizer exception: key='%s', orths='%s'"
<del> raise ValueError(msg % (orth, described_orth))
<add> raise ValueError("Invalid tokenizer exception: ORTH values "
<add> "combined don't match original string. "
<add> "key='%s', orths='%s'" % (orth, described_orth))
<ide> # overlap = set(exc.keys()).intersection(set(additions))
<ide> # assert not overlap, overlap
<ide> exc.update(additions)
<ide> def update_exc(base_exceptions, *addition_dicts):
<ide>
<ide>
<ide> def expand_exc(excs, search, replace):
<add> """Find string in tokenizer exceptions, duplicate entry and replace string.
<add> For example, to add additional versions with typographic apostrophes.
<add>
<add> Args:
<add> excs (dict): Tokenizer exceptions.
<add> search (unicode): String to find and replace.
<add> replace (unicode): Replacement.
<add> Returns:
<add> dict:
<add> """
<ide> def _fix_token(token, search, replace):
<ide> fixed = dict(token)
<ide> fixed[ORTH] = fixed[ORTH].replace(search, replace)
<ide> def check_renamed_kwargs(renamed, kwargs):
<ide>
<ide>
<ide> def read_json(location):
<add> """Open and load JSON from file.
<add>
<add> Args:
<add> location (Path): Path to JSON file.
<add> Returns:
<add> dict: Loaded JSON content.
<add> """
<ide> with location.open('r', encoding='utf8') as f:
<ide> return ujson.load(f)
<ide>
<ide>
<ide> def get_raw_input(description, default=False):
<del> """
<del> Get user input via raw_input / input and return input value. Takes a
<del> description, and an optional default value to display with the prompt.
<add> """Get user input from the command line via raw_input / input.
<add>
<add> Args:
<add> description (unicode): Text to display before prompt.
<add> default (unicode or False/None): Default value to display with prompt.
<add> Returns:
<add> unicode: User input.
<ide> """
<ide> additional = ' (default: %s)' % default if default else ''
<ide> prompt = ' %s%s: ' % (description, additional)
<ide> def get_raw_input(description, default=False):
<ide>
<ide>
<ide> def print_table(data, title=None):
<add> """Print data in table format.
<add>
<add> Args:
<add> data (dict or list of tuples): Label/value pairs.
<add> title (unicode or None): Title, will be printed above.
<ide> """
<del> Print data in table format. Can either take a list of tuples or a
<del> dictionary, which will be converted to a list of tuples.
<del> """
<del> if type(data) == dict:
<add> if isinstance(data, dict):
<ide> data = list(data.items())
<ide> tpl_row = ' {:<15}' * len(data[0])
<ide> table = '\n'.join([tpl_row.format(l, v) for l, v in data])
<ide> def print_table(data, title=None):
<ide>
<ide>
<ide> def print_markdown(data, title=None):
<del> """
<del> Print listed data in GitHub-flavoured Markdown format so it can be
<del> copy-pasted into issues. Can either take a list of tuples or a dictionary.
<add> """Print data in GitHub-flavoured Markdown format for issues etc.
<add>
<add> Args:
<add> data (dict or list of tuples): Label/value pairs.
<add> title (unicode or None): Title, will be rendered as headline 2.
<ide> """
<ide> def excl_value(value):
<ide> return Path(value).exists() # contains path (personal info)
<ide>
<del> if type(data) == dict:
<add> if isinstance(data, dict):
<ide> data = list(data.items())
<ide> markdown = ["* **{}:** {}".format(l, v) for l, v in data if not excl_value(v)]
<ide> if title:
<ide> def excl_value(value):
<ide>
<ide>
<ide> def prints(*texts, **kwargs):
<del> """
<del> Print formatted message. Each positional argument is rendered as newline-
<del> separated paragraph. An optional highlighted title is printed above the text
<del> (using ANSI escape sequences manually to avoid unnecessary dependency).
<add> """Print formatted message (manual ANSI escape sequences to avoid dependency)
<add>
<add> Args:
<add> *texts (unicode): Texts to print. Each argument is rendered as paragraph.
<add> **kwargs: 'title' is rendered as coloured headline. 'exits'=True performs
<add> system exit after printing.
<ide> """
<ide> exits = kwargs.get('exits', False)
<ide> title = kwargs.get('title', None)
<ide> def prints(*texts, **kwargs):
<ide>
<ide>
<ide> def _wrap(text, wrap_max=80, indent=4):
<del> """
<del> Wrap text at given width using textwrap module. Indent should consist of
<del> spaces. Its length is deducted from wrap width to ensure exact wrapping.
<add> """Wrap text at given width using textwrap module.
<add>
<add> Args:
<add> text (unicode): Text to wrap. If it's a Path, it's converted to string.
<add> wrap_max (int): Maximum line length (indent is deducted).
<add> indent (int): Number of spaces for indentation.
<add> Returns:
<add> unicode: Wrapped text.
<ide> """
<ide> indent = indent * ' '
<ide> wrap_width = wrap_max - len(indent) | 1 |
Javascript | Javascript | add dynamic output for legacy challenges | 9a97d639f515c410aac208b4f424716e6c0d4d26 | <ide><path>client/src/templates/Challenges/utils/build.js
<ide> export function challengeHasPreview({ challengeType }) {
<ide> }
<ide>
<ide> export function isJavaScriptChallenge({ challengeType }) {
<del> return challengeType === challengeTypes.js;
<add> return (
<add> challengeType === challengeTypes.js ||
<add> challengeType === challengeTypes.bonfire
<add> );
<ide> } | 1 |
Python | Python | pass options to route_for_task | 1b7a9f6187bfeae34b9abe38a721b0937ff08848 | <ide><path>celery/app/routes.py
<ide> def __init__(self, routes=None, queues=None,
<ide> def route(self, options, task, args=(), kwargs={}):
<ide> options = self.expand_destination(options) # expands 'queue'
<ide> if self.routes:
<del> route = self.lookup_route(task, args, kwargs)
<add> route = self.lookup_route(task, args, kwargs, options)
<ide> if route: # expands 'queue' in route.
<ide> return lpmerge(self.expand_destination(route), options)
<ide> if 'queue' not in options:
<ide> def expand_destination(self, route):
<ide> 'Queue {0!r} missing from task_queues'.format(queue))
<ide> return route
<ide>
<del> def lookup_route(self, task, args=None, kwargs=None):
<del> return _first_route(self.routes, task, args, kwargs)
<add> def lookup_route(self, task, args=None, kwargs=None, options=None):
<add> return _first_route(self.routes, task, args, kwargs, options)
<ide>
<ide>
<ide> def prepare(routes): | 1 |
PHP | PHP | fix style ci | d3e10fc137d997015e888e098845d8493af617e8 | <ide><path>tests/Database/DatabaseEloquentMorphToTest.php
<ide>
<ide> namespace Illuminate\Tests\Database;
<ide>
<del>use Doctrine\Instantiator\Exception\InvalidArgumentException;
<ide> use Illuminate\Database\Eloquent\Builder;
<ide> use Illuminate\Database\Eloquent\Model;
<ide> use Illuminate\Database\Eloquent\Relations\MorphTo;
<ide> protected function tearDown(): void
<ide>
<ide> public function testLookupDictionaryIsProperlyConstructedForEnums()
<ide> {
<del>
<del> if (version_compare(PHP_VERSION,'8.1') < 0) {
<add> if (version_compare(PHP_VERSION, '8.1') < 0) {
<ide> $this->markTestSkipped('PHP 8.1 is required');
<ide> } else {
<ide> $relation = $this->getRelation();
<ide> $relation->addEagerConstraints([
<del> $one = (object) ['morph_type' => 'morph_type_2', 'foreign_key' => TestEnum::test]
<add> $one = (object) ['morph_type' => 'morph_type_2', 'foreign_key' => TestEnum::test],
<ide> ]);
<ide> $dictionary = $relation->getDictionary();
<ide> $relation->getDictionary();
<ide> $value = $dictionary['morph_type_2'][TestEnum::test->value][0]->foreign_key;
<ide> $this->assertEquals(TestEnum::test, $value);
<ide> }
<del>
<ide> }
<ide>
<ide> public function testLookupDictionaryIsProperlyConstructed()
<ide><path>tests/Database/stubs/TestEnum.php
<ide>
<ide> enum TestEnum: string
<ide> {
<del> case test = "test";
<add> case test = 'test';
<ide>
<ide> } | 2 |
Javascript | Javascript | use fipsmode instead of common.hasfipscrypto | c0acece7ed70b3584474d646fcb8ef93204540b6 | <ide><path>test/parallel/test-cli-node-print-help.js
<add>// Flags: --expose-internals
<ide> 'use strict';
<ide>
<ide> const common = require('../common');
<ide> const common = require('../common');
<ide>
<ide> const assert = require('assert');
<ide> const { exec } = require('child_process');
<add>const { internalBinding } = require('internal/test/binding');
<add>const { fipsMode } = internalBinding('config');
<ide> let stdOut;
<ide>
<ide>
<ide> function startPrintHelpTest() {
<ide> function validateNodePrintHelp() {
<ide> const config = process.config;
<ide> const HAVE_OPENSSL = common.hasCrypto;
<del> const NODE_FIPS_MODE = common.hasFipsCrypto;
<ide> const NODE_HAVE_I18N_SUPPORT = common.hasIntl;
<ide> const HAVE_INSPECTOR = config.variables.v8_enable_inspector === 1;
<ide>
<ide> const cliHelpOptions = [
<ide> { compileConstant: HAVE_OPENSSL,
<ide> flags: [ '--openssl-config=...', '--tls-cipher-list=...',
<ide> '--use-bundled-ca', '--use-openssl-ca' ] },
<del> { compileConstant: NODE_FIPS_MODE,
<add> { compileConstant: fipsMode,
<ide> flags: [ '--enable-fips', '--force-fips' ] },
<ide> { compileConstant: NODE_HAVE_I18N_SUPPORT,
<ide> flags: [ '--icu-data-dir=...', 'NODE_ICU_DATA' ] }, | 1 |
Python | Python | add morph rules | 29ad8143d877b9a1ce505bfd1ca391487c9dc24e | <ide><path>spacy/en/morph_rules.py
<add># encoding: utf8
<add>from __future__ import unicode_literals
<add>
<add>from ..symbols import *
<add>from ..language_data import PRON_LEMMA
<add>
<add>
<add>MORPH_RULES = {
<add> "PRP": {
<add> "I": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Case": "Nom"},
<add> "me": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Case": "Acc"},
<add> "you": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two"},
<add> "he": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Case": "Nom"},
<add> "him": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Case": "Acc"},
<add> "she": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Case": "Nom"},
<add> "her": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Case": "Acc"},
<add> "it": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut"},
<add> "we": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Case": "Nom"},
<add> "us": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Case": "Acc"},
<add> "they": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Case": "Nom"},
<add> "them": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Case": "Acc"},
<add>
<add> "mine": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Poss": "Yes", "Reflex": "Yes"},
<add> "yours": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two", "Poss": "Yes", "Reflex": "Yes"},
<add> "his": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Poss": "Yes", "Reflex": "Yes"},
<add> "hers": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Poss": "Yes", "Reflex": "Yes"},
<add> "its": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut", "Poss": "Yes", "Reflex": "Yes"},
<add> "ours": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Poss": "Yes", "Reflex": "Yes"},
<add> "yours": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two", "Number": "Plur", "Poss": "Yes", "Reflex": "Yes"},
<add> "theirs": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Poss": "Yes", "Reflex": "Yes"},
<add>
<add> "myself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Case": "Acc", "Reflex": "Yes"},
<add> "yourself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two", "Case": "Acc", "Reflex": "Yes"},
<add> "himself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Case": "Acc", "Gender": "Masc", "Reflex": "Yes"},
<add> "herself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Case": "Acc", "Gender": "Fem", "Reflex": "Yes"},
<add> "itself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Case": "Acc", "Gender": "Neut", "Reflex": "Yes"},
<add> "themself": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Case": "Acc", "Reflex": "Yes"},
<add> "ourselves": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Case": "Acc", "Reflex": "Yes"},
<add> "yourselves": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two", "Case": "Acc", "Reflex": "Yes"},
<add> "themselves": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Case": "Acc", "Reflex": "Yes"}
<add> },
<add>
<add> "PRP$": {
<add> "my": {LEMMA: PRON_LEMMA, "Person": "One", "Number": "Sing", "PronType": "Prs", "Poss": "Yes"},
<add> "your": {LEMMA: PRON_LEMMA, "Person": "Two", "PronType": "Prs", "Poss": "Yes"},
<add> "his": {LEMMA: PRON_LEMMA, "Person": "Three", "Number": "Sing", "Gender": "Masc", "PronType": "Prs", "Poss": "Yes"},
<add> "her": {LEMMA: PRON_LEMMA, "Person": "Three", "Number": "Sing", "Gender": "Fem", "PronType": "Prs", "Poss": "Yes"},
<add> "its": {LEMMA: PRON_LEMMA, "Person": "Three", "Number": "Sing", "Gender": "Neut", "PronType": "Prs", "Poss": "Yes"},
<add> "our": {LEMMA: PRON_LEMMA, "Person": "One", "Number": "Plur", "PronType": "Prs", "Poss": "Yes"},
<add> "their": {LEMMA: PRON_LEMMA, "Person": "Three", "Number": "Plur", "PronType": "Prs", "Poss": "Yes"}
<add> },
<add>
<add> "VBZ": {
<add> "am": {LEMMA: "be", "VerbForm": "Fin", "Person": "One", "Tense": "Pres", "Mood": "Ind"},
<add> "are": {LEMMA: "be", "VerbForm": "Fin", "Person": "Two", "Tense": "Pres", "Mood": "Ind"},
<add> "is": {LEMMA: "be", "VerbForm": "Fin", "Person": "Three", "Tense": "Pres", "Mood": "Ind"},
<add> },
<add>
<add> "VBP": {
<add> "are": {LEMMA: "be", "VerbForm": "Fin", "Tense": "Pres", "Mood": "Ind"}
<add> },
<add>
<add> "VBD": {
<add> "was": {LEMMA: "be", "VerbForm": "Fin", "Tense": "Past", "Number": "Sing"},
<add> "were": {LEMMA: "be", "VerbForm": "Fin", "Tense": "Past", "Number": "Plur"}
<add> }
<add>} | 1 |
Text | Text | remove dco small patch exception | af72f2128e7c4ebd12a9fef7e7381ef42a52ed0c | <ide><path>CONTRIBUTING.md
<ide> Note that the old-style `Docker-DCO-1.1-Signed-off-by: ...` format is still
<ide> accepted, so there is no need to update outstanding pull requests to the new
<ide> format right away, but please do adjust your processes for future contributions.
<ide>
<del>#### Small patch exception
<del>
<del>There are several exceptions to the signing requirement. Currently these are:
<del>
<del>* Your patch fixes spelling or grammar errors.
<del>* Your patch is a single line change to documentation contained in the
<del> `docs` directory.
<del>* Your patch fixes Markdown formatting or syntax errors in the
<del> documentation contained in the `docs` directory.
<del>
<del>If you have any questions, please refer to the FAQ in the [docs](http://docs.docker.com)
<del>
<ide> ### How can I become a maintainer?
<ide>
<ide> * Step 1: Learn the component inside out | 1 |
Text | Text | add simple example to rename function | 1d2ab79f2ce0057df1c0f0fe5eef15c143ad9942 | <ide><path>doc/api/fs.md
<ide> changes:
<ide> * `callback` {Function}
<ide> * `err` {Error}
<ide>
<del>Asynchronous rename(2). No arguments other than a possible exception are given
<del>to the completion callback.
<add>Asynchronously rename file at `oldPath` to the pathname provided
<add>as `newPath`. In the case that `newPath` already exists, it will
<add>be overwritten. No arguments other than a possible exception are
<add>given to the completion callback.
<add>
<add>See also: rename(2).
<add>
<add>```js
<add>fs.rename('oldFile.txt', 'newFile.txt', (err) => {
<add> if (err) throw err;
<add> console.log('Rename complete!');
<add>});
<add>```
<ide>
<ide> ## fs.renameSync(oldPath, newPath)
<ide> <!-- YAML | 1 |
Java | Java | allow multiple locations via @propertysource#value | 2ceeff370aff402bd669f9125d93e99d09e8ce71 | <ide><path>org.springframework.context/src/main/java/org/springframework/context/annotation/ConfigurationClassParser.java
<ide> protected void doProcessConfigurationClass(ConfigurationClass configClass, Annot
<ide> metadata.getAnnotationAttributes(org.springframework.context.annotation.PropertySource.class.getName());
<ide> if (propertySourceAttributes != null) {
<ide> String name = (String) propertySourceAttributes.get("name");
<del> String location = (String) propertySourceAttributes.get("value");
<add> String[] locations = (String[]) propertySourceAttributes.get("value");
<ide> ClassLoader classLoader = this.resourceLoader.getClassLoader();
<del> ResourcePropertySource ps = StringUtils.hasText(name) ?
<del> new ResourcePropertySource(name, location, classLoader) :
<del> new ResourcePropertySource(location, classLoader);
<del> this.propertySources.push(ps);
<add> for (String location : locations) {
<add> ResourcePropertySource ps = StringUtils.hasText(name) ?
<add> new ResourcePropertySource(name, location, classLoader) :
<add> new ResourcePropertySource(location, classLoader);
<add> this.propertySources.push(ps);
<add> }
<ide> }
<ide>
<ide> // process any @ComponentScan annotions
<ide><path>org.springframework.context/src/main/java/org/springframework/context/annotation/PropertySource.java
<ide> String name() default "";
<ide>
<ide> /**
<del> * Indicate the resource location of the properties file to be loaded.
<add> * Indicate the resource location(s) of the properties file to be loaded.
<ide> * For example, {@code "classpath:/com/myco/app.properties"} or
<ide> * {@code "file:/path/to/file"}. Note that resource location wildcards
<del> * are not permitted, and that a location must evaluate to exactly one
<del> * {@code .properties} resource.
<add> * are not permitted, and that each location must evaluate to exactly one
<add> * {@code .properties} resource. Each location will be added to the
<add> * enclosing {@code Environment} as its own property source, and in the order
<add> * declared.
<ide> */
<del> String value();
<add> String[] value();
<ide>
<ide> } | 2 |
Text | Text | add link to webpack/webpacker pr | 061927b4a84770c71f26e706e8720e56eed32b58 | <ide><path>guides/source/5_1_release_notes.md
<ide> Rails 5.1 app.
<ide>
<ide> ### Optional Webpack support
<ide>
<add>[Pull Request](https://github.com/rails/rails/pull/27288)
<add>
<ide> Rails apps can integrate with [Webpack](https://webpack.js.org/), a JavaScript
<ide> asset bundler, more easily using the new [Webpacker](https://github.com/rails/webpacker)
<ide> gem. Use the `--webpack` flag when generating new applications to enable Webpack | 1 |
Javascript | Javascript | use ciphers supported by shared openssl | a974753676536e239fbd937e6ffebb74c84d7cbb | <ide><path>test/parallel/test-tls-ecdh-disable.js
<ide> const fs = require('fs');
<ide> const options = {
<ide> key: fs.readFileSync(`${common.fixturesDir}/keys/agent2-key.pem`),
<ide> cert: fs.readFileSync(`${common.fixturesDir}/keys/agent2-cert.pem`),
<del> ciphers: 'ECDHE-RSA-RC4-SHA',
<add> ciphers: 'ECDHE-RSA-AES128-SHA',
<ide> ecdhCurve: false
<ide> };
<ide>
<ide><path>test/parallel/test-tls-set-ciphers.js
<ide> const fs = require('fs');
<ide> const options = {
<ide> key: fs.readFileSync(`${common.fixturesDir}/keys/agent2-key.pem`),
<ide> cert: fs.readFileSync(`${common.fixturesDir}/keys/agent2-cert.pem`),
<del> ciphers: 'DES-CBC3-SHA'
<add> ciphers: 'AES256-SHA'
<ide> };
<ide>
<ide> const reply = 'I AM THE WALRUS'; // something recognizable | 2 |
Javascript | Javascript | add test for bug | 3892df207d7a4a3babf5017a309263cceff48b65 | <ide><path>src/attributes.js
<ide> jQuery.extend({
<ide>
<ide> // Check form objects in IE (multiple bugs related)
<ide> if ( isFormObjects ) {
<del> // Returns undefined for empty string, which is the blank nodeValue in IE
<add> // Return undefined for empty string, which is the blank nodeValue in IE
<ide> ret = elem.getAttributeNode( name ).nodeValue || undefined;
<ide> } else {
<ide> ret = elem.getAttribute( name );
<ide><path>test/unit/attributes.js
<ide> test("attr(Hash)", function() {
<ide> });
<ide>
<ide> test("attr(String, Object)", function() {
<del> expect(28);
<add> expect(29);
<ide>
<ide> var div = jQuery("div").attr("foo", "bar"),
<ide> fail = false;
<ide> test("attr(String, Object)", function() {
<ide> jQuery("#name").attr('someAttr', '0');
<ide> equals( jQuery("#name").attr('someAttr'), '0', 'Set attribute to a string of "0"' );
<ide> jQuery("#name").attr('someAttr', 0);
<del> equals( jQuery("#name").attr('someAttr'), 0, 'Set attribute to the number 0' );
<add> equals( jQuery("#name").attr('someAttr'), '0', 'Set attribute to the number 0' );
<ide> jQuery("#name").attr('someAttr', 1);
<del> equals( jQuery("#name").attr('someAttr'), 1, 'Set attribute to the number 1' );
<add> equals( jQuery("#name").attr('someAttr'), '1', 'Set attribute to the number 1' );
<ide>
<ide> // using contents will get comments regular, text, and comment nodes
<ide> var j = jQuery("#nonnodes").contents();
<ide> test("attr(String, Object)", function() {
<ide> j.removeAttr("name");
<ide>
<ide> QUnit.reset();
<del>
<add>
<add> // Type
<ide> var type = jQuery("#check2").attr('type');
<ide> var thrown = false;
<ide> try {
<ide> test("attr(String, Object)", function() {
<ide> }
<ide> ok( thrown, "Exception thrown when trying to change type property" );
<ide> equals( "button", button.attr('type'), "Verify that you can't change the type of a button element" );
<add>
<add> // Setting attributes on svg elements (bug #3116)
<add> var $svg = jQuery('<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" baseProfile="full" width="3000" height="3000">'
<add> + '<circle cx="200" cy="200" r="150" />'
<add> + '</svg>').appendTo('body');
<add> equals( $svg.attr('cx', 100).attr('cx'), "100", "Set attribute on svg element" );
<add> $svg.remove();
<ide> });
<ide>
<ide> test("attr(jquery_method)", function(){ | 2 |
Python | Python | specify dataset dtype | 8cbd0bd137d9f35c6e909cd602e6dde5866b8574 | <ide><path>tests/test_trainer.py
<ide> def test_trainer_with_datasets(self):
<ide> self.check_trained_model(trainer.model)
<ide>
<ide> # Can return tensors.
<del> train_dataset.set_format(type="torch")
<add> train_dataset.set_format(type="torch", dtype=torch.float32)
<ide> model = RegressionModel()
<ide> trainer = Trainer(model, args, train_dataset=train_dataset)
<ide> trainer.train() | 1 |
Javascript | Javascript | improve regex matching in challenge controller | 24f4648760b273906561cf41284de7cf7d89f484 | <ide><path>controllers/challenge.js
<ide> exports.returnCurrentChallenge = function(req, res, next) {
<ide> exports.returnIndividualChallenge = function(req, res, next) {
<ide> var dashedName = req.params.challengeName;
<ide>
<del> var challengeName = /^(bonfire|waypoint|zipline|basejump)/.test(dashedName) ? dashedName
<add> var challengeName = /^(bonfire|waypoint|zipline|basejump)/i.test(dashedName) ? dashedName
<ide> .replace(/\-/g, ' ')
<ide> .split(' ')
<ide> .slice(1)
<ide> .join(' ')
<del> : dashedName;
<add> : dashedName.replace(/\-/g, ' ');
<ide>
<ide> Challenge.find({'name': new RegExp(challengeName, 'i')},
<ide> function(err, challengeFromMongo) { | 1 |
Javascript | Javascript | remove react.createref api | a3a464c8fcd2fc9d04a5765f41f56315c12de44a | <ide><path>src/browser/ui/React.js
<ide> var ReactMount = require('ReactMount');
<ide> var ReactMultiChild = require('ReactMultiChild');
<ide> var ReactPerf = require('ReactPerf');
<ide> var ReactPropTypes = require('ReactPropTypes');
<del>var ReactRef = require('ReactRef');
<ide> var ReactServerRendering = require('ReactServerRendering');
<ide>
<ide> var assign = require('Object.assign');
<ide> var React = {
<ide> createClass: ReactClass.createClass,
<ide> createElement: createElement,
<ide> createFactory: createFactory,
<del> createRef: function() {
<del> return new ReactRef();
<del> },
<ide> constructAndRenderComponent: ReactMount.constructAndRenderComponent,
<ide> constructAndRenderComponentByID: ReactMount.constructAndRenderComponentByID,
<ide> findDOMNode: findDOMNode, | 1 |
Ruby | Ruby | set default array to details | aaa0c3279745e3405bc3279924e41cb641e1af8e | <ide><path>activemodel/lib/active_model/errors.rb
<ide> def details
<ide> group_by_attribute.each do |attribute, errors|
<ide> hash[attribute] = errors.map(&:detail)
<ide> end
<del> hash
<add> DeprecationHandlingDetailsHash.new(hash)
<ide> end
<ide>
<ide> def group_by_attribute
<ide> def <<(message)
<ide> end
<ide> end
<ide>
<add> class DeprecationHandlingDetailsHash < SimpleDelegator
<add> def initialize(details)
<add> details.default = []
<add> details.freeze
<add> super(details)
<add> end
<add> end
<add>
<ide> # Raised when a validation cannot be corrected by end users and are considered
<ide> # exceptional.
<ide> #
<ide><path>activemodel/test/cases/errors_test.rb
<ide> def test_no_key
<ide> assert_equal [:name], person.errors.details.keys
<ide> end
<ide>
<add> test "details returns empty array when accessed with non-existent attribute" do
<add> errors = ActiveModel::Errors.new(Person.new)
<add>
<add> assert_equal [], errors.details[:foo]
<add> end
<add>
<ide> test "copy errors" do
<ide> errors = ActiveModel::Errors.new(Person.new)
<ide> errors.add(:name, :invalid) | 2 |
Javascript | Javascript | consolidate duplicated method | aa4e4d20a4ce1cd0f58560acffcb2f9f40e6ed47 | <ide><path>packages/ember-metal/lib/chains.js
<ide> ChainNode.prototype = {
<ide> }
<ide>
<ide> if (this._parent) {
<del> this._parent.chainWillChange(this, this._key, 1, events);
<add> this._parent.notifyChainChange(this, this._key, 1, events);
<ide> }
<ide> },
<ide>
<del> chainWillChange(chain, path, depth, events) {
<add> notifyChainChange(chain, path, depth, events) {
<ide> if (this._key) {
<ide> path = this._key + '.' + path;
<ide> }
<ide>
<ide> if (this._parent) {
<del> this._parent.chainWillChange(this, path, depth + 1, events);
<del> } else {
<del> if (depth > 1) {
<del> events.push(this.value(), path);
<del> }
<del> path = 'this.' + path;
<del> if (this._paths[path] > 0) {
<del> events.push(this.value(), path);
<del> }
<del> }
<del> },
<del>
<del> chainDidChange(chain, path, depth, events) {
<del> if (this._key) {
<del> path = this._key + '.' + path;
<del> }
<del>
<del> if (this._parent) {
<del> this._parent.chainDidChange(this, path, depth + 1, events);
<add> this._parent.notifyChainChange(this, path, depth + 1, events);
<ide> } else {
<ide> if (depth > 1) {
<ide> events.push(this.value(), path);
<ide> ChainNode.prototype = {
<ide>
<ide> // and finally tell parent about my path changing...
<ide> if (this._parent) {
<del> this._parent.chainDidChange(this, this._key, 1, events);
<add> this._parent.notifyChainChange(this, this._key, 1, events);
<ide> }
<ide> }
<ide> }; | 1 |
Python | Python | use celery.states constants in unittests | b8dc2c3b5bf7b8e5fb19c802977d90e355c1d337 | <ide><path>celery/tests/test_backends/test_amqp.py
<ide> from __future__ import with_statement
<ide>
<ide> import sys
<del>import unittest
<ide> import errno
<add>import unittest
<ide>
<ide> from django.core.exceptions import ImproperlyConfigured
<ide>
<del>from celery.backends.amqp import AMQPBackend
<add>from celery import states
<ide> from celery.utils import gen_unique_id
<add>from celery.backends.amqp import AMQPBackend
<ide> from celery.datastructures import ExceptionInfo
<ide>
<ide>
<ide> def test_mark_as_done(self):
<ide>
<ide> tb.mark_as_done(tid, 42)
<ide> self.assertTrue(tb.is_successful(tid))
<del> self.assertEquals(tb.get_status(tid), "SUCCESS")
<add> self.assertEquals(tb.get_status(tid), states.SUCCESS)
<ide> self.assertEquals(tb.get_result(tid), 42)
<ide> self.assertTrue(tb._cache.get(tid))
<ide> self.assertTrue(tb.get_result(tid), 42)
<ide> def test_mark_as_failure(self):
<ide> einfo = ExceptionInfo(sys.exc_info())
<ide> tb.mark_as_failure(tid3, exception, traceback=einfo.traceback)
<ide> self.assertFalse(tb.is_successful(tid3))
<del> self.assertEquals(tb.get_status(tid3), "FAILURE")
<add> self.assertEquals(tb.get_status(tid3), states.FAILURE)
<ide> self.assertTrue(isinstance(tb.get_result(tid3), KeyError))
<ide> self.assertEquals(tb.get_traceback(tid3), einfo.traceback)
<ide>
<ide><path>celery/tests/test_backends/test_base.py
<ide> from billiard.serialization import UnpickleableExceptionWrapper
<ide> from billiard.serialization import get_pickleable_exception as gpe
<ide>
<add>from celery import states
<ide> from celery.backends.base import BaseBackend, KeyValueStoreBackend
<ide>
<ide>
<ide> def test_get_status(self):
<ide>
<ide> def test_store_result(self):
<ide> self.assertRaises(NotImplementedError,
<del> b.store_result, "SOMExx-N0nex1stant-IDxx-", 42, "SUCCESS")
<add> b.store_result, "SOMExx-N0nex1stant-IDxx-", 42, states.SUCCESS)
<ide>
<ide> def test_get_result(self):
<ide> self.assertRaises(NotImplementedError,
<ide><path>celery/tests/test_backends/test_cache.py
<ide>
<ide> from billiard.serialization import pickle
<ide>
<add>from celery import states
<ide> from celery.utils import gen_unique_id
<ide> from celery.backends.cache import CacheBackend
<ide> from celery.datastructures import ExceptionInfo
<ide> def test_mark_as_done(self):
<ide> tid = gen_unique_id()
<ide>
<ide> self.assertFalse(cb.is_successful(tid))
<del> self.assertEquals(cb.get_status(tid), "PENDING")
<add> self.assertEquals(cb.get_status(tid), states.PENDING)
<ide> self.assertEquals(cb.get_result(tid), None)
<ide>
<ide> cb.mark_as_done(tid, 42)
<ide> self.assertTrue(cb.is_successful(tid))
<del> self.assertEquals(cb.get_status(tid), "SUCCESS")
<add> self.assertEquals(cb.get_status(tid), states.SUCCESS)
<ide> self.assertEquals(cb.get_result(tid), 42)
<ide> self.assertTrue(cb._cache.get(tid))
<ide> self.assertTrue(cb.get_result(tid), 42)
<ide> def test_mark_as_failure(self):
<ide> pass
<ide> cb.mark_as_failure(tid3, exception, traceback=einfo.traceback)
<ide> self.assertFalse(cb.is_successful(tid3))
<del> self.assertEquals(cb.get_status(tid3), "FAILURE")
<add> self.assertEquals(cb.get_status(tid3), states.FAILURE)
<ide> self.assertTrue(isinstance(cb.get_result(tid3), KeyError))
<ide> self.assertEquals(cb.get_traceback(tid3), einfo.traceback)
<ide>
<ide><path>celery/tests/test_backends/test_database.py
<ide> import unittest
<ide> from datetime import timedelta
<ide>
<add>from celery import states
<ide> from celery.task import PeriodicTask
<ide> from celery.utils import gen_unique_id
<ide> from celery.backends.database import DatabaseBackend
<ide> def test_backend(self):
<ide> tid = gen_unique_id()
<ide>
<ide> self.assertFalse(b.is_successful(tid))
<del> self.assertEquals(b.get_status(tid), "PENDING")
<add> self.assertEquals(b.get_status(tid), states.PENDING)
<ide> self.assertTrue(b.get_result(tid) is None)
<ide>
<ide> b.mark_as_done(tid, 42)
<ide> self.assertTrue(b.is_successful(tid))
<del> self.assertEquals(b.get_status(tid), "SUCCESS")
<add> self.assertEquals(b.get_status(tid), states.SUCCESS)
<ide> self.assertEquals(b.get_result(tid), 42)
<ide> self.assertTrue(b._cache.get(tid))
<ide> self.assertTrue(b.get_result(tid), 42)
<ide> def test_backend(self):
<ide> pass
<ide> b.mark_as_failure(tid3, exception)
<ide> self.assertFalse(b.is_successful(tid3))
<del> self.assertEquals(b.get_status(tid3), "FAILURE")
<add> self.assertEquals(b.get_status(tid3), states.FAILURE)
<ide> self.assertTrue(isinstance(b.get_result(tid3), KeyError))
<ide>
<ide> def test_taskset_store(self):
<ide><path>celery/tests/test_backends/test_redis.py
<ide>
<ide> from django.core.exceptions import ImproperlyConfigured
<ide>
<add>from celery import states
<add>from celery.utils import gen_unique_id
<ide> from celery.backends import pyredis
<ide> from celery.backends.pyredis import RedisBackend
<del>from celery.utils import gen_unique_id
<ide>
<ide> _no_redis_msg = "* Redis %s. Will not execute related tests."
<ide> _no_redis_msg_emitted = False
<ide> def test_mark_as_done(self):
<ide> tid = gen_unique_id()
<ide>
<ide> self.assertFalse(tb.is_successful(tid))
<del> self.assertEquals(tb.get_status(tid), "PENDING")
<add> self.assertEquals(tb.get_status(tid), states.PENDING)
<ide> self.assertEquals(tb.get_result(tid), None)
<ide>
<ide> tb.mark_as_done(tid, 42)
<ide> self.assertTrue(tb.is_successful(tid))
<del> self.assertEquals(tb.get_status(tid), "SUCCESS")
<add> self.assertEquals(tb.get_status(tid), states.SUCCESS)
<ide> self.assertEquals(tb.get_result(tid), 42)
<ide> self.assertTrue(tb._cache.get(tid))
<ide> self.assertTrue(tb.get_result(tid), 42)
<ide> def test_mark_as_failure(self):
<ide> pass
<ide> tb.mark_as_failure(tid3, exception)
<ide> self.assertFalse(tb.is_successful(tid3))
<del> self.assertEquals(tb.get_status(tid3), "FAILURE")
<add> self.assertEquals(tb.get_status(tid3), states.FAILURE)
<ide> self.assertTrue(isinstance(tb.get_result(tid3), KeyError))
<ide>
<ide> def test_process_cleanup(self):
<ide><path>celery/tests/test_backends/test_tyrant.py
<ide> import sys
<del>import unittest
<ide> import errno
<ide> import socket
<add>import unittest
<add>
<add>from django.core.exceptions import ImproperlyConfigured
<add>
<add>from celery import states
<add>from celery.utils import gen_unique_id
<ide> from celery.backends import tyrant
<ide> from celery.backends.tyrant import TyrantBackend
<del>from celery.utils import gen_unique_id
<del>from django.core.exceptions import ImproperlyConfigured
<ide>
<ide> _no_tyrant_msg = "* Tokyo Tyrant %s. Will not execute related tests."
<ide> _no_tyrant_msg_emitted = False
<ide> def test_mark_as_done(self):
<ide> tid = gen_unique_id()
<ide>
<ide> self.assertFalse(tb.is_successful(tid))
<del> self.assertEquals(tb.get_status(tid), "PENDING")
<add> self.assertEquals(tb.get_status(tid), states.PENDING)
<ide> self.assertEquals(tb.get_result(tid), None)
<ide>
<ide> tb.mark_as_done(tid, 42)
<ide> self.assertTrue(tb.is_successful(tid))
<del> self.assertEquals(tb.get_status(tid), "SUCCESS")
<add> self.assertEquals(tb.get_status(tid), states.SUCCESS)
<ide> self.assertEquals(tb.get_result(tid), 42)
<ide> self.assertTrue(tb._cache.get(tid))
<ide> self.assertTrue(tb.get_result(tid), 42)
<ide> def test_mark_as_failure(self):
<ide> pass
<ide> tb.mark_as_failure(tid3, exception)
<ide> self.assertFalse(tb.is_successful(tid3))
<del> self.assertEquals(tb.get_status(tid3), "FAILURE")
<add> self.assertEquals(tb.get_status(tid3), states.FAILURE)
<ide> self.assertTrue(isinstance(tb.get_result(tid3), KeyError))
<ide>
<ide> def test_process_cleanup(self):
<ide><path>celery/tests/test_models.py
<ide> import unittest
<ide> from datetime import datetime, timedelta
<ide>
<add>from celery import states
<ide> from celery.utils import gen_unique_id
<ide> from celery.models import TaskMeta, TaskSetMeta
<ide>
<ide> def test_taskmeta(self):
<ide> self.assertEquals(TaskMeta.objects.get_task(m1.task_id).task_id,
<ide> m1.task_id)
<ide> self.assertFalse(
<del> TaskMeta.objects.get_task(m1.task_id).status == "SUCCESS")
<del> TaskMeta.objects.store_result(m1.task_id, True, status="SUCCESS")
<del> TaskMeta.objects.store_result(m2.task_id, True, status="SUCCESS")
<add> TaskMeta.objects.get_task(m1.task_id).status == states.SUCCESS)
<add> TaskMeta.objects.store_result(m1.task_id, True, status=states.SUCCESS)
<add> TaskMeta.objects.store_result(m2.task_id, True, status=states.SUCCESS)
<ide> self.assertTrue(
<del> TaskMeta.objects.get_task(m1.task_id).status == "SUCCESS")
<add> TaskMeta.objects.get_task(m1.task_id).status == states.SUCCESS)
<ide> self.assertTrue(
<del> TaskMeta.objects.get_task(m2.task_id).status == "SUCCESS")
<add> TaskMeta.objects.get_task(m2.task_id).status == states.SUCCESS)
<ide>
<ide> # Have to avoid save() because it applies the auto_now=True.
<ide> TaskMeta.objects.filter(task_id=m1.task_id).update(
<ide><path>celery/tests/test_result.py
<ide> import unittest
<ide>
<add>from celery import states
<ide> from celery.utils import gen_unique_id
<ide> from celery.tests.utils import skip_if_quick
<ide> from celery.result import AsyncResult, TaskSetResult
<ide> def mock_task(name, status, result):
<ide>
<ide> def save_result(task):
<ide> traceback = "Some traceback"
<del> if task["status"] == "SUCCESS":
<add> if task["status"] == states.SUCCESS:
<ide> default_backend.mark_as_done(task["id"], task["result"])
<del> elif task["status"] == "RETRY":
<add> elif task["status"] == states.RETRY:
<ide> default_backend.mark_as_retry(task["id"], task["result"],
<ide> traceback=traceback)
<ide> else:
<ide> def save_result(task):
<ide>
<ide>
<ide> def make_mock_taskset(size=10):
<del> tasks = [mock_task("ts%d" % i, "SUCCESS", i) for i in xrange(size)]
<add> tasks = [mock_task("ts%d" % i, states.SUCCESS, i) for i in xrange(size)]
<ide> [save_result(task) for task in tasks]
<ide> return [AsyncResult(task["id"]) for task in tasks]
<ide>
<ide>
<ide> class TestAsyncResult(unittest.TestCase):
<ide>
<ide> def setUp(self):
<del> self.task1 = mock_task("task1", "SUCCESS", "the")
<del> self.task2 = mock_task("task2", "SUCCESS", "quick")
<del> self.task3 = mock_task("task3", "FAILURE", KeyError("brown"))
<del> self.task4 = mock_task("task3", "RETRY", KeyError("red"))
<add> self.task1 = mock_task("task1", states.SUCCESS, "the")
<add> self.task2 = mock_task("task2", states.SUCCESS, "quick")
<add> self.task3 = mock_task("task3", states.FAILURE, KeyError("brown"))
<add> self.task4 = mock_task("task3", states.RETRY, KeyError("red"))
<ide>
<ide> for task in (self.task1, self.task2, self.task3, self.task4):
<ide> save_result(task)
<ide> def result(self):
<ide>
<ide> @property
<ide> def status(self):
<del> return "FAILURE"
<add> return states.FAILURE
<ide>
<ide>
<ide> class MockAsyncResultSuccess(AsyncResult):
<ide> def result(self):
<ide>
<ide> @property
<ide> def status(self):
<del> return "SUCCESS"
<add> return states.SUCCESS
<ide>
<ide>
<ide> class TestTaskSetResult(unittest.TestCase):
<ide> class TestFailedTaskSetResult(TestTaskSetResult):
<ide> def setUp(self):
<ide> self.size = 11
<ide> subtasks = make_mock_taskset(10)
<del> failed = mock_task("ts11", "FAILED", KeyError("Baz"))
<add> failed = mock_task("ts11", states.FAILURE, KeyError("Baz"))
<ide> save_result(failed)
<ide> failed_res = AsyncResult(failed["id"])
<ide> self.ts = TaskSetResult(gen_unique_id(), subtasks + [failed_res])
<ide><path>celery/tests/test_views.py
<ide> from billiard.utils.functional import curry
<ide>
<ide> from celery import conf
<add>from celery import states
<ide> from celery.utils import gen_unique_id, get_full_cls_name
<ide> from celery.backends import default_backend
<ide> from celery.exceptions import RetryTaskError
<ide> def assertStatusForIs(self, status, res, traceback=None):
<ide> self.assertJSONEquals(json, dict(task=expect))
<ide>
<ide> def test_task_status_success(self):
<del> self.assertStatusForIs("SUCCESS", "The quick brown fox")
<add> self.assertStatusForIs(states.SUCCESS, "The quick brown fox")
<ide>
<ide> def test_task_status_failure(self):
<ide> exc, tb = catch_exception(KeyError("foo"))
<del> self.assertStatusForIs("FAILURE", exc, tb)
<add> self.assertStatusForIs(states.FAILURE, exc, tb)
<ide>
<ide> def test_task_status_retry(self):
<ide> oexc, _ = catch_exception(KeyError("Resource not available"))
<ide> exc, tb = catch_exception(RetryTaskError(str(oexc), oexc))
<del> self.assertStatusForIs("RETRY", exc, tb)
<add> self.assertStatusForIs(states.RETRY, exc, tb)
<ide>
<ide>
<ide> class TestTaskIsSuccessful(ViewTestCase):
<ide> def assertStatusForIs(self, status, outcome):
<ide> "executed": outcome}})
<ide>
<ide> def test_is_successful_success(self):
<del> self.assertStatusForIs("SUCCESS", True)
<add> self.assertStatusForIs(states.SUCCESS, True)
<ide>
<ide> def test_is_successful_pending(self):
<del> self.assertStatusForIs("PENDING", False)
<add> self.assertStatusForIs(states.PENDING, False)
<ide>
<ide> def test_is_successful_failure(self):
<del> self.assertStatusForIs("FAILURE", False)
<add> self.assertStatusForIs(states.FAILURE, False)
<ide>
<ide> def test_is_successful_retry(self):
<del> self.assertStatusForIs("RETRY", False)
<add> self.assertStatusForIs(states.RETRY, False)
<ide><path>celery/tests/test_worker_job.py
<ide> from django.core import cache
<ide> from carrot.backends.base import BaseMessage
<ide>
<add>from celery import states
<ide> from celery.log import setup_logger
<ide> from celery.task.base import Task
<ide> from celery.utils import gen_unique_id
<ide> def test_worker_task_trace_handle_retry(self):
<ide> exc=value_))
<ide> w._store_errors = False
<ide> w.handle_retry(value_, type_, tb_, "")
<del> self.assertEquals(mytask.backend.get_status(uuid), "PENDING")
<add> self.assertEquals(mytask.backend.get_status(uuid), states.PENDING)
<ide> w._store_errors = True
<ide> w.handle_retry(value_, type_, tb_, "")
<del> self.assertEquals(mytask.backend.get_status(uuid), "RETRY")
<add> self.assertEquals(mytask.backend.get_status(uuid), states.RETRY)
<ide>
<ide> def test_worker_task_trace_handle_failure(self):
<ide> from celery.worker.job import WorkerTaskTrace
<ide> def test_worker_task_trace_handle_failure(self):
<ide> type_, value_, tb_ = self.create_exception(ValueError("foo"))
<ide> w._store_errors = False
<ide> w.handle_failure(value_, type_, tb_, "")
<del> self.assertEquals(mytask.backend.get_status(uuid), "PENDING")
<add> self.assertEquals(mytask.backend.get_status(uuid), states.PENDING)
<ide> w._store_errors = True
<ide> w.handle_failure(value_, type_, tb_, "")
<del> self.assertEquals(mytask.backend.get_status(uuid), "FAILURE")
<add> self.assertEquals(mytask.backend.get_status(uuid), states.FAILURE)
<ide>
<ide> def test_executed_bit(self):
<ide> from celery.worker.job import AlreadyExecutedError
<ide> def test_execute(self):
<ide> self.assertEquals(tw.execute(), 256)
<ide> meta = TaskMeta.objects.get(task_id=tid)
<ide> self.assertEquals(meta.result, 256)
<del> self.assertEquals(meta.status, "SUCCESS")
<add> self.assertEquals(meta.status, states.SUCCESS)
<ide>
<ide> def test_execute_success_no_kwargs(self):
<ide> tid = gen_unique_id()
<ide> tw = TaskWrapper(mytask_no_kwargs.name, tid, [4], {})
<ide> self.assertEquals(tw.execute(), 256)
<ide> meta = TaskMeta.objects.get(task_id=tid)
<ide> self.assertEquals(meta.result, 256)
<del> self.assertEquals(meta.status, "SUCCESS")
<add> self.assertEquals(meta.status, states.SUCCESS)
<ide>
<ide> def test_execute_success_some_kwargs(self):
<ide> tid = gen_unique_id()
<ide> def test_execute_success_some_kwargs(self):
<ide> meta = TaskMeta.objects.get(task_id=tid)
<ide> self.assertEquals(some_kwargs_scratchpad.get("logfile"), "foobaz.log")
<ide> self.assertEquals(meta.result, 256)
<del> self.assertEquals(meta.status, "SUCCESS")
<add> self.assertEquals(meta.status, states.SUCCESS)
<ide>
<ide> def test_execute_ack(self):
<ide> tid = gen_unique_id()
<ide> def test_execute_ack(self):
<ide> meta = TaskMeta.objects.get(task_id=tid)
<ide> self.assertTrue(scratch["ACK"])
<ide> self.assertEquals(meta.result, 256)
<del> self.assertEquals(meta.status, "SUCCESS")
<add> self.assertEquals(meta.status, states.SUCCESS)
<ide>
<ide> def test_execute_fail(self):
<ide> tid = gen_unique_id()
<ide> tw = TaskWrapper(mytask_raising.name, tid, [4], {"f": "x"})
<ide> self.assertTrue(isinstance(tw.execute(), ExceptionInfo))
<ide> meta = TaskMeta.objects.get(task_id=tid)
<del> self.assertEquals(meta.status, "FAILURE")
<add> self.assertEquals(meta.status, states.FAILURE)
<ide> self.assertTrue(isinstance(meta.result, KeyError))
<ide>
<ide> def test_execute_using_pool(self): | 10 |
PHP | PHP | apply style ci | 01d6896cb402d7b641c84ed8d541b1bc96af34ce | <ide><path>tests/Database/DatabaseEloquentModelTest.php
<ide> public function testDirtyAttributes()
<ide> $this->assertTrue($model->isDirty(['foo', 'bar']));
<ide> }
<ide>
<del> public function testDirtyOnCastOrDateAttributes(){
<add> public function testDirtyOnCastOrDateAttributes()
<add> {
<ide> $model = new EloquentModelCastingStub;
<ide> $model->setDateFormat('Y-m-d H:i:s');
<ide> $model->boolAttribute = 1; | 1 |
Javascript | Javascript | remove eslint comments and rename variables | 6cc74b038fa0d2e64e8060415487706117b4a77c | <ide><path>lib/internal/util/inspect.js
<ide> function formatPrimitive(fn, value, ctx) {
<ide> if (ctx.compact === false &&
<ide> ctx.indentationLvl + value.length > ctx.breakLength &&
<ide> value.length > kMinLineLength) {
<del> // eslint-disable-next-line max-len
<del> const minLineLength = Math.max(ctx.breakLength - ctx.indentationLvl, kMinLineLength);
<del> // eslint-disable-next-line max-len
<del> const averageLineLength = Math.ceil(value.length / Math.ceil(value.length / minLineLength));
<add> const rawMaxLineLength = ctx.breakLength - ctx.indentationLvl;
<add> const maxLineLength = Math.max(rawMaxLineLength, kMinLineLength);
<add> const lines = Math.ceil(value.length / maxLineLength);
<add> const averageLineLength = Math.ceil(value.length / lines);
<ide> const divisor = Math.max(averageLineLength, kMinLineLength);
<del> let res = '';
<ide> if (readableRegExps[divisor] === undefined) {
<ide> // Build a new RegExp that naturally breaks text into multiple lines.
<ide> //
<ide> function formatPrimitive(fn, value, ctx) {
<ide> const matches = value.match(readableRegExps[divisor]);
<ide> if (matches.length > 1) {
<ide> const indent = ' '.repeat(ctx.indentationLvl);
<del> res += `${fn(strEscape(matches[0]), 'string')} +\n`;
<add> let res = `${fn(strEscape(matches[0]), 'string')} +\n`;
<ide> for (var i = 1; i < matches.length - 1; i++) {
<ide> res += `${indent} ${fn(strEscape(matches[i]), 'string')} +\n`;
<ide> } | 1 |
Text | Text | fix broken link to database design book | b6e1586d382b7e1f8f27776c3f422ef59efcd33a | <ide><path>docs/recipes/structuring-reducers/PrerequisiteConcepts.md
<ide> Because of these rules, it's important that the following core concepts are full
<ide> - [Redux Without Profanity: Normalizr](https://tonyhb.gitbooks.io/redux-without-profanity/content/normalizer.html)
<ide> - [Querying a Redux Store](https://medium.com/@adamrackis/querying-a-redux-store-37db8c7f3b0f)
<ide> - [Wikipedia: Associative Entity](https://en.wikipedia.org/wiki/Associative_entity)
<del>- [Database Design: Many-to-Many](http://www.tomjewett.com/dbdesign/dbdesign.php?page=manymany.php)
<add>- [Database Design: Many-to-Many](http://web.csulb.edu/colleges/coe/cecs/dbdesign/dbdesign.php?page=manymany.php)
<ide> - [Avoiding Accidental Complexity When Structuring Your App State](https://medium.com/@talkol/avoiding-accidental-complexity-when-structuring-your-app-state-6e6d22ad5e2a) | 1 |