content_type stringclasses 8 values | main_lang stringclasses 7 values | message stringlengths 1 50 | sha stringlengths 40 40 | patch stringlengths 52 962k | file_count int64 1 300 |
|---|---|---|---|---|---|
Python | Python | fix missing message in validationerror | ccf3c508bd6750073ea3bbaefff567b92880df73 | <ide><path>rest_framework/authtoken/serializers.py
<ide> def validate(self, attrs):
<ide> if user:
<ide> if not user.is_active:
<ide> msg = _('User account is disabled.')
<del> raise serializers.ValidationError()
<add> raise serializers.ValidationError(msg)
<ide> attrs['user'] = user
<ide> return attrs
<ide> else: | 1 |
Mixed | Javascript | add highwatermark option for connect | 58682d823acb0e566f16d1d8b33b83dfebf3aa5d | <ide><path>doc/api/https.md
<ide> Global instance of [`https.Agent`][] for all HTTPS client requests.
<ide> <!-- YAML
<ide> added: v0.3.6
<ide> changes:
<add> - version: REPLACEME
<add> pr-url: https://github.com/nodejs/node/pull/32786
<add> description: The `highWaterMark` option is accepted now.
<ide> - version: v10.9.0
<ide> pr-url: https://github.com/nodejs/node/pull/21616
<ide> description: The `url` parameter can now be passed along with a separate
<ide> Makes a request to a secure web server.
<ide> The following additional `options` from [`tls.connect()`][] are also accepted:
<ide> `ca`, `cert`, `ciphers`, `clientCertEngine`, `crl`, `dhparam`, `ecdhCurve`,
<ide> `honorCipherOrder`, `key`, `passphrase`, `pfx`, `rejectUnauthorized`,
<del>`secureOptions`, `secureProtocol`, `servername`, `sessionIdContext`.
<add>`secureOptions`, `secureProtocol`, `servername`, `sessionIdContext`,
<add>`highWaterMark`.
<ide>
<ide> `options` can be an object, a string, or a [`URL`][] object. If `options` is a
<ide> string, it is automatically parsed with [`new URL()`][]. If it is a [`URL`][]
<ide><path>doc/api/tls.md
<ide> being issued by trusted CA (`options.ca`).
<ide> <!-- YAML
<ide> added: v0.11.3
<ide> changes:
<add> - version: REPLACEME
<add> pr-url: https://github.com/nodejs/node/pull/32786
<add> description: The `highWaterMark` option is accepted now.
<ide> - version: v13.6.0
<ide> pr-url: https://github.com/nodejs/node/pull/23188
<ide> description: The `pskCallback` option is now supported.
<ide> changes:
<ide> TLS connection. When a server offers a DH parameter with a size less
<ide> than `minDHSize`, the TLS connection is destroyed and an error is thrown.
<ide> **Default:** `1024`.
<add> * `highWaterMark`: {number} Consistent with the readable stream `highWaterMark` parameter.
<add> **Default:** `16 * 1024`.
<ide> * `secureContext`: TLS context object created with
<ide> [`tls.createSecureContext()`][]. If a `secureContext` is _not_ provided, one
<ide> will be created by passing the entire `options` object to
<ide><path>lib/_tls_wrap.js
<ide> function TLSSocket(socket, opts) {
<ide> handle: this._wrapHandle(wrap),
<ide> allowHalfOpen: socket ? socket.allowHalfOpen : tlsOptions.allowHalfOpen,
<ide> pauseOnCreate: tlsOptions.pauseOnConnect,
<del> manualStart: true
<add> manualStart: true,
<add> highWaterMark: tlsOptions.highWaterMark,
<ide> });
<ide>
<ide> // Proxy for API compatibility
<ide> exports.connect = function connect(...args) {
<ide> requestOCSP: options.requestOCSP,
<ide> enableTrace: options.enableTrace,
<ide> pskCallback: options.pskCallback,
<add> highWaterMark: options.highWaterMark,
<ide> });
<ide>
<ide> tlssock[kConnectOptions] = options;
<ide><path>test/parallel/test-https-hwm.js
<add>'use strict';
<add>
<add>// Test https highWaterMark
<add>
<add>const common = require('../common');
<add>if (!common.hasCrypto)
<add> common.skip('missing crypto');
<add>
<add>const assert = require('assert');
<add>const https = require('https');
<add>const fixtures = require('../common/fixtures');
<add>
<add>let counter = 0;
<add>
<add>function loadCallback(highWaterMark) {
<add> return common.mustCall(function(res) {
<add> assert.strictEqual(highWaterMark, res.readableHighWaterMark);
<add> counter--;
<add> console.log('back from https request. ',
<add> `highWaterMark = ${res.readableHighWaterMark}`);
<add> if (counter === 0) {
<add> httpsServer.close();
<add> console.log('ok');
<add> }
<add> res.resume();
<add> });
<add>}
<add>
<add>// create server
<add>const httpsServer = https.createServer({
<add> key: fixtures.readKey('agent1-key.pem'),
<add> cert: fixtures.readKey('agent1-cert.pem')
<add>}, common.mustCall(function(req, res) {
<add> res.writeHead(200, {});
<add> res.end('ok');
<add>}, 3)).listen(0, common.mustCall(function(err) {
<add> console.log(`test https server listening on port ${this.address().port}`);
<add> assert.ifError(err);
<add>
<add> https.request({
<add> method: 'GET',
<add> path: `/${counter++}`,
<add> host: 'localhost',
<add> port: this.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: 128000,
<add> }, loadCallback(128000)).on('error', common.mustNotCall()).end();
<add>
<add> https.request({
<add> method: 'GET',
<add> path: `/${counter++}`,
<add> host: 'localhost',
<add> port: this.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: 0,
<add> }, loadCallback(0)).on('error', common.mustNotCall()).end();
<add>
<add> https.request({
<add> method: 'GET',
<add> path: `/${counter++}`,
<add> host: 'localhost',
<add> port: this.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: undefined,
<add> }, loadCallback(16 * 1024)).on('error', common.mustNotCall()).end();
<add>}));
<ide><path>test/parallel/test-tls-connect-hwm-option.js
<add>'use strict';
<add>
<add>const common = require('../common');
<add>if (!common.hasCrypto)
<add> common.skip('missing crypto');
<add>
<add>const assert = require('assert');
<add>const tls = require('tls');
<add>const fixtures = require('../common/fixtures');
<add>
<add>const pem = (n) => fixtures.readKey(`${n}.pem`);
<add>
<add>let clients = 0;
<add>
<add>const server = tls.createServer({
<add> key: pem('agent1-key'),
<add> cert: pem('agent1-cert')
<add>}, common.mustCall(() => {
<add> if (--clients === 0)
<add> server.close();
<add>}, 3));
<add>
<add>server.listen(0, common.mustCall(() => {
<add> clients++;
<add> const highBob = tls.connect({
<add> port: server.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: 128000,
<add> }, common.mustCall(() => {
<add> assert.strictEqual(highBob.readableHighWaterMark, 128000);
<add> highBob.end();
<add> }));
<add>
<add> clients++;
<add> const defaultHighBob = tls.connect({
<add> port: server.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: undefined,
<add> }, common.mustCall(() => {
<add> assert.strictEqual(defaultHighBob.readableHighWaterMark, 16 * 1024);
<add> defaultHighBob.end();
<add> }));
<add>
<add> clients++;
<add> const zeroHighBob = tls.connect({
<add> port: server.address().port,
<add> rejectUnauthorized: false,
<add> highWaterMark: 0,
<add> }, common.mustCall(() => {
<add> assert.strictEqual(zeroHighBob.readableHighWaterMark, 0);
<add> zeroHighBob.end();
<add> }));
<add>})); | 5 |
Python | Python | fix invalid parameter types used in `dtype` | a5beccfa3574f4fcb1b6030737b728e65803791f | <ide><path>numpy/array_api/_typing.py
<ide> "PyCapsule",
<ide> ]
<ide>
<del>from typing import Any, Literal, Sequence, Type, Union
<add>import sys
<add>from typing import Any, Literal, Sequence, Type, Union, TYPE_CHECKING
<ide>
<del>from . import (
<del> Array,
<add>from . import Array
<add>from numpy import (
<add> dtype,
<ide> int8,
<ide> int16,
<ide> int32,
<ide> NestedSequence = Sequence[Sequence[Any]]
<ide>
<ide> Device = Literal["cpu"]
<del>Dtype = Type[
<del> Union[int8, int16, int32, int64, uint8, uint16, uint32, uint64, float32, float64]
<del>]
<add>if TYPE_CHECKING or sys.version_info >= (3, 9):
<add> Dtype = dtype[Union[
<add> int8,
<add> int16,
<add> int32,
<add> int64,
<add> uint8,
<add> uint16,
<add> uint32,
<add> uint64,
<add> float32,
<add> float64,
<add> ]]
<add>else:
<add> Dtype = dtype
<add>
<ide> SupportsDLPack = Any
<ide> SupportsBufferProtocol = Any
<ide> PyCapsule = Any | 1 |
Text | Text | add changelog entry | f945178a601f18c2c649fd925ca069da8c0f1547 | <ide><path>railties/CHANGELOG.md
<ide> ## Rails 4.0.0 (unreleased) ##
<ide>
<add>* Rake test:uncommitted finds git directory in ancestors *Nicolas Despres*
<add>
<ide> * Add dummy app Rake tasks when --skip-test-unit and --dummy-path is passed to the plugin generator.
<ide> Fix #8121
<ide> | 1 |
Javascript | Javascript | add displayname to nested render warnings [] | 28f50c8a78cffc0b0e9d60e4a7c77cc14a8e80c7 | <ide><path>src/browser/ui/ReactMount.js
<ide> var ReactMount = {
<ide> // verify that that's the case.
<ide> warning(
<ide> ReactCurrentOwner.current == null,
<del> '_renderNewRootComponent(): Render methods should be a pure function ' +
<del> 'of props and state; triggering nested component updates from ' +
<add> '%s._renderNewRootComponent(): Render methods should be a pure ' +
<add> 'function of props and state; triggering nested component updates from ' +
<ide> 'render is not allowed. If necessary, trigger nested updates in ' +
<del> 'componentDidUpdate.'
<add> 'componentDidUpdate.',
<add> this.constructor.displayName || 'ReactCompositeComponent'
<ide> );
<ide>
<ide> var componentInstance = instantiateReactComponent(nextComponent, null);
<ide> var ReactMount = {
<ide> // render but we still don't expect to be in a render call here.)
<ide> warning(
<ide> ReactCurrentOwner.current == null,
<del> 'unmountComponentAtNode(): Render methods should be a pure function of ' +
<del> 'props and state; triggering nested component updates from render is ' +
<del> 'not allowed. If necessary, trigger nested updates in ' +
<del> 'componentDidUpdate.'
<add> '%s.unmountComponentAtNode(): Render methods should be a pure function ' +
<add> 'of props and state; triggering nested component updates from render ' +
<add> 'is not allowed. If necessary, trigger nested updates in ' +
<add> 'componentDidUpdate.',
<add> this.constructor.displayName || 'ReactCompositeComponent'
<ide> );
<ide>
<ide> invariant(
<ide><path>src/core/__tests__/ReactCompositeComponent-test.js
<ide> describe('ReactCompositeComponent', function() {
<ide> ReactTestUtils.renderIntoDocument(<Outer />);
<ide> expect(console.warn.argsForCall.length).toBe(1);
<ide> expect(console.warn.argsForCall[0][0]).toBe(
<del> 'Warning: _renderNewRootComponent(): Render methods should ' +
<add> 'Warning: ReactCompositeComponent._renderNewRootComponent(): Render methods should ' +
<ide> 'be a pure function of props and state; triggering nested component ' +
<ide> 'updates from render is not allowed. If necessary, trigger nested ' +
<ide> 'updates in componentDidUpdate.' | 2 |
Javascript | Javascript | remove unneeded comment task | 8704c58fc4fff2d2ae2ccec23c0951d37667904c | <ide><path>test/parallel/test-http-url.parse-only-support-http-https-protocol.js
<ide> assert.throws(function() {
<ide> return true;
<ide> }
<ide> });
<del>
<del>//TODO do I need to test url.parse(notPrococol.example.com)? | 1 |
Text | Text | add v3.1.0-beta.5 to changelog | 66c39c2f1888c6533f0d807d7f4804ec2bfbcb65 | <ide><path>CHANGELOG.md
<ide> # Ember Changelog
<ide>
<add>### v3.1.0-beta.5 (March 12, 2018)
<add>- [#15601](https://github.com/emberjs/ember.js/pull/15601) [BUGFIX] Ensure Mixin.prototype.toString does not return constructor code
<add>- [#16326](https://github.com/emberjs/ember.js/pull/16326) [BUGFIX] Expanded syntax error for if handlebars helper to include source of error
<add>- [#16347](https://github.com/emberjs/ember.js/pull/16347) [BUGFIX] Adds toJSON to list of descriptorTrap assertion exception
<add>- [#16350](https://github.com/emberjs/ember.js/pull/16350) [BUGFIX] Fix initialiters tests blueprints
<add>- [#16351](https://github.com/emberjs/ember.js/pull/16351) [BUGFIX] Bring RSVP.cast back from the dead
<add>- [#16365](https://github.com/emberjs/ember.js/pull/16365) [BUGFIX] Fold all trap methods together
<add>
<ide> ### v3.1.0-beta.4 (March 5, 2018)
<ide> - [#16294](https://github.com/emberjs/ember.js/pull/16294) [BUGFIX] Fix input macro params handling
<ide> - [#16297](https://github.com/emberjs/ember.js/pull/16297) [BUGFIX] Revert "Update to backburner.js@2.2.0." | 1 |
Text | Text | add a shell completion doc | 36e961ad985325f3087317f42b56266189936c75 | <ide><path>docs/README.md
<ide> - [Installation](Installation.md)
<ide> - [Frequently Asked Questions](FAQ.md)
<ide> - [Common Issues](Common-Issues.md)
<add>- [`brew` Shell Completion](Shell-Completion.md)
<ide>
<ide> - [Tips and Tricks](Tips-N'-Tricks.md)
<ide> - [Bottles (binary packages)](Bottles.md)
<ide><path>docs/Shell-Completion.md
<add># Homebrew Shell Completion
<add>
<add>Homebrew comes with completion definitions for the `brew` command. Some packages also provide completion definitions for their own programs.
<add>
<add>`zsh`, `bash` and `fish` are currently supported. (Homebrew provides `brew` completions for `zsh` and `bash`; `fish` provides its own `brew` completions.)
<add>
<add>You must configure your shell to enable the completion support. This is because the Homebrew-managed completions are stored under `HOMEBREW_PREFIX`, which your system shell may not be aware of, and because it is difficult to automatically configure `bash` and `zsh` completions in a robust manner, so the Homebrew installer cannot do it for you.
<add>
<add>## Configuring Completions in `bash`
<add>To make Homebrew's completions available in `bash`, you must source the definitions as part of your shell startup. Add the following to your `~/.bashrc` file:
<add>
<add>```sh
<add>if type brew 2&>/dev/null; then
<add> for completion_file in $(brew --prefix)/etc/bash_completion.d/*; do
<add> source "$completion_file"
<add> done
<add>fi
<add>```
<add>
<add>## Configuring Completions in `zsh`
<add>To make Homebrew's completions available in `zsh`, you must get the Homebrew-managed zsh site-functions on your `$FPATH` before initializing `zsh`'s completion facility. Add the following to your `~/.zshrc` file:
<add>
<add>```sh
<add>if type brew &>/dev/null; then
<add> FPATH=$(brew --prefix)/share/zsh/site-functions:$FPATH
<add>fi
<add>```
<add>
<add>This must be done before `compinit` is called. (Note: if you are using Oh My Zsh, it will call `compinit` for you, so this must be done before you call `oh-my-zsh.sh`.)
<add>
<add>You may also need to forcibly rebuild `zcompdump`:
<add>
<add>```sh
<add> rm -f ~/.zcompdump; compinit
<add>```
<add>
<add>Additionally, if you receive "zsh compinit: insecure directories" warnings when attempting to load these completions, you may need to run this:
<add>
<add>```sh
<add> chmod go-w "$(brew --prefix)/share"
<add>```
<add>
<add>## Configuring Completions in `fish`
<add>No configuration is needed in `fish`. Friendly! | 2 |
PHP | PHP | fix minore typo | 01e398030bd894fb5c8996a6a9dd3d60794d1a74 | <ide><path>src/Illuminate/Database/Eloquent/Concerns/HasAttributes.php
<ide> protected function addCastAttributesToArray(array $attributes, array $mutatedAtt
<ide> );
<ide>
<ide> // If the attribute cast was a date or a datetime, we will serialize the date as
<del> // a string. This allows the developers to customize hwo dates are serialized
<add> // a string. This allows the developers to customize how dates are serialized
<ide> // into an array without affecting how they are persisted into the storage.
<ide> if ($attributes[$key] &&
<ide> ($value === 'date' || $value === 'datetime')) { | 1 |
PHP | PHP | add global callback handler | 188c99485dc3954be93c6c5e0f509d9cbe2ab13d | <ide><path>src/Illuminate/Database/Eloquent/Concerns/HasAttributes.php
<ide> public function isRelation($key)
<ide> */
<ide> protected function violatedLazyLoading($key)
<ide> {
<add> if (isset(static::$violatedLazyLoadingCallback)) {
<add> call_user_func(static::$violatedLazyLoadingCallback, $this, $key);
<add> return;
<add> }
<add>
<ide> throw new LazyLoadingViolationException($this, $key);
<ide> }
<ide>
<ide><path>src/Illuminate/Database/Eloquent/Model.php
<ide> abstract class Model implements Arrayable, ArrayAccess, Jsonable, JsonSerializab
<ide> */
<ide> protected static $modelsShouldPreventLazyLoading = false;
<ide>
<add> /**
<add> * The callback that is responsible for handing lazy loading violations.
<add> *
<add> * @var callable|null
<add> */
<add> protected static $violatedLazyLoadingCallback;
<add>
<ide> /**
<ide> * The name of the "created at" column.
<ide> *
<ide> public static function preventLazyLoading($value = true)
<ide> static::$modelsShouldPreventLazyLoading = $value;
<ide> }
<ide>
<add> /**
<add> * Register a callback that is responsible for handling lazy loading violations.
<add> *
<add> * @param callable $callback
<add> */
<add> public static function handleLazyLoadingViolationUsing(callable $callback)
<add> {
<add> static::$violatedLazyLoadingCallback = $callback;
<add> }
<add>
<ide> /**
<ide> * Fill the model with an array of attributes.
<ide> *
<ide><path>tests/Integration/Database/EloquentStrictLoadingTest.php
<ide> public function testStrictModeThrowsAnExceptionOnLazyLoadingInRelations()
<ide>
<ide> $models[0]->modelTwos[0]->modelThrees;
<ide> }
<add>
<add> public function testStrictModeWithCustomCallbackOnLazyLoading()
<add> {
<add> $this->expectsEvents(ViolatedLazyLoadingEvent::class);
<add>
<add> Model::handleLazyLoadingViolationUsing(function ($model, $key) {
<add> event(new ViolatedLazyLoadingEvent($model, $key));
<add> });
<add>
<add> EloquentStrictLoadingTestModel1::create();
<add> EloquentStrictLoadingTestModel1::create();
<add>
<add> $models = EloquentStrictLoadingTestModel1::get();
<add>
<add> $models[0]->modelTwos;
<add> }
<add>
<add> public function testStrictModeWithOverriddenHandlerOnLazyLoading()
<add> {
<add> $this->expectException(\RuntimeException::class);
<add> $this->expectExceptionMessage('Violated');
<add>
<add> EloquentStrictLoadingTestModel1WithCustomHandler::create();
<add> EloquentStrictLoadingTestModel1WithCustomHandler::create();
<add>
<add> $models = EloquentStrictLoadingTestModel1WithCustomHandler::get();
<add>
<add> $models[0]->modelTwos;
<add> }
<ide> }
<ide>
<ide> class EloquentStrictLoadingTestModel1 extends Model
<ide> public function modelTwos()
<ide> }
<ide> }
<ide>
<add>class EloquentStrictLoadingTestModel1WithCustomHandler extends Model
<add>{
<add> public $table = 'test_model1';
<add> public $timestamps = false;
<add> protected $guarded = [];
<add>
<add> public function modelTwos()
<add> {
<add> return $this->hasMany(EloquentStrictLoadingTestModel2::class, 'model_1_id');
<add> }
<add>
<add> protected function violatedLazyLoading($key)
<add> {
<add> throw new \RuntimeException("Violated {$key}");
<add> }
<add>}
<add>
<ide> class EloquentStrictLoadingTestModel2 extends Model
<ide> {
<ide> public $table = 'test_model2';
<ide> class EloquentStrictLoadingTestModel3 extends Model
<ide> public $timestamps = false;
<ide> protected $guarded = [];
<ide> }
<add>
<add>class ViolatedLazyLoadingEvent
<add>{
<add> public $model;
<add> public $key;
<add>
<add> public function __construct($model, $key)
<add> {
<add> $this->model = $model;
<add> $this->key = $key;
<add> }
<add>} | 3 |
Go | Go | fix wildcard expansion after slash in filename | 309056648c8263aca388e679179464e59a9f52c8 | <ide><path>builder/dockerfile/internals.go
<ide> func (b *Builder) calcCopyInfo(cmdName, origPath string, allowLocalDecompression
<ide> return copyInfos, nil
<ide> }
<ide>
<del>func containsWildcards(name string) bool {
<del> for i := 0; i < len(name); i++ {
<del> ch := name[i]
<del> if ch == '\\' {
<del> i++
<del> } else if ch == '*' || ch == '?' || ch == '[' {
<del> return true
<del> }
<del> }
<del> return false
<del>}
<del>
<ide> func (b *Builder) processImageFrom(img builder.Image) error {
<ide> if img != nil {
<ide> b.image = img.ImageID()
<ide><path>builder/dockerfile/internals_unix.go
<ide> func normaliseDest(cmdName, workingDir, requested string) (string, error) {
<ide> }
<ide> return dest, nil
<ide> }
<add>
<add>func containsWildcards(name string) bool {
<add> for i := 0; i < len(name); i++ {
<add> ch := name[i]
<add> if ch == '\\' {
<add> i++
<add> } else if ch == '*' || ch == '?' || ch == '[' {
<add> return true
<add> }
<add> }
<add> return false
<add>}
<ide><path>builder/dockerfile/internals_windows.go
<ide> func normaliseDest(cmdName, workingDir, requested string) (string, error) {
<ide> }
<ide> return dest, nil
<ide> }
<add>
<add>func containsWildcards(name string) bool {
<add> for i := 0; i < len(name); i++ {
<add> ch := name[i]
<add> if ch == '*' || ch == '?' || ch == '[' {
<add> return true
<add> }
<add> }
<add> return false
<add>}
<ide><path>integration-cli/docker_cli_build_test.go
<ide> RUN [ $(cat "/test dir/test_file6") = 'test6' ]`,
<ide> }
<ide>
<ide> func (s *DockerSuite) TestBuildCopyWildcard(c *check.C) {
<del> testRequires(c, DaemonIsLinux) // Windows doesn't have httpserver image yet
<ide> name := "testcopywildcard"
<ide> server, err := fakeStorage(map[string]string{
<ide> "robots.txt": "hello",
<ide> func (s *DockerSuite) TestBuildCopyWildcard(c *check.C) {
<ide> ctx, err := fakeContext(fmt.Sprintf(`FROM busybox
<ide> COPY file*.txt /tmp/
<ide> RUN ls /tmp/file1.txt /tmp/file2.txt
<del> RUN mkdir /tmp1
<add> RUN [ "mkdir", "/tmp1" ]
<ide> COPY dir* /tmp1/
<ide> RUN ls /tmp1/dirt /tmp1/nested_file /tmp1/nested_dir/nest_nest_file
<del> RUN mkdir /tmp2
<add> RUN [ "mkdir", "/tmp2" ]
<ide> ADD dir/*dir %s/robots.txt /tmp2/
<ide> RUN ls /tmp2/nest_nest_file /tmp2/robots.txt
<ide> `, server.URL()), | 4 |
Javascript | Javascript | increase code coverage of source_text_module.js | 6c2f78e7048ed0383ce6f66322913dab0dd7b387 | <ide><path>test/parallel/test-vm-module-basic.js
<ide> const util = require('util');
<ide> );
<ide> assert.strictEqual(util.inspect(m, { depth: -1 }), '[SourceTextModule]');
<ide> }
<add>
<add>// Check dependencies getter returns same object every time
<add>{
<add> const m = new SourceTextModule('');
<add> const dep = m.dependencySpecifiers;
<add> assert.notStrictEqual(dep, undefined);
<add> assert.strictEqual(dep, m.dependencySpecifiers);
<add>} | 1 |
Javascript | Javascript | add code example for emberarray.filter | 1787f810d530e2707f68526f192668fbec873ce5 | <ide><path>packages/@ember/-internals/runtime/lib/mixins/array.js
<ide> const ArrayMixin = Mixin.create(Enumerable, {
<ide> mapBy,
<ide>
<ide> /**
<del> Returns an array with all of the items in the enumeration that the passed
<del> function returns true for. This method corresponds to `filter()` defined in
<del> JavaScript 1.6.
<add> Returns a new array with all of the items in the enumeration that the provided
<add> callback function returns true for. This method corresponds to [Array.prototype.filter()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter).
<ide>
<del> The callback method you provide should have the following signature (all
<del> parameters are optional):
<add> The callback method should have the following signature:
<ide>
<ide> ```javascript
<ide> function(item, index, array);
<ide> const ArrayMixin = Mixin.create(Enumerable, {
<ide> - `index` is the current index in the iteration.
<ide> - `array` is the array itself.
<ide>
<del> It should return `true` to include the item in the results, `false`
<del> otherwise.
<add> All parameters are optional. The function should return `true` to include the item
<add> in the results, and `false` otherwise.
<ide>
<del> Note that in addition to a callback, you can also pass an optional target
<del> object that will be set as `this` on the context. This is a good way
<del> to give your iterator function access to the current object.
<add> Example:
<add>
<add> ```javascript
<add> function isAdult(person) {
<add> return person.age > 18;
<add> };
<add>
<add> let people = Ember.A([{ name: 'John', age: 14 }, { name: 'Joan', age: 45 }]);
<add>
<add> people.filter(isAdult); // returns [{ name: 'Joan', age: 45 }];
<add> ```
<add>
<add> Note that in addition to a callback, you can pass an optional target object
<add> that will be set as `this` on the context. This is a good way to give your
<add> iterator function access to the current object.
<ide>
<ide> @method filter
<ide> @param {Function} callback The callback to execute | 1 |
Javascript | Javascript | reduce run time for test-benchmark-http | daead5a76731c9201daa32a146c2e89f3d522670 | <ide><path>test/sequential/test-benchmark-http.js
<ide> const env = Object.assign({}, process.env,
<ide>
<ide> const child = fork(runjs, ['--set', 'benchmarker=test-double',
<ide> '--set', 'c=1',
<add> '--set', 'chunkedEnc=true',
<ide> '--set', 'chunks=0',
<ide> '--set', 'dur=0.1',
<ide> '--set', 'key=""',
<ide> '--set', 'len=1',
<add> '--set', 'method=write',
<ide> '--set', 'n=1',
<add> '--set', 'res=normal',
<ide> 'http'],
<ide> {env});
<ide> child.on('exit', (code, signal) => { | 1 |
Python | Python | escape hyperlink urls on lookup | fe840a34ff79f6fd996219ff8325b5f07cc3f62b | <ide><path>rest_framework/relations.py
<ide> def to_internal_value(self, data):
<ide> if data.startswith(prefix):
<ide> data = '/' + data[len(prefix):]
<ide>
<del> data = uri_to_iri(data)
<add> data = uri_to_iri(parse.unquote(data))
<ide>
<ide> try:
<ide> match = resolve(data)
<ide><path>tests/test_relations.py
<ide> def setUp(self):
<ide> self.queryset = MockQueryset([
<ide> MockObject(pk=1, name='foobar'),
<ide> MockObject(pk=2, name='bazABCqux'),
<add> MockObject(pk=2, name='bazABC qux'),
<ide> ])
<ide> self.field = serializers.HyperlinkedRelatedField(
<ide> view_name='example',
<ide> def test_hyperlinked_related_lookup_url_encoded_exists(self):
<ide> instance = self.field.to_internal_value('http://example.org/example/baz%41%42%43qux/')
<ide> assert instance is self.queryset.items[1]
<ide>
<add> def test_hyperlinked_related_lookup_url_space_encoded_exists(self):
<add> instance = self.field.to_internal_value('http://example.org/example/bazABC%20qux/')
<add> assert instance is self.queryset.items[2]
<add>
<ide> def test_hyperlinked_related_lookup_does_not_exist(self):
<ide> with pytest.raises(serializers.ValidationError) as excinfo:
<ide> self.field.to_internal_value('http://example.org/example/doesnotexist/') | 2 |
Text | Text | remove placholder text | 6d9d0c992e682fa4398bd2114a8eeb6a676c57ed | <ide><path>curriculum/challenges/english/01-responsive-web-design/applied-visual-design/make-motion-more-natural-using-a-bezier-curve.english.md
<ide> Change value of the <code>animation-timing-function</code> of the element with t
<ide> ```yml
<ide> tests:
<ide> - text: 'The value of the <code>animation-timing-function</code> property for the element with the id <code>green</code> should be a <code>cubic-bezier</code> function with x1, y1, x2, y2 values as specified.'
<del> testString: 'assert($("#green").css("animation-timing-function") == "cubic-bezier(0.311, 0.441, 0.444, 1.649)", "The value of the <code>animation-timing-function</code> property for the element with the id <code>green</code> should be a <code>cubic-bezier</code> function with x1, y1, x2, #{{TEXT}}'
<add> testString: 'assert($("#green").css("animation-timing-function") == "cubic-bezier(0.311, 0.441, 0.444, 1.649)", "The value of the <code>animation-timing-function</code> property for the element with the id <code>green</code> should be a <code>cubic-bezier</code> function with x1, y1, x2, y2 values as specified.'
<ide>
<ide> ```
<ide> | 1 |
Python | Python | add multiple http statuses | 6bd25c09a642dea9918bd85a3c40671fbb8feb2d | <ide><path>rest_framework/status.py
<ide> def is_server_error(code):
<ide> HTTP_205_RESET_CONTENT = 205
<ide> HTTP_206_PARTIAL_CONTENT = 206
<ide> HTTP_207_MULTI_STATUS = 207
<add>HTTP_208_ALREADY_REPORTED = 208
<add>HTTP_226_IM_USED = 226
<ide> HTTP_300_MULTIPLE_CHOICES = 300
<ide> HTTP_301_MOVED_PERMANENTLY = 301
<ide> HTTP_302_FOUND = 302
<ide> def is_server_error(code):
<ide> HTTP_422_UNPROCESSABLE_ENTITY = 422
<ide> HTTP_423_LOCKED = 423
<ide> HTTP_424_FAILED_DEPENDENCY = 424
<add>HTTP_426_UPGRADE_REQUIRED = 426
<ide> HTTP_428_PRECONDITION_REQUIRED = 428
<ide> HTTP_429_TOO_MANY_REQUESTS = 429
<ide> HTTP_431_REQUEST_HEADER_FIELDS_TOO_LARGE = 431
<ide> def is_server_error(code):
<ide> HTTP_503_SERVICE_UNAVAILABLE = 503
<ide> HTTP_504_GATEWAY_TIMEOUT = 504
<ide> HTTP_505_HTTP_VERSION_NOT_SUPPORTED = 505
<add>HTTP_506_VARIANT_ALSO_NEGOTIATES = 506
<ide> HTTP_507_INSUFFICIENT_STORAGE = 507
<add>HTTP_508_LOOP_DETECTED = 508
<add>HTTP_509_BANDWIDTH_LIMIT_EXCEEDED = 509
<add>HTTP_510_NOT_EXTENDED = 510
<ide> HTTP_511_NETWORK_AUTHENTICATION_REQUIRED = 511 | 1 |
Text | Text | fix minor capitalzation typo | 25be6dc0279cc0348bbc7743a85c712f74dbf897 | <ide><path>CHANGELOG.md
<ide> Each of these changes will continue to work as before with a new warning until t
<ide>
<ide> * Upgrade Commoner so `require` statements are no longer relativized when passing through the transformer. This was a feature needed when building React, but doesn't translate well for other consumers of `bin/jsx`.
<ide> * Upgraded our dependencies on Commoner and Recast so they use a different directory for their cache.
<del>* Freeze our esprima dependency.
<add>* Freeze our Esprima dependency.
<ide>
<ide>
<ide> ## 0.3.2 (May 31, 2013) | 1 |
Ruby | Ruby | implement equality for `bindparam` | d36a769234911c8374e09069eb054d4c60eb1b99 | <ide><path>lib/arel/nodes/bind_param.rb
<ide> module Arel
<ide> module Nodes
<ide> class BindParam < Node
<add> def ==(other)
<add> other.is_a?(BindParam)
<add> end
<ide> end
<ide> end
<ide> end
<ide><path>test/nodes/test_bind_param.rb
<add>require 'helper'
<add>
<add>module Arel
<add> module Nodes
<add> describe 'BindParam' do
<add> it 'is equal to other bind params' do
<add> BindParam.new.must_equal(BindParam.new)
<add> end
<add>
<add> it 'is not equal to other nodes' do
<add> BindParam.new.wont_equal(Node.new)
<add> end
<add> end
<add> end
<add>end | 2 |
Python | Python | add pdf task + full dmg build for paver script | bc70602d5fb6adcf3cfe19883aeac72ef015d3ae | <ide><path>pavement.py
<ide> def html(options):
<ide> HTML_DESTDIR.rmtree()
<ide> builtdocs.copytree(HTML_DESTDIR)
<ide>
<add>def _latex_paths():
<add> """look up the options that determine where all of the files are."""
<add> opts = options
<add> docroot = paver.path.path(opts.get('docroot', 'docs'))
<add> if not docroot.exists():
<add> raise BuildFailure("Sphinx documentation root (%s) does not exist."
<add> % docroot)
<add> builddir = docroot / opts.get("builddir", ".build")
<add> builddir.mkdir()
<add> srcdir = docroot / opts.get("sourcedir", "")
<add> if not srcdir.exists():
<add> raise BuildFailure("Sphinx source file dir (%s) does not exist"
<add> % srcdir)
<add> latexdir = builddir / "latex"
<add> latexdir.mkdir()
<add> return Bunch(locals())
<add>
<add>@task
<add>def latex():
<add> """Build samplerate's documentation and install it into
<add> scikits/samplerate/docs"""
<add> paths = _latex_paths()
<add> sphinxopts = ['', '-b', 'latex', paths.srcdir, paths.latexdir]
<add> #dry("sphinx-build %s" % (" ".join(sphinxopts),), sphinx.main, sphinxopts)
<add> subprocess.check_call(["make", "latex"], cwd="doc")
<add>
<add>@task
<add>@needs('latex')
<add>def pdf():
<add> paths = _latex_paths()
<add> def build_latex():
<add> subprocess.check_call(["make", "all-pdf"], cwd=paths.latexdir)
<add> dry("Build pdf doc", build_latex)
<add>
<add> PDF_DESTDIR.rmtree()
<add> PDF_DESTDIR.makedirs()
<add>
<add> user = paths.latexdir / "numpy-user.pdf"
<add> user.copy(PDF_DESTDIR / "userguide.pdf")
<add> ref = paths.latexdir / "numpy-ref.pdf"
<add> ref.copy(PDF_DESTDIR / "reference.pdf")
<add>
<ide> @task
<ide> def sdist():
<ide> # To be sure to bypass paver when building sdist... paver + numpy.distutils
<ide> def bdist_mpkg():
<ide> sh("python setupegg.py bdist_mpkg")
<ide>
<ide> @task
<del>#@needs("bdist_mpkg", "doc")
<add>@needs("bdist_mpkg", "pdf")
<ide> def dmg():
<ide> pyver = ".".join([str(i) for i in sys.version_info[:2]])
<ide>
<ide> def dmg():
<ide> mpkg_source.copytree(content / mpkg_tn)
<ide>
<ide> # Copy docs into image source
<del> html_docs = HTML_DESTDIR
<del> html_docs.copytree(content / "Documentation" / "html")
<add>
<add> #html_docs = HTML_DESTDIR
<add> #html_docs.copytree(content / "Documentation" / "html")
<add>
<add> pdf_docs = DMG_CONTENT / "Documentation"
<add> pdf_docs.rmtree()
<add> pdf_docs.makedirs()
<add>
<add> user = PDF_DESTDIR / "userguide.pdf"
<add> user.copy(pdf_docs / "userguide.pdf")
<add> ref = PDF_DESTDIR / "reference.pdf"
<add> ref.copy(pdf_docs / "reference.pdf")
<ide>
<ide> # Build the dmg
<ide> cmd = ["./create-dmg", "--window-size", "500", "500", "--background", | 1 |
Text | Text | fix misplaced entries in test/common doc | 52491529ba940faa6d69b20bb5fe89ae3b3e137c | <ide><path>test/common/README.md
<ide> Tests whether `name`, `expected`, and `code` are part of a raised warning. If
<ide> an expected warning does not have a code then `common.noWarnCode` can be used
<ide> to indicate this.
<ide>
<del>### noWarnCode
<del>See `common.expectWarning()` for usage.
<del>
<ide> ### fileExists(pathname)
<ide> * pathname [<string>]
<ide> * return [<boolean>]
<ide> consisting of all `ArrayBufferView` and an `ArrayBuffer`.
<ide>
<ide> Returns the file name and line number for the provided Function.
<ide>
<del>### runWithInvalidFD(func)
<del>* `func` [<Function>]
<add>### getTTYfd()
<ide>
<del>Runs `func` with an invalid file descriptor that is an unsigned integer and
<del>can be used to trigger `EBADF` as the first argument. If no such file
<del>descriptor could be generated, a skip message will be printed and the `func`
<del>will not be run.
<add>Attempts to get a valid TTY file descriptor. Returns `-1` if it fails.
<add>
<add>The TTY file descriptor is assumed to be capable of being writable.
<ide>
<ide> ### globalCheck
<ide> * [<boolean>]
<ide> Returns `true` if the exit code `exitCode` and/or signal name `signal` represent
<ide> the exit code and/or signal name of a node process that aborted, `false`
<ide> otherwise.
<ide>
<add>### noWarnCode
<add>See `common.expectWarning()` for usage.
<add>
<ide> ### opensslCli
<ide> * [<boolean>]
<ide>
<ide> original state after calling [`common.hijackStdOut()`][].
<ide>
<ide> Path to the 'root' directory. either `/` or `c:\\` (windows)
<ide>
<add>### runWithInvalidFD(func)
<add>* `func` [<Function>]
<add>
<add>Runs `func` with an invalid file descriptor that is an unsigned integer and
<add>can be used to trigger `EBADF` as the first argument. If no such file
<add>descriptor could be generated, a skip message will be printed and the `func`
<add>will not be run.
<add>
<ide> ### skip(msg)
<ide> * `msg` [<string>]
<ide>
<ide> The realpath of the testing temporary directory.
<ide>
<ide> Deletes and recreates the testing temporary directory.
<ide>
<del>### getTTYfd()
<del>
<del>Attempts to get a valid TTY file descriptor. Returns `-1` if it fails.
<del>
<del>The TTY file descriptor is assumed to be capable of being writable.
<del>
<ide> ## WPT Module
<ide>
<ide> The wpt.js module is a port of parts of | 1 |
PHP | PHP | correct all the assert orders | ed5b3dec0fee149be88a2826b34778866cab6725 | <ide><path>tests/TestCase/Utility/InflectorTest.php
<ide> public function tearDown()
<ide> */
<ide> public function testInflectingSingulars()
<ide> {
<del> $this->assertEquals(Inflector::singularize('categorias'), 'categoria');
<del> $this->assertEquals(Inflector::singularize('menus'), 'menu');
<del> $this->assertEquals(Inflector::singularize('news'), 'news');
<del> $this->assertEquals(Inflector::singularize('food_menus'), 'food_menu');
<del> $this->assertEquals(Inflector::singularize('Menus'), 'Menu');
<del> $this->assertEquals(Inflector::singularize('FoodMenus'), 'FoodMenu');
<del> $this->assertEquals(Inflector::singularize('houses'), 'house');
<del> $this->assertEquals(Inflector::singularize('powerhouses'), 'powerhouse');
<del> $this->assertEquals(Inflector::singularize('quizzes'), 'quiz');
<del> $this->assertEquals(Inflector::singularize('Buses'), 'Bus');
<del> $this->assertEquals(Inflector::singularize('buses'), 'bus');
<del> $this->assertEquals(Inflector::singularize('matrix_rows'), 'matrix_row');
<del> $this->assertEquals(Inflector::singularize('matrices'), 'matrix');
<del> $this->assertEquals(Inflector::singularize('vertices'), 'vertex');
<del> $this->assertEquals(Inflector::singularize('indices'), 'index');
<del> $this->assertEquals(Inflector::singularize('Aliases'), 'Alias');
<del> $this->assertEquals(Inflector::singularize('Alias'), 'Alias');
<del> $this->assertEquals(Inflector::singularize('Media'), 'Media');
<del> $this->assertEquals(Inflector::singularize('NodeMedia'), 'NodeMedia');
<del> $this->assertEquals(Inflector::singularize('alumni'), 'alumnus');
<del> $this->assertEquals(Inflector::singularize('bacilli'), 'bacillus');
<del> $this->assertEquals(Inflector::singularize('cacti'), 'cactus');
<del> $this->assertEquals(Inflector::singularize('foci'), 'focus');
<del> $this->assertEquals(Inflector::singularize('fungi'), 'fungus');
<del> $this->assertEquals(Inflector::singularize('nuclei'), 'nucleus');
<del> $this->assertEquals(Inflector::singularize('octopuses'), 'octopus');
<del> $this->assertEquals(Inflector::singularize('radii'), 'radius');
<del> $this->assertEquals(Inflector::singularize('stimuli'), 'stimulus');
<del> $this->assertEquals(Inflector::singularize('syllabi'), 'syllabus');
<del> $this->assertEquals(Inflector::singularize('termini'), 'terminus');
<del> $this->assertEquals(Inflector::singularize('viri'), 'virus');
<del> $this->assertEquals(Inflector::singularize('people'), 'person');
<del> $this->assertEquals(Inflector::singularize('gloves'), 'glove');
<del> $this->assertEquals(Inflector::singularize('doves'), 'dove');
<del> $this->assertEquals(Inflector::singularize('lives'), 'life');
<del> $this->assertEquals(Inflector::singularize('knives'), 'knife');
<del> $this->assertEquals(Inflector::singularize('wolves'), 'wolf');
<del> $this->assertEquals(Inflector::singularize('slaves'), 'slave');
<del> $this->assertEquals(Inflector::singularize('shelves'), 'shelf');
<del> $this->assertEquals(Inflector::singularize('taxis'), 'taxi');
<del> $this->assertEquals(Inflector::singularize('taxes'), 'tax');
<del> $this->assertEquals(Inflector::singularize('Taxes'), 'Tax');
<del> $this->assertEquals(Inflector::singularize('AwesomeTaxes'), 'AwesomeTax');
<del> $this->assertEquals(Inflector::singularize('faxes'), 'fax');
<del> $this->assertEquals(Inflector::singularize('waxes'), 'wax');
<del> $this->assertEquals(Inflector::singularize('niches'), 'niche');
<del> $this->assertEquals(Inflector::singularize('caves'), 'cave');
<del> $this->assertEquals(Inflector::singularize('graves'), 'grave');
<del> $this->assertEquals(Inflector::singularize('waves'), 'wave');
<del> $this->assertEquals(Inflector::singularize('bureaus'), 'bureau');
<del> $this->assertEquals(Inflector::singularize('genetic_analyses'), 'genetic_analysis');
<del> $this->assertEquals(Inflector::singularize('doctor_diagnoses'), 'doctor_diagnosis');
<del> $this->assertEquals(Inflector::singularize('parantheses'), 'paranthesis');
<del> $this->assertEquals(Inflector::singularize('Causes'), 'Cause');
<del> $this->assertEquals(Inflector::singularize('colossuses'), 'colossus');
<del> $this->assertEquals(Inflector::singularize('diagnoses'), 'diagnosis');
<del> $this->assertEquals(Inflector::singularize('bases'), 'basis');
<del> $this->assertEquals(Inflector::singularize('analyses'), 'analysis');
<del> $this->assertEquals(Inflector::singularize('curves'), 'curve');
<del> $this->assertEquals(Inflector::singularize('cafes'), 'cafe');
<del> $this->assertEquals(Inflector::singularize('roofs'), 'roof');
<del> $this->assertEquals(Inflector::singularize('foes'), 'foe');
<del> $this->assertEquals(Inflector::singularize('databases'), 'database');
<del> $this->assertEquals(Inflector::singularize('cookies'), 'cookie');
<del> $this->assertEquals(Inflector::singularize('thieves'), 'thief');
<del> $this->assertEquals(Inflector::singularize('potatoes'), 'potato');
<del> $this->assertEquals(Inflector::singularize('heroes'), 'hero');
<del> $this->assertEquals(Inflector::singularize('buffaloes'), 'buffalo');
<del> $this->assertEquals(Inflector::singularize('babies'), 'baby');
<del> $this->assertEquals(Inflector::singularize('teeth'), 'tooth');
<del> $this->assertEquals(Inflector::singularize('geese'), 'goose');
<del> $this->assertEquals(Inflector::singularize('feet'), 'foot');
<del> $this->assertEquals(Inflector::singularize('objectives'), 'objective');
<del> $this->assertEquals(Inflector::singularize('archives'), 'archive');
<del> $this->assertEquals(Inflector::singularize('briefs'), 'brief');
<del> $this->assertEquals(Inflector::singularize('quotas'), 'quota');
<del> $this->assertEquals(Inflector::singularize('curves'), 'curve');
<del> $this->assertEquals(Inflector::singularize('body_curves'), 'body_curve');
<del> $this->assertEquals(Inflector::singularize('metadata'), 'metadata');
<del> $this->assertEquals(Inflector::singularize('files_metadata'), 'files_metadata');
<del> $this->assertEquals(Inflector::singularize('addresses'), 'address');
<del> $this->assertEquals(Inflector::singularize(''), '');
<add> $this->assertEquals('categoria', Inflector::singularize('categorias'));
<add> $this->assertEquals('menu', Inflector::singularize('menus'));
<add> $this->assertEquals('news', Inflector::singularize('news'));
<add> $this->assertEquals('food_menu', Inflector::singularize('food_menus'));
<add> $this->assertEquals('Menu', Inflector::singularize('Menus'));
<add> $this->assertEquals('FoodMenu', Inflector::singularize('FoodMenus'));
<add> $this->assertEquals('house', Inflector::singularize('houses'));
<add> $this->assertEquals('powerhouse', Inflector::singularize('powerhouses'));
<add> $this->assertEquals('quiz', Inflector::singularize('quizzes'));
<add> $this->assertEquals('Bus', Inflector::singularize('Buses'));
<add> $this->assertEquals('bus', Inflector::singularize('buses'));
<add> $this->assertEquals('matrix_row', Inflector::singularize('matrix_rows'));
<add> $this->assertEquals('matrix', Inflector::singularize('matrices'));
<add> $this->assertEquals('vertex', Inflector::singularize('vertices'));
<add> $this->assertEquals('index', Inflector::singularize('indices'));
<add> $this->assertEquals('Alias', Inflector::singularize('Aliases'));
<add> $this->assertEquals('Alias', Inflector::singularize('Alias'));
<add> $this->assertEquals('Media', Inflector::singularize('Media'));
<add> $this->assertEquals('NodeMedia', Inflector::singularize('NodeMedia'));
<add> $this->assertEquals('alumnus', Inflector::singularize('alumni'));
<add> $this->assertEquals('bacillus', Inflector::singularize('bacilli'));
<add> $this->assertEquals('cactus', Inflector::singularize('cacti'));
<add> $this->assertEquals('focus', Inflector::singularize('foci'));
<add> $this->assertEquals('fungus', Inflector::singularize('fungi'));
<add> $this->assertEquals('nucleus', Inflector::singularize('nuclei'));
<add> $this->assertEquals('octopus', Inflector::singularize('octopuses'));
<add> $this->assertEquals('radius', Inflector::singularize('radii'));
<add> $this->assertEquals('stimulus', Inflector::singularize('stimuli'));
<add> $this->assertEquals('syllabus', Inflector::singularize('syllabi'));
<add> $this->assertEquals('terminus', Inflector::singularize('termini'));
<add> $this->assertEquals('virus', Inflector::singularize('viri'));
<add> $this->assertEquals('person', Inflector::singularize('people'));
<add> $this->assertEquals('glove', Inflector::singularize('gloves'));
<add> $this->assertEquals('dove', Inflector::singularize('doves'));
<add> $this->assertEquals('life', Inflector::singularize('lives'));
<add> $this->assertEquals('knife', Inflector::singularize('knives'));
<add> $this->assertEquals('wolf', Inflector::singularize('wolves'));
<add> $this->assertEquals('slave', Inflector::singularize('slaves'));
<add> $this->assertEquals('shelf', Inflector::singularize('shelves'));
<add> $this->assertEquals('taxi', Inflector::singularize('taxis'));
<add> $this->assertEquals('tax', Inflector::singularize('taxes'));
<add> $this->assertEquals('Tax', Inflector::singularize('Taxes'));
<add> $this->assertEquals('AwesomeTax', Inflector::singularize('AwesomeTaxes'));
<add> $this->assertEquals('fax', Inflector::singularize('faxes'));
<add> $this->assertEquals('wax', Inflector::singularize('waxes'));
<add> $this->assertEquals('niche', Inflector::singularize('niches'));
<add> $this->assertEquals('cave', Inflector::singularize('caves'));
<add> $this->assertEquals('grave', Inflector::singularize('graves'));
<add> $this->assertEquals('wave', Inflector::singularize('waves'));
<add> $this->assertEquals('bureau', Inflector::singularize('bureaus'));
<add> $this->assertEquals('genetic_analysis', Inflector::singularize('genetic_analyses'));
<add> $this->assertEquals('doctor_diagnosis', Inflector::singularize('doctor_diagnoses'));
<add> $this->assertEquals('paranthesis', Inflector::singularize('parantheses'));
<add> $this->assertEquals('Cause', Inflector::singularize('Causes'));
<add> $this->assertEquals('colossus', Inflector::singularize('colossuses'));
<add> $this->assertEquals('diagnosis', Inflector::singularize('diagnoses'));
<add> $this->assertEquals('basis', Inflector::singularize('bases'));
<add> $this->assertEquals('analysis', Inflector::singularize('analyses'));
<add> $this->assertEquals('curve', Inflector::singularize('curves'));
<add> $this->assertEquals('cafe', Inflector::singularize('cafes'));
<add> $this->assertEquals('roof', Inflector::singularize('roofs'));
<add> $this->assertEquals('foe', Inflector::singularize('foes'));
<add> $this->assertEquals('database', Inflector::singularize('databases'));
<add> $this->assertEquals('cookie', Inflector::singularize('cookies'));
<add> $this->assertEquals('thief', Inflector::singularize('thieves'));
<add> $this->assertEquals('potato', Inflector::singularize('potatoes'));
<add> $this->assertEquals('hero', Inflector::singularize('heroes'));
<add> $this->assertEquals('buffalo', Inflector::singularize('buffaloes'));
<add> $this->assertEquals('baby', Inflector::singularize('babies'));
<add> $this->assertEquals('tooth', Inflector::singularize('teeth'));
<add> $this->assertEquals('goose', Inflector::singularize('geese'));
<add> $this->assertEquals('foot', Inflector::singularize('feet'));
<add> $this->assertEquals('objective', Inflector::singularize('objectives'));
<add> $this->assertEquals('archive', Inflector::singularize('archives'));
<add> $this->assertEquals('brief', Inflector::singularize('briefs'));
<add> $this->assertEquals('quota', Inflector::singularize('quotas'));
<add> $this->assertEquals('curve', Inflector::singularize('curves'));
<add> $this->assertEquals('body_curve', Inflector::singularize('body_curves'));
<add> $this->assertEquals('metadata', Inflector::singularize('metadata'));
<add> $this->assertEquals('files_metadata', Inflector::singularize('files_metadata'));
<add> $this->assertEquals('address', Inflector::singularize('addresses'));
<add> $this->assertEquals('', Inflector::singularize(''));
<ide> }
<ide>
<ide> /**
<ide> public function testInflectingSingulars()
<ide> */
<ide> public function testInflectingPlurals()
<ide> {
<del> $this->assertEquals(Inflector::pluralize('axman'), 'axmen');
<del> $this->assertEquals(Inflector::pluralize('man'), 'men');
<del> $this->assertEquals(Inflector::pluralize('woman'), 'women');
<del> $this->assertEquals(Inflector::pluralize('human'), 'humans');
<del> $this->assertEquals(Inflector::pluralize('axman'), 'axmen');
<del> $this->assertEquals(Inflector::pluralize('man'), 'men');
<del> $this->assertEquals(Inflector::pluralize('woman'), 'women');
<del> $this->assertEquals(Inflector::pluralize('human'), 'humans');
<del> $this->assertEquals(Inflector::pluralize('categoria'), 'categorias');
<del> $this->assertEquals(Inflector::pluralize('house'), 'houses');
<del> $this->assertEquals(Inflector::pluralize('powerhouse'), 'powerhouses');
<del> $this->assertEquals(Inflector::pluralize('Bus'), 'Buses');
<del> $this->assertEquals(Inflector::pluralize('bus'), 'buses');
<del> $this->assertEquals(Inflector::pluralize('menu'), 'menus');
<del> $this->assertEquals(Inflector::pluralize('news'), 'news');
<del> $this->assertEquals(Inflector::pluralize('food_menu'), 'food_menus');
<del> $this->assertEquals(Inflector::pluralize('Menu'), 'Menus');
<del> $this->assertEquals(Inflector::pluralize('FoodMenu'), 'FoodMenus');
<del> $this->assertEquals(Inflector::pluralize('quiz'), 'quizzes');
<del> $this->assertEquals(Inflector::pluralize('matrix_row'), 'matrix_rows');
<del> $this->assertEquals(Inflector::pluralize('matrix'), 'matrices');
<del> $this->assertEquals(Inflector::pluralize('vertex'), 'vertices');
<del> $this->assertEquals(Inflector::pluralize('index'), 'indices');
<del> $this->assertEquals(Inflector::pluralize('Alias'), 'Aliases');
<del> $this->assertEquals(Inflector::pluralize('Aliases'), 'Aliases');
<del> $this->assertEquals(Inflector::pluralize('Media'), 'Media');
<del> $this->assertEquals(Inflector::pluralize('NodeMedia'), 'NodeMedia');
<del> $this->assertEquals(Inflector::pluralize('alumnus'), 'alumni');
<del> $this->assertEquals(Inflector::pluralize('bacillus'), 'bacilli');
<del> $this->assertEquals(Inflector::pluralize('cactus'), 'cacti');
<del> $this->assertEquals(Inflector::pluralize('focus'), 'foci');
<del> $this->assertEquals(Inflector::pluralize('fungus'), 'fungi');
<del> $this->assertEquals(Inflector::pluralize('nucleus'), 'nuclei');
<del> $this->assertEquals(Inflector::pluralize('octopus'), 'octopuses');
<del> $this->assertEquals(Inflector::pluralize('radius'), 'radii');
<del> $this->assertEquals(Inflector::pluralize('stimulus'), 'stimuli');
<del> $this->assertEquals(Inflector::pluralize('syllabus'), 'syllabi');
<del> $this->assertEquals(Inflector::pluralize('terminus'), 'termini');
<del> $this->assertEquals(Inflector::pluralize('virus'), 'viri');
<del> $this->assertEquals(Inflector::pluralize('person'), 'people');
<del> $this->assertEquals(Inflector::pluralize('people'), 'people');
<del> $this->assertEquals(Inflector::pluralize('glove'), 'gloves');
<del> $this->assertEquals(Inflector::pluralize('crisis'), 'crises');
<del> $this->assertEquals(Inflector::pluralize('tax'), 'taxes');
<del> $this->assertEquals(Inflector::pluralize('wave'), 'waves');
<del> $this->assertEquals(Inflector::pluralize('bureau'), 'bureaus');
<del> $this->assertEquals(Inflector::pluralize('cafe'), 'cafes');
<del> $this->assertEquals(Inflector::pluralize('roof'), 'roofs');
<del> $this->assertEquals(Inflector::pluralize('foe'), 'foes');
<del> $this->assertEquals(Inflector::pluralize('cookie'), 'cookies');
<del> $this->assertEquals(Inflector::pluralize('wolf'), 'wolves');
<del> $this->assertEquals(Inflector::pluralize('thief'), 'thieves');
<del> $this->assertEquals(Inflector::pluralize('potato'), 'potatoes');
<del> $this->assertEquals(Inflector::pluralize('hero'), 'heroes');
<del> $this->assertEquals(Inflector::pluralize('buffalo'), 'buffaloes');
<del> $this->assertEquals(Inflector::pluralize('tooth'), 'teeth');
<del> $this->assertEquals(Inflector::pluralize('goose'), 'geese');
<del> $this->assertEquals(Inflector::pluralize('foot'), 'feet');
<del> $this->assertEquals(Inflector::pluralize('objective'), 'objectives');
<del> $this->assertEquals(Inflector::pluralize('brief'), 'briefs');
<del> $this->assertEquals(Inflector::pluralize('quota'), 'quotas');
<del> $this->assertEquals(Inflector::pluralize('curve'), 'curves');
<del> $this->assertEquals(Inflector::pluralize('body_curve'), 'body_curves');
<del> $this->assertEquals(Inflector::pluralize('metadata'), 'metadata');
<del> $this->assertEquals(Inflector::pluralize('files_metadata'), 'files_metadata');
<del> $this->assertEquals(Inflector::pluralize('stadia'), 'stadia');
<del> $this->assertEquals(Inflector::pluralize('Address'), 'Addresses');
<del> $this->assertEquals(Inflector::pluralize(''), '');
<add> $this->assertEquals('axmen', Inflector::pluralize('axman'));
<add> $this->assertEquals('men', Inflector::pluralize('man'));
<add> $this->assertEquals('women', Inflector::pluralize('woman'));
<add> $this->assertEquals('humans', Inflector::pluralize('human'));
<add> $this->assertEquals('axmen', Inflector::pluralize('axman'));
<add> $this->assertEquals('men', Inflector::pluralize('man'));
<add> $this->assertEquals('women', Inflector::pluralize('woman'));
<add> $this->assertEquals('humans', Inflector::pluralize('human'));
<add> $this->assertEquals('categorias', Inflector::pluralize('categoria'));
<add> $this->assertEquals('houses', Inflector::pluralize('house'));
<add> $this->assertEquals('powerhouses', Inflector::pluralize('powerhouse'));
<add> $this->assertEquals('Buses', Inflector::pluralize('Bus'));
<add> $this->assertEquals('buses', Inflector::pluralize('bus'));
<add> $this->assertEquals('menus', Inflector::pluralize('menu'));
<add> $this->assertEquals('news', Inflector::pluralize('news'));
<add> $this->assertEquals('food_menus', Inflector::pluralize('food_menu'));
<add> $this->assertEquals('Menus', Inflector::pluralize('Menu'));
<add> $this->assertEquals('FoodMenus', Inflector::pluralize('FoodMenu'));
<add> $this->assertEquals('quizzes', Inflector::pluralize('quiz'));
<add> $this->assertEquals('matrix_rows', Inflector::pluralize('matrix_row'));
<add> $this->assertEquals('matrices', Inflector::pluralize('matrix'));
<add> $this->assertEquals('vertices', Inflector::pluralize('vertex'));
<add> $this->assertEquals('indices', Inflector::pluralize('index'));
<add> $this->assertEquals('Aliases', Inflector::pluralize('Alias'));
<add> $this->assertEquals('Aliases', Inflector::pluralize('Aliases'));
<add> $this->assertEquals('Media', Inflector::pluralize('Media'));
<add> $this->assertEquals('NodeMedia', Inflector::pluralize('NodeMedia'));
<add> $this->assertEquals('alumni', Inflector::pluralize('alumnus'));
<add> $this->assertEquals('bacilli', Inflector::pluralize('bacillus'));
<add> $this->assertEquals('cacti', Inflector::pluralize('cactus'));
<add> $this->assertEquals('foci', Inflector::pluralize('focus'));
<add> $this->assertEquals('fungi', Inflector::pluralize('fungus'));
<add> $this->assertEquals('nuclei', Inflector::pluralize('nucleus'));
<add> $this->assertEquals('octopuses', Inflector::pluralize('octopus'));
<add> $this->assertEquals('radii', Inflector::pluralize('radius'));
<add> $this->assertEquals('stimuli', Inflector::pluralize('stimulus'));
<add> $this->assertEquals('syllabi', Inflector::pluralize('syllabus'));
<add> $this->assertEquals('termini', Inflector::pluralize('terminus'));
<add> $this->assertEquals('viri', Inflector::pluralize('virus'));
<add> $this->assertEquals('people', Inflector::pluralize('person'));
<add> $this->assertEquals('people', Inflector::pluralize('people'));
<add> $this->assertEquals('gloves', Inflector::pluralize('glove'));
<add> $this->assertEquals('crises', Inflector::pluralize('crisis'));
<add> $this->assertEquals('taxes', Inflector::pluralize('tax'));
<add> $this->assertEquals('waves', Inflector::pluralize('wave'));
<add> $this->assertEquals('bureaus', Inflector::pluralize('bureau'));
<add> $this->assertEquals('cafes', Inflector::pluralize('cafe'));
<add> $this->assertEquals('roofs', Inflector::pluralize('roof'));
<add> $this->assertEquals('foes', Inflector::pluralize('foe'));
<add> $this->assertEquals('cookies', Inflector::pluralize('cookie'));
<add> $this->assertEquals('wolves', Inflector::pluralize('wolf'));
<add> $this->assertEquals('thieves', Inflector::pluralize('thief'));
<add> $this->assertEquals('potatoes', Inflector::pluralize('potato'));
<add> $this->assertEquals('heroes', Inflector::pluralize('hero'));
<add> $this->assertEquals('buffaloes', Inflector::pluralize('buffalo'));
<add> $this->assertEquals('teeth', Inflector::pluralize('tooth'));
<add> $this->assertEquals('geese', Inflector::pluralize('goose'));
<add> $this->assertEquals('feet', Inflector::pluralize('foot'));
<add> $this->assertEquals('objectives', Inflector::pluralize('objective'));
<add> $this->assertEquals('briefs', Inflector::pluralize('brief'));
<add> $this->assertEquals('quotas', Inflector::pluralize('quota'));
<add> $this->assertEquals('curves', Inflector::pluralize('curve'));
<add> $this->assertEquals('body_curves', Inflector::pluralize('body_curve'));
<add> $this->assertEquals('metadata', Inflector::pluralize('metadata'));
<add> $this->assertEquals('files_metadata', Inflector::pluralize('files_metadata'));
<add> $this->assertEquals('stadia', Inflector::pluralize('stadia'));
<add> $this->assertEquals('Addresses', Inflector::pluralize('Address'));
<add> $this->assertEquals('', Inflector::pluralize(''));
<ide> }
<ide>
<ide> /**
<ide> public function testInflectorSlugWithMapOverridingDefault()
<ide> */
<ide> public function testInflectorUnderscore()
<ide> {
<del> $this->assertSame(Inflector::underscore('TestThing'), 'test_thing');
<del> $this->assertSame(Inflector::underscore('testThing'), 'test_thing');
<del> $this->assertSame(Inflector::underscore('TestThingExtra'), 'test_thing_extra');
<del> $this->assertSame(Inflector::underscore('testThingExtra'), 'test_thing_extra');
<add> $this->assertSame('test_thing', Inflector::underscore('TestThing'));
<add> $this->assertSame('test_thing', Inflector::underscore('testThing'));
<add> $this->assertSame('test_thing_extra', Inflector::underscore('TestThingExtra'));
<add> $this->assertSame('test_thing_extra', Inflector::underscore('testThingExtra'));
<ide>
<ide> // Identical checks test the cache code path.
<del> $this->assertSame(Inflector::underscore('TestThing'), 'test_thing');
<del> $this->assertSame(Inflector::underscore('testThing'), 'test_thing');
<del> $this->assertSame(Inflector::underscore('TestThingExtra'), 'test_thing_extra');
<del> $this->assertSame(Inflector::underscore('testThingExtra'), 'test_thing_extra');
<add> $this->assertSame('test_thing', Inflector::underscore('TestThing'));
<add> $this->assertSame('test_thing', Inflector::underscore('testThing'));
<add> $this->assertSame('test_thing_extra', Inflector::underscore('TestThingExtra'));
<add> $this->assertSame('test_thing_extra', Inflector::underscore('testThingExtra'));
<ide>
<ide> // Test stupid values
<del> $this->assertSame(Inflector::underscore(''), '');
<del> $this->assertSame(Inflector::underscore(0), '0');
<del> $this->assertSame(Inflector::underscore(false), '');
<add> $this->assertSame('', Inflector::underscore(''));
<add> $this->assertSame('0', Inflector::underscore(0));
<add> $this->assertSame('', Inflector::underscore(false));
<ide> }
<ide>
<ide> /**
<ide> public function testDasherized()
<ide> */
<ide> public function testVariableNaming()
<ide> {
<del> $this->assertEquals(Inflector::variable('test_field'), 'testField');
<del> $this->assertEquals(Inflector::variable('test_fieLd'), 'testFieLd');
<del> $this->assertEquals(Inflector::variable('test field'), 'testField');
<del> $this->assertEquals(Inflector::variable('Test_field'), 'testField');
<add> $this->assertEquals('testField', Inflector::variable('test_field'));
<add> $this->assertEquals('testFieLd', Inflector::variable('test_fieLd'));
<add> $this->assertEquals('testField', Inflector::variable('test field'));
<add> $this->assertEquals('testField', Inflector::variable('Test_field'));
<ide> }
<ide>
<ide> /**
<ide> public function testVariableNaming()
<ide> */
<ide> public function testClassNaming()
<ide> {
<del> $this->assertEquals(Inflector::classify('artists_genres'), 'ArtistsGenre');
<del> $this->assertEquals(Inflector::classify('file_systems'), 'FileSystem');
<del> $this->assertEquals(Inflector::classify('news'), 'News');
<del> $this->assertEquals(Inflector::classify('bureaus'), 'Bureau');
<add> $this->assertEquals('ArtistsGenre', Inflector::classify('artists_genres'));
<add> $this->assertEquals('FileSystem', Inflector::classify('file_systems'));
<add> $this->assertEquals('News', Inflector::classify('news'));
<add> $this->assertEquals('Bureau', Inflector::classify('bureaus'));
<ide> }
<ide>
<ide> /**
<ide> public function testClassNaming()
<ide> */
<ide> public function testTableNaming()
<ide> {
<del> $this->assertEquals(Inflector::tableize('ArtistsGenre'), 'artists_genres');
<del> $this->assertEquals(Inflector::tableize('FileSystem'), 'file_systems');
<del> $this->assertEquals(Inflector::tableize('News'), 'news');
<del> $this->assertEquals(Inflector::tableize('Bureau'), 'bureaus');
<add> $this->assertEquals('artists_genres', Inflector::tableize('ArtistsGenre'));
<add> $this->assertEquals('file_systems', Inflector::tableize('FileSystem'));
<add> $this->assertEquals('news', Inflector::tableize('News'));
<add> $this->assertEquals('bureaus', Inflector::tableize('Bureau'));
<ide> }
<ide>
<ide> /**
<ide> public function testTableNaming()
<ide> */
<ide> public function testHumanization()
<ide> {
<del> $this->assertEquals(Inflector::humanize('posts'), 'Posts');
<del> $this->assertEquals(Inflector::humanize('posts_tags'), 'Posts Tags');
<del> $this->assertEquals(Inflector::humanize('file_systems'), 'File Systems');
<add> $this->assertEquals('Posts', Inflector::humanize('posts'));
<add> $this->assertEquals('Posts Tags', Inflector::humanize('posts_tags'));
<add> $this->assertEquals('File Systems', Inflector::humanize('file_systems'));
<ide> $this->assertSame('', Inflector::humanize(null));
<ide> $this->assertSame('', Inflector::humanize(false));
<ide> }
<ide> public function testCustomPluralRule()
<ide> Inflector::rules('plural', ['/^(custom)$/i' => '\1izables']);
<ide> Inflector::rules('uninflected', ['uninflectable']);
<ide>
<del> $this->assertEquals(Inflector::pluralize('custom'), 'customizables');
<del> $this->assertEquals(Inflector::pluralize('uninflectable'), 'uninflectable');
<add> $this->assertEquals('customizables', Inflector::pluralize('custom'));
<add> $this->assertEquals('uninflectable', Inflector::pluralize('uninflectable'));
<ide>
<ide> Inflector::rules('plural', ['/^(alert)$/i' => '\1ables']);
<ide> Inflector::rules('irregular', ['amaze' => 'amazable', 'phone' => 'phonezes']);
<ide> Inflector::rules('uninflected', ['noflect', 'abtuse']);
<del> $this->assertEquals(Inflector::pluralize('noflect'), 'noflect');
<del> $this->assertEquals(Inflector::pluralize('abtuse'), 'abtuse');
<del> $this->assertEquals(Inflector::pluralize('alert'), 'alertables');
<del> $this->assertEquals(Inflector::pluralize('amaze'), 'amazable');
<del> $this->assertEquals(Inflector::pluralize('phone'), 'phonezes');
<add> $this->assertEquals('noflect', Inflector::pluralize('noflect'));
<add> $this->assertEquals('abtuse', Inflector::pluralize('abtuse'));
<add> $this->assertEquals('alertables', Inflector::pluralize('alert'));
<add> $this->assertEquals('amazable', Inflector::pluralize('amaze'));
<add> $this->assertEquals('phonezes', Inflector::pluralize('phone'));
<ide> }
<ide>
<ide> /**
<ide> public function testCustomSingularRule()
<ide> Inflector::rules('uninflected', ['singulars']);
<ide> Inflector::rules('singular', ['/(eple)r$/i' => '\1', '/(jente)r$/i' => '\1']);
<ide>
<del> $this->assertEquals(Inflector::singularize('epler'), 'eple');
<del> $this->assertEquals(Inflector::singularize('jenter'), 'jente');
<add> $this->assertEquals('eple', Inflector::singularize('epler'));
<add> $this->assertEquals('jente', Inflector::singularize('jenter'));
<ide>
<ide> Inflector::rules('singular', ['/^(bil)er$/i' => '\1', '/^(inflec|contribu)tors$/i' => '\1ta']);
<ide> Inflector::rules('irregular', ['spinor' => 'spins']);
<ide>
<del> $this->assertEquals(Inflector::singularize('spins'), 'spinor');
<del> $this->assertEquals(Inflector::singularize('inflectors'), 'inflecta');
<del> $this->assertEquals(Inflector::singularize('contributors'), 'contributa');
<del> $this->assertEquals(Inflector::singularize('singulars'), 'singulars');
<add> $this->assertEquals('spinor', Inflector::singularize('spins'));
<add> $this->assertEquals('inflecta', Inflector::singularize('inflectors'));
<add> $this->assertEquals('contributa', Inflector::singularize('contributors'));
<add> $this->assertEquals('singulars', Inflector::singularize('singulars'));
<ide> }
<ide>
<ide> /**
<ide> public function testCustomSingularRule()
<ide> */
<ide> public function testCustomTransliterationRule()
<ide> {
<del> $this->assertEquals(Inflector::slug('Testing æ ø å'), 'Testing-ae-o-a');
<add> $this->assertEquals('Testing-ae-o-a', Inflector::slug('Testing æ ø å'));
<ide>
<ide> Inflector::rules('transliteration', ['å' => 'aa', 'ø' => 'oe']);
<del> $this->assertEquals(Inflector::slug('Testing æ ø å'), 'Testing-ae-oe-aa');
<add> $this->assertEquals('Testing-ae-oe-aa', Inflector::slug('Testing æ ø å'));
<ide>
<ide> Inflector::rules('transliteration', ['æ' => 'ae', 'å' => 'aa'], true);
<del> $this->assertEquals(Inflector::slug('Testing æ ø å'), 'Testing-ae-ø-aa');
<add> $this->assertEquals('Testing-ae-ø-aa', Inflector::slug('Testing æ ø å'));
<ide> }
<ide>
<ide> /**
<ide> public function testCustomTransliterationRule()
<ide> */
<ide> public function testRulesClearsCaches()
<ide> {
<del> $this->assertEquals(Inflector::singularize('Bananas'), 'Banana');
<del> $this->assertEquals(Inflector::tableize('Banana'), 'bananas');
<del> $this->assertEquals(Inflector::pluralize('Banana'), 'Bananas');
<add> $this->assertEquals('Banana', Inflector::singularize('Bananas'));
<add> $this->assertEquals('bananas', Inflector::tableize('Banana'));
<add> $this->assertEquals('Bananas', Inflector::pluralize('Banana'));
<ide>
<ide> Inflector::rules('singular', ['/(.*)nas$/i' => '\1zzz']);
<ide> $this->assertEquals('Banazzz', Inflector::singularize('Bananas'), 'Was inflected with old rules.');
<ide>
<ide> Inflector::rules('plural', ['/(.*)na$/i' => '\1zzz']);
<ide> Inflector::rules('irregular', ['corpus' => 'corpora']);
<del> $this->assertEquals(Inflector::pluralize('Banana'), 'Banazzz', 'Was inflected with old rules.');
<del> $this->assertEquals(Inflector::pluralize('corpus'), 'corpora', 'Was inflected with old irregular form.');
<add> $this->assertEquals('Banazzz', Inflector::pluralize('Banana'), 'Was inflected with old rules.');
<add> $this->assertEquals('corpora', Inflector::pluralize('corpus'), 'Was inflected with old irregular form.');
<ide> }
<ide>
<ide> /**
<ide> public function testCustomRuleWithReset()
<ide> Inflector::rules('uninflected', $uninflected, true);
<ide> Inflector::rules('irregular', $pluralIrregular, true);
<ide>
<del> $this->assertEquals(Inflector::pluralize('Alcool'), 'Alcoois');
<del> $this->assertEquals(Inflector::pluralize('Atlas'), 'Atlas');
<del> $this->assertEquals(Inflector::singularize('Alcoois'), 'Alcool');
<del> $this->assertEquals(Inflector::singularize('Atlas'), 'Atlas');
<add> $this->assertEquals('Alcoois', Inflector::pluralize('Alcool'));
<add> $this->assertEquals('Atlas', Inflector::pluralize('Atlas'));
<add> $this->assertEquals('Alcool', Inflector::singularize('Alcoois'));
<add> $this->assertEquals('Atlas', Inflector::singularize('Atlas'));
<ide> }
<ide> } | 1 |
Text | Text | capitalize headings to match style guidelines | d49a3abc019732c35c2f1fdc7687572ddd168dd8 | <ide><path>guides/source/action_cable_overview.md
<ide> connections as you have workers. The default worker pool size is set to 4, so
<ide> that means you have to make at least 4 database connections available.
<ide> You can change that in `config/database.yml` through the `pool` attribute.
<ide>
<del>### Client-side logging
<add>### Client-side Logging
<ide>
<ide> Client-side logging is disabled by default. You can enable this by setting the `ActionCable.logger.enabled` to true.
<ide>
<ide><path>guides/source/action_controller_overview.md
<ide> When this form is submitted, the value of `params[:client]` will be `{ "name" =>
<ide>
<ide> The `params` object acts like a Hash, but lets you use symbols and strings interchangeably as keys.
<ide>
<del>### JSON parameters
<add>### JSON Parameters
<ide>
<ide> If you're writing a web service application, you might find yourself more comfortable accepting parameters in JSON format. If the "Content-Type" header of your request is set to "application/json", Rails will automatically load your parameters into the `params` hash, which you can access as you would normally.
<ide>
<ide> If you use the cookie session store, this would apply to the `session` and
<ide>
<ide> [`cookies`]: https://api.rubyonrails.org/classes/ActionController/Cookies.html#method-i-cookies
<ide>
<del>Rendering XML and JSON data
<add>Rendering XML and JSON Data
<ide> ---------------------------
<ide>
<ide> ActionController makes it extremely easy to render `XML` or `JSON` data. If you've generated a controller using scaffolding, it would look something like this:
<ide> NOTE: Certain exceptions are only rescuable from the `ApplicationController` cla
<ide>
<ide> [`rescue_from`]: https://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html#method-i-rescue_from
<ide>
<del>Force HTTPS protocol
<add>Force HTTPS Protocol
<ide> --------------------
<ide>
<ide> If you'd like to ensure that communication to your controller is only possible
<ide><path>guides/source/action_mailbox_basics.md
<ide> your job queue being able to hold jobs for that long.)
<ide>
<ide> [`config.action_mailbox.incinerate_after`]: configuring.html#config-action-mailbox-incinerate-after
<ide>
<del>## Working with Action Mailbox in development
<add>## Working with Action Mailbox in Development
<ide>
<ide> It's helpful to be able to test incoming emails in development without actually
<ide> sending and receiving real emails. To accomplish this, there's a conductor
<ide> controller mounted at `/rails/conductor/action_mailbox/inbound_emails`,
<ide> which gives you an index of all the InboundEmails in the system, their
<ide> state of processing, and a form to create a new InboundEmail as well.
<ide>
<del>## Testing mailboxes
<add>## Testing Mailboxes
<ide>
<ide> Example:
<ide>
<ide><path>guides/source/action_mailer_basics.md
<ide> What is Action Mailer?
<ide> Action Mailer allows you to send emails from your application using mailer classes
<ide> and views.
<ide>
<del>### Mailers are similar to controllers
<add>### Mailers are Similar to Controllers
<ide>
<ide> They inherit from [`ActionMailer::Base`][] and live in `app/mailers`. Mailers also work
<ide> very similarly to controllers. Some examples of similarities are enumerated below.
<ide> access it with the [`message`][] method on the `ActionMailer::MessageDelivery` o
<ide> [`message`]: https://api.rubyonrails.org/classes/ActionMailer/MessageDelivery.html#method-i-message
<ide> [`with`]: https://api.rubyonrails.org/classes/ActionMailer/Parameterized/ClassMethods.html#method-i-with
<ide>
<del>### Auto encoding header values
<add>### Auto Encoding Header Values
<ide>
<ide> Action Mailer handles the auto encoding of multibyte characters inside of
<ide> headers and bodies.
<ide> Action Mailer 3.0 makes inline attachments, which involved a lot of hacking in p
<ide> <%= image_tag attachments['image.jpg'].url, alt: 'My Photo', class: 'photos' %>
<ide> ```
<ide>
<del>#### Sending Email To Multiple Recipients
<add>#### Sending Email to Multiple Recipients
<ide>
<ide> It is possible to send email to one or more recipients in one email (e.g.,
<ide> informing all admins of a new signup) by setting the list of emails to the `:to`
<ide> end
<ide> The same format can be used to set carbon copy (Cc:) and blind carbon copy
<ide> (Bcc:) recipients, by using the `:cc` and `:bcc` keys respectively.
<ide>
<del>#### Sending Email With Name
<add>#### Sending Email with Name
<ide>
<ide> Sometimes you wish to show the name of the person instead of just their email
<ide> address when they receive the email. You can use [`email_address_with_name`][] for
<ide> You can also consider using the [`append_view_path`][] method.
<ide> [`append_view_path`]: https://api.rubyonrails.org/classes/ActionView/ViewPaths/ClassMethods.html#method-i-append_view_path
<ide> [`prepend_view_path`]: https://api.rubyonrails.org/classes/ActionView/ViewPaths/ClassMethods.html#method-i-prepend_view_path
<ide>
<del>#### Caching mailer view
<add>#### Caching Mailer View
<ide>
<ide> You can perform fragment caching in mailer views like in application views using the [`cache`][] method.
<ide>
<ide> NOTE: non-`GET` links require [rails-ujs](https://github.com/rails/rails/blob/ma
<ide> [jQuery UJS](https://github.com/rails/jquery-ujs), and won't work in mailer templates.
<ide> They will result in normal `GET` requests.
<ide>
<del>### Adding images in Action Mailer Views
<add>### Adding Images in Action Mailer Views
<ide>
<ide> Unlike controllers, the mailer instance doesn't have any context about the
<ide> incoming request so you'll need to provide the `:asset_host` parameter yourself.
<ide><path>guides/source/action_text_overview.md
<ide> RichText model that's associated with any existing Active Record model in the ap
<ide> Any embedded images (or other attachments) are automatically stored using
<ide> Active Storage and associated with the included RichText model.
<ide>
<del>## Trix compared to other rich text editors
<add>## Trix Compared to Other Rich Text Editors
<ide>
<ide> Most WYSIWYG editors are wrappers around HTML’s `contenteditable` and `execCommand` APIs,
<ide> designed by Microsoft to support live editing of web pages in Internet Explorer 5.5,
<ide> After the installation is complete, a Rails app should have the following change
<ide>
<ide> 2. The `trix` stylesheet will be included together with Action Text styles in your `application.css` file.
<ide>
<del>## Creating Rich Text content
<add>## Creating Rich Text Content
<ide>
<ide> Add a rich text field to an existing model:
<ide>
<ide> end
<ide>
<ide> [`rich_text_area`]: https://api.rubyonrails.org/classes/ActionView/Helpers/FormHelper.html#method-i-rich_text_area
<ide>
<del>## Rendering Rich Text content
<add>## Rendering Rich Text Content
<ide>
<ide> By default, Action Text will render rich text content inside an element with the
<ide> `.trix-content` class:
<ide> By default, all `ActiveRecord::Base` descendants mix-in
<ide>
<ide> [global-id]: https://github.com/rails/globalid#usage
<ide>
<del>## Avoid N+1 queries
<add>## Avoid N+1 Queries
<ide>
<ide> If you wish to preload the dependent `ActionText::RichText` model, assuming your rich text field is named `content`, you can use the named scope:
<ide>
<ide> Message.all.with_rich_text_content # Preload the body without attachments.
<ide> Message.all.with_rich_text_content_and_embeds # Preload both body and attachments.
<ide> ```
<ide>
<del>## API / Backend development
<add>## API / Backend Development
<ide>
<ide> 1. A backend API (for example, using JSON) needs a separate endpoint for uploading files that creates an `ActiveStorage::Blob` and returns its `attachable_sgid`:
<ide>
<ide><path>guides/source/action_view_helpers.md
<ide> After reading this guide, you will know:
<ide>
<ide> --------------------------------------------------------------------------------
<ide>
<del>Overview of helpers provided by Action View
<add>Overview of Helpers Provided by Action View
<ide> -------------------------------------------
<ide>
<ide> WIP: Not all the helpers are listed here. For a full list see the [API documentation](https://api.rubyonrails.org/classes/ActionView/Helpers.html)
<ide><path>guides/source/action_view_overview.md
<ide> This will render a file named `_menu.html.erb` at that point within the view tha
<ide>
<ide> That code will pull in the partial from `app/views/shared/_menu.html.erb`.
<ide>
<del>#### Using Partials to simplify Views
<add>#### Using Partials to Simplify Views
<ide>
<ide> One way to use partials is to treat them as the equivalent of subroutines; a way to move details out of a view so that you can grasp what's going on more easily. For example, you might have a view that looks like this:
<ide>
<ide> One way to use partials is to treat them as the equivalent of subroutines; a way
<ide>
<ide> Here, the `_ad_banner.html.erb` and `_footer.html.erb` partials could contain content that is shared among many pages in your application. You don't need to see the details of these sections when you're concentrating on a particular page.
<ide>
<del>#### `render` without `partial` and `locals` options
<add>#### `render` without `partial` and `locals` Options
<ide>
<ide> In the above example, `render` takes 2 options: `partial` and `locals`. But if
<ide> these are the only options you want to pass, you can skip using these options.
<ide> You can also do:
<ide> <%= render "product", product: @product %>
<ide> ```
<ide>
<del>#### The `as` and `object` options
<add>#### The `as` and `object` Options
<ide>
<ide> By default `ActionView::Partials::PartialRenderer` has its object in a local variable with the same name as the template. So, given:
<ide>
<ide> views are located. By default, it only looks inside the `app/views` directory.
<ide> We can add other locations and give them certain precedence when resolving
<ide> paths using the `prepend_view_path` and `append_view_path` methods.
<ide>
<del>### Prepend view path
<add>### Prepend View Path
<ide>
<ide> This can be helpful for example when we want to put views inside a different
<ide> directory for subdomains.
<ide> prepend_view_path "app/views/#{request.subdomain}"
<ide>
<ide> Then Action View will look first in this directory when resolving views.
<ide>
<del>### Append view path
<add>### Append View Path
<ide>
<ide> Similarly, we can append paths:
<ide>
<ide><path>guides/source/active_job_basics.md
<ide> class ApplicationJob < ActiveJob::Base
<ide> end
<ide> ```
<ide>
<del>### Available callbacks
<add>### Available Callbacks
<ide>
<ide> * [`before_enqueue`][]
<ide> * [`around_enqueue`][]
<ide> UserMailer.welcome(@user).deliver_later # Email will be localized to Esperanto.
<ide> ```
<ide>
<ide>
<del>Supported types for arguments
<add>Supported Types for Arguments
<ide> ----------------------------
<ide>
<ide> ActiveJob supports the following types of arguments by default:
<ide> If an exception from a job is not rescued, then the job is referred to as "faile
<ide>
<ide> [`rescue_from`]: https://api.rubyonrails.org/classes/ActiveSupport/Rescuable/ClassMethods.html#method-i-rescue_from
<ide>
<del>### Retrying or Discarding failed jobs
<add>### Retrying or Discarding Failed Jobs
<ide>
<ide> A failed job will not be retried, unless configured otherwise.
<ide>
<ide><path>guides/source/active_model_basics.md
<ide> class Person
<ide> end
<ide> ```
<ide>
<del>#### Querying object directly for its list of all changed attributes.
<add>#### Querying an Object Directly for its List of All Changed Attributes
<ide>
<ide> ```irb
<ide> irb> person = Person.new
<ide> irb> person.changes
<ide> => {"first_name"=>[nil, "First Name"]}
<ide> ```
<ide>
<del>#### Attribute-based accessor methods
<add>#### Attribute-based Accessor Methods
<ide>
<ide> Track whether the particular attribute has been changed or not.
<ide>
<ide><path>guides/source/active_record_encryption.md
<ide> But, under the hood, the executed SQL looks like this:
<ide> INSERT INTO `articles` (`title`) VALUES ('{\"p\":\"n7J0/ol+a7DRMeaE\",\"h\":{\"iv\":\"DXZMDWUKfp3bg/Yu\",\"at\":\"X1/YjMHbHD4talgF9dt61A==\"}}')
<ide> ```
<ide>
<del>#### Important: About storage and column size
<add>#### Important: About Storage and Column Size
<ide>
<ide> Encryption requires extra space because of Base64 encoding and the metadata stored along with the encrypted payloads. When using the built-in envelope encryption key provider, you can estimate the worst-case overhead at around 255 bytes. This overhead is negligible at larger sizes. Not only because it gets diluted but because the library uses compression by default, which can offer up to 30% storage savings over the unencrypted version for larger payloads.
<ide>
<ide><path>guides/source/active_record_multiple_databases.md
<ide> The following features are not (yet) supported:
<ide>
<ide> * Load balancing replicas
<ide>
<del>## Setting up your application
<add>## Setting up Your Application
<ide>
<ide> While Rails tries to do most of the work for you there are still some steps you'll
<ide> need to do to get your application ready for multiple databases.
<ide> $ bin/rails generate scaffold Dog name:string --database animals --parent Animal
<ide> This will skip generating `AnimalsRecord` since you've indicated to Rails that you want to
<ide> use a different parent class.
<ide>
<del>## Activating automatic role switching
<add>## Activating Automatic Role Switching
<ide>
<ide> Finally, in order to use the read-only replica in your application, you'll need to activate
<ide> the middleware for automatic switching.
<ide> config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelec
<ide> config.active_record.database_resolver_context = MyCookieResolver
<ide> ```
<ide>
<del>## Using manual connection switching
<add>## Using Manual Connection Switching
<ide>
<ide> There are some cases where you may want your application to connect to a writer or a replica
<ide> and the automatic connection switching isn't adequate. For example, you may know that for a
<ide> ActiveRecord::Base.connected_to(role: :reading, prevent_writes: true) do
<ide> end
<ide> ```
<ide>
<del>## Horizontal sharding
<add>## Horizontal Sharding
<ide>
<ide> Horizontal sharding is when you split up your database to reduce the number of rows on each
<ide> database server, but maintain the same schema across "shards". This is commonly called "multi-tenant"
<ide> ActiveRecord::Base.connected_to(role: :reading, shard: :shard_one) do
<ide> end
<ide> ```
<ide>
<del>## Activating automatic shard switching
<add>## Activating Automatic Shard Switching
<ide>
<ide> Applications are able to automatically switch shards per request using the provided
<ide> middleware.
<ide> end
<ide> `ActiveRecord::Base.connected_to` maintains the ability to switch
<ide> connections globally.
<ide>
<del>### Handling associations with joins across databases
<add>### Handling Associations with Joins across Databases
<ide>
<ide> As of Rails 7.0+, Active Record has an option for handling associations that would perform
<ide> a join across multiple databases. If you have a has many through or a has one through association
<ide><path>guides/source/active_record_postgresql.md
<ide> irb> Article.count
<ide> NOTE: This application only cares about non-archived `Articles`. A view also
<ide> allows for conditions so we can exclude the archived `Articles` directly.
<ide>
<del>Structure dumps
<add>Structure Dumps
<ide> --------------
<ide>
<ide> If your `config.active_record.schema_format` is `:sql`, Rails will call `pg_dump` to generate a
<ide><path>guides/source/active_record_querying.md
<ide> FROM orders
<ide> GROUP BY created_at
<ide> ```
<ide>
<del>### Total of grouped items
<add>### Total of Grouped Items
<ide>
<ide> To get the total of grouped items on a single query, call [`count`][] after the `group`.
<ide>
<ide> end
<ide>
<ide> [`scope`]: https://api.rubyonrails.org/classes/ActiveRecord/Scoping/Named/ClassMethods.html#method-i-scope
<ide>
<del>### Passing in arguments
<add>### Passing in Arguments
<ide>
<ide> Your scope can take arguments:
<ide>
<ide> These methods will still be accessible on the association objects:
<ide> irb> author.books.costs_more_than(100.10)
<ide> ```
<ide>
<del>### Using conditionals
<add>### Using Conditionals
<ide>
<ide> Your scope can utilize conditionals:
<ide>
<ide> end
<ide>
<ide> However, there is one important caveat: A scope will always return an `ActiveRecord::Relation` object, even if the conditional evaluates to `false`, whereas a class method, will return `nil`. This can cause `NoMethodError` when chaining class methods with conditionals, if any of the conditionals return `false`.
<ide>
<del>### Applying a default scope
<add>### Applying a Default Scope
<ide>
<ide> If we wish for a scope to be applied across all queries to the model we can use the
<ide> [`default_scope`][] method within the model itself.
<ide> irb> Book.new
<ide>
<ide> [`default_scope`]: https://api.rubyonrails.org/classes/ActiveRecord/Scoping/Default/ClassMethods.html#method-i-default_scope
<ide>
<del>### Merging of scopes
<add>### Merging of Scopes
<ide>
<ide> Just like `where` clauses, scopes are merged using `AND` conditions.
<ide>
<ide> There are some examples below. This guide won't cover all the possibilities, jus
<ide> When an Active Record method is called, the query is not immediately generated and sent to the database.
<ide> The query is sent only when the data is actually needed. So each example below generates a single query.
<ide>
<del>### Retrieving filtered data from multiple tables
<add>### Retrieving Filtered Data from Multiple Tables
<ide>
<ide> ```ruby
<ide> Customer
<ide> INNER JOIN reviews
<ide> WHERE (reviews.created_at > '2019-01-08')
<ide> ```
<ide>
<del>### Retrieving specific data from multiple tables
<add>### Retrieving Specific Data from Multiple Tables
<ide>
<ide> ```ruby
<ide> Book
<ide><path>guides/source/active_record_validations.md
<ide> conditions in a shorter way.
<ide> validates :password, confirmation: true, unless: -> { password.blank? }
<ide> ```
<ide>
<del>### Grouping Conditional validations
<add>### Grouping Conditional Validations
<ide>
<ide> Sometimes it is useful to have multiple validations use one condition. It can
<ide> be easily achieved using [`with_options`][].
<ide> irb> person.errors[:name]
<ide> => ["can't be blank", "is too short (minimum is 3 characters)"]
<ide> ```
<ide>
<del>### `errors.where` and error object
<add>### `errors.where` and Error Object
<ide>
<ide> Sometimes we may need more information about each error beside its message. Each error is encapsulated as an `ActiveModel::Error` object, and [`where`][] method is the most common way of access.
<ide>
<ide><path>guides/source/active_storage_overview.md
<ide> generated URLs are hard to guess, but permanent by design. If your files
<ide> require a higher level of protection consider implementing
<ide> [Authenticated Controllers](#authenticated-controllers).
<ide>
<del>### Redirect mode
<add>### Redirect Mode
<ide>
<ide> To generate a permanent URL for a blob, you can pass the blob to the
<ide> [`url_for`][ActionView::RoutingUrlFor#url_for] view helper. This generates a
<ide> Rails.application.routes.url_helpers.rails_blob_path(user.avatar, only_path: tru
<ide> [ActionView::RoutingUrlFor#url_for]: https://api.rubyonrails.org/classes/ActionView/RoutingUrlFor.html#method-i-url_for
<ide> [ActiveStorage::Blob#signed_id]: https://api.rubyonrails.org/classes/ActiveStorage/Blob.html#method-i-signed_id
<ide>
<del>### Proxy mode
<add>### Proxy Mode
<ide>
<ide> Optionally, files can be proxied instead. This means that your application servers will download file data from the storage service in response to requests. This can be useful for serving files from a CDN.
<ide>
<ide> Or if you want to explicitly proxy specific attachments there are URL helpers yo
<ide> <%= image_tag rails_storage_proxy_path(@user.avatar) %>
<ide> ```
<ide>
<del>#### Putting a CDN in front of Active Storage
<add>#### Putting a CDN in Front of Active Storage
<ide>
<ide> Additionally, in order to use a CDN for Active Storage attachments, you will need to generate URLs with proxy mode so that they are served by your app and the CDN will cache the attachment without any extra configuration. This works out of the box because the default Active Storage proxy controller sets an HTTP header indicating to the CDN to cache the response.
<ide>
<ide> directly from the client to the cloud.
<ide>
<ide> 4. That's it! Uploads begin upon form submission.
<ide>
<del>### Cross-Origin Resource Sharing (CORS) configuration
<add>### Cross-Origin Resource Sharing (CORS) Configuration
<ide>
<ide> To make direct uploads to a third-party service work, you’ll need to configure the service to allow cross-origin requests from your app. Consult the CORS documentation for your service:
<ide>
<ide> Take care to allow:
<ide>
<ide> No CORS configuration is required for the Disk service since it shares your app’s origin.
<ide>
<del>#### Example: S3 CORS configuration
<add>#### Example: S3 CORS Configuration
<ide>
<ide> ```json
<ide> [
<ide> No CORS configuration is required for the Disk service since it shares your app
<ide> ]
<ide> ```
<ide>
<del>#### Example: Google Cloud Storage CORS configuration
<add>#### Example: Google Cloud Storage CORS Configuration
<ide>
<ide> ```json
<ide> [
<ide> No CORS configuration is required for the Disk service since it shares your app
<ide> ]
<ide> ```
<ide>
<del>#### Example: Azure Storage CORS configuration
<add>#### Example: Azure Storage CORS Configuration
<ide>
<ide> ```xml
<ide> <Cors>
<ide> No CORS configuration is required for the Disk service since it shares your app
<ide> </Cors>
<ide> ```
<ide>
<del>### Direct upload JavaScript events
<add>### Direct Upload JavaScript Events
<ide>
<ide> | Event name | Event target | Event data (`event.detail`) | Description |
<ide> | --- | --- | --- | --- |
<ide> end
<ide>
<ide> [`fixture_file_upload`]: https://api.rubyonrails.org/classes/ActionDispatch/TestProcess/FixtureFile.html
<ide>
<del>### Discarding files created during tests
<add>### Discarding Files Created During Tests
<ide>
<del>#### System tests
<add>#### System Tests
<ide>
<ide> System tests clean up test data by rolling back a transaction. Because `destroy`
<ide> is never called on an object, the attached files are never cleaned up. If you
<ide> config.active_job.queue_adapter = :inline
<ide>
<ide> [parallel tests]: testing.html#parallel-testing
<ide>
<del>#### Integration tests
<add>#### Integration Tests
<ide>
<ide> Similarly to System Tests, files uploaded during Integration Tests will not be
<ide> automatically cleaned up. If you want to clear the files, you can do it in an
<ide> end
<ide>
<ide> [parallel tests]: testing.html#parallel-testing
<ide>
<del>### Adding attachments to fixtures
<add>### Adding Attachments to Fixtures
<ide>
<ide> You can add attachments to your existing [fixtures][]. First, you'll want to create a separate storage service:
<ide>
<ide> class UserTest < ActiveSupport::TestCase
<ide> end
<ide> ```
<ide>
<del>#### Cleaning up fixtures
<add>#### Cleaning up Fixtures
<ide>
<ide> While files uploaded in tests are cleaned up [at the end of each test](#discarding-files-created-during-tests),
<ide> you only need to clean up fixture files once: when all your tests complete.
<ide><path>guides/source/active_support_core_extensions.md
<ide> NOTE: Defined in `active_support/core_ext/object/with_options.rb`.
<ide>
<ide> [Object#with_options]: https://api.rubyonrails.org/classes/Object.html#method-i-with_options
<ide>
<del>### JSON support
<add>### JSON Support
<ide>
<ide> Active Support provides a better implementation of `to_json` than the `json` gem ordinarily provides for Ruby objects. This is because some classes, like `Hash` and `Process::Status` need special handling in order to provide a proper JSON representation.
<ide>
<ide> NOTE: Defined in `active_support/core_ext/hash/deep_merge.rb`.
<ide> [Hash#deep_merge!]: https://api.rubyonrails.org/classes/Hash.html#method-i-deep_merge-21
<ide> [Hash#deep_merge]: https://api.rubyonrails.org/classes/Hash.html#method-i-deep_merge
<ide>
<del>### Deep duplicating
<add>### Deep Duplicating
<ide>
<ide> The method [`Hash#deep_dup`][Hash#deep_dup] duplicates itself and all keys and values
<ide> inside recursively with Active Support method `Object#deep_dup`. It works like `Enumerator#each_with_object` with sending `deep_dup` method to each pair inside.
<ide> NOTE: Defined in `active_support/core_ext/date/calculations.rb`.
<ide> [DateAndTime::Calculations#on_weekend?]: https://api.rubyonrails.org/classes/DateAndTime/Calculations.html#method-i-on_weekend-3F
<ide> [DateAndTime::Calculations#past?]: https://api.rubyonrails.org/classes/DateAndTime/Calculations.html#method-i-past-3F
<ide>
<del>#### Named dates
<add>#### Named Dates
<ide>
<ide> ##### `beginning_of_week`, `end_of_week`
<ide>
<ide><path>guides/source/active_support_instrumentation.md
<ide> After reading this guide, you will know:
<ide>
<ide> --------------------------------------------------------------------------------
<ide>
<del>Introduction to instrumentation
<add>Introduction to Instrumentation
<ide> -------------------------------
<ide>
<ide> The instrumentation API provided by Active Support allows developers to provide hooks which other developers may hook into. There are several of these within the [Rails framework](#rails-framework-hooks). With this API, developers can choose to be notified when certain events occur inside their application or another piece of Ruby code.
<ide> For example, there is a hook provided within Active Record that is called every
<ide>
<ide> You are even able to [create your own events](#creating-custom-events) inside your application which you can later subscribe to.
<ide>
<del>Subscribing to an event
<add>Subscribing to an Event
<ide> -----------------------
<ide>
<ide> Subscribing to an event is easy. Use `ActiveSupport::Notifications.subscribe` with a block to
<ide> information about it.
<ide> | `:exception` | An array of two elements. Exception class name and the message |
<ide> | `:exception_object` | The exception object |
<ide>
<del>Creating custom events
<add>Creating Custom Events
<ide> ----------------------
<ide>
<ide> Adding your own events is easy as well. `ActiveSupport::Notifications` will take care of
<ide><path>guides/source/api_app.md
<ide> If you're building a Rails application that will be an API server first and
<ide> foremost, you can start with a more limited subset of Rails and add in features
<ide> as needed.
<ide>
<del>### Creating a new application
<add>### Creating a New Application
<ide>
<ide> You can generate a new api Rails app:
<ide>
<ide> This will do three main things for you:
<ide> - Configure the generators to skip generating views, helpers, and assets when
<ide> you generate a new resource.
<ide>
<del>### Changing an existing application
<add>### Changing an Existing Application
<ide>
<ide> If you want to take an existing application and make it an API one, read the
<ide> following steps.
<ide><path>guides/source/asset_pipeline.md
<ide> NOTE: You will need an [ExecJS](https://github.com/rails/execjs#readme)
<ide> supported runtime in order to use `terser`. If you are using macOS or
<ide> Windows you have a JavaScript runtime installed in your operating system.
<ide>
<del>### GZipping your assets
<add>### GZipping Your Assets
<ide>
<ide> By default, gzipped version of compiled assets will be generated, along with
<ide> the non-gzipped version of assets. Gzipped assets help reduce the transmission
<ide><path>guides/source/autoloading_and_reloading_constants.md
<ide> However, you cannot autoload from the autoload paths, which are managed by the `
<ide>
<ide> Why? Initializers only run once, when the application boots. If you reboot the server, they run again in a new process, but reloading does not reboot the server, and initializers don't run again. Let's see the two main use cases.
<ide>
<del>### Use case 1: During boot, load reloadable code
<add>### Use Case 1: During Boot, Load Reloadable Code
<ide>
<del>#### Autoload on boot and on each reload
<add>#### Autoload on Boot and on Each Reload
<ide>
<ide> Let's imagine `ApiGateway` is a reloadable class from `app/services` managed by the `main` autoloader and you need to configure its endpoint while the application boots:
<ide>
<ide> end
<ide>
<ide> NOTE: For historical reasons, this callback may run twice. The code it executes must be idempotent.
<ide>
<del>#### Autoload on boot only
<add>#### Autoload on Boot Only
<ide>
<ide> Reloadable classes and modules can be autoloaded in `after_initialize` blocks too. These run on boot, but do not run again on reload. In some exceptional cases this may be what you want.
<ide>
<ide> Rails.application.config.after_initialize do
<ide> end
<ide> ```
<ide>
<del>### Use case 2: During boot, load code that remains cached
<add>### Use Case 2: During Boot, Load Code that Remains Cached
<ide>
<ide> Some configurations take a class or module object, and they store it in a place that is not reloaded.
<ide>
<ide><path>guides/source/caching_with_rails.md
<ide> If you want to cache a fragment under certain conditions, you can use
<ide> <% end %>
<ide> ```
<ide>
<del>#### Collection caching
<add>#### Collection Caching
<ide>
<ide> The `render` helper can also cache individual templates rendered for a collection.
<ide> It can even one up the previous example with `each` by reading all cache
<ide> render(partial: 'hotels/hotel.html.erb', collection: @hotels, cached: true)
<ide>
<ide> Will load a file named `hotels/hotel.html.erb` in any file MIME type, for example you could include this partial in a JavaScript file.
<ide>
<del>### Managing dependencies
<add>### Managing Dependencies
<ide>
<ide> In order to correctly invalidate the cache, you need to properly define the
<ide> caching dependencies. Rails is clever enough to handle common cases so you don't
<ide> have to specify anything. However, sometimes, when you're dealing with custom
<ide> helpers for instance, you need to explicitly define them.
<ide>
<del>#### Implicit dependencies
<add>#### Implicit Dependencies
<ide>
<ide> Most template dependencies can be derived from calls to `render` in the template
<ide> itself. Here are some examples of render calls that `ActionView::Digestor` knows
<ide> to:
<ide> render partial: "documents/document", collection: @project.documents.where(published: true)
<ide> ```
<ide>
<del>#### Explicit dependencies
<add>#### Explicit Dependencies
<ide>
<ide> Sometimes you'll have template dependencies that can't be derived at all. This
<ide> is typically the case when rendering happens in helpers. Here's an example:
<ide> comment format anywhere in the template, like:
<ide> <% end %>
<ide> ```
<ide>
<del>#### External dependencies
<add>#### External Dependencies
<ide>
<ide> If you use a helper method, for example, inside a cached block and you then update
<ide> that helper, you'll have to bump the cache as well. It doesn't really matter how
<ide> end
<ide>
<ide> NOTE: Notice that in this example we used the `cache_key_with_version` method, so the resulting cache key will be something like `products/233-20140225082222765838000/competing_price`. `cache_key_with_version` generates a string based on the model's class name, `id`, and `updated_at` attributes. This is a common convention and has the benefit of invalidating the cache whenever the product is updated. In general, when you use low-level caching, you need to generate a cache key.
<ide>
<del>#### Avoid caching instances of Active Record objects
<add>#### Avoid Caching Instances of Active Record Objects
<ide>
<ide> Consider this example, which stores a list of Active Record objects representing superusers in the cache:
<ide>
<ide> values with `Rails.cache` and then try to pull them out with the `dalli` gem.
<ide> However, you also don't need to worry about exceeding the memcached size limit or
<ide> violating syntax rules.
<ide>
<del>Conditional GET support
<add>Conditional GET Support
<ide> -----------------------
<ide>
<ide> Conditional GETs are a feature of the HTTP specification that provide a way for web servers to tell browsers that the response to a GET request hasn't changed since the last request and can be safely pulled from the browser cache.
<ide><path>guides/source/classic_to_zeitwerk_howto.md
<ide> If for whatever reason you find a situation you don't know how to resolve, don't
<ide> How to Activate `zeitwerk` Mode
<ide> -------------------------------
<ide>
<del>### Applications running Rails 5.x or Less
<add>### Applications Running Rails 5.x or Less
<ide>
<ide> In applications running a Rails version previous to 6.0, `zeitwerk` mode is not available. You need to be at least in Rails 6.0.
<ide>
<del>### Applications running Rails 6.x
<add>### Applications Running Rails 6.x
<ide>
<ide> In applications running Rails 6.x there are two scenarios.
<ide>
<ide> If your application uses `Concerns` as namespace, you have two options:
<ide> delete("#{Rails.root}/app/models/concerns")
<ide> ```
<ide>
<del>### Having `app` in the autoload paths
<add>### Having `app` in the Autoload Paths
<ide>
<ide> Some projects want something like `app/api/base.rb` to define `API::Base`, and add `app` to the autoload paths to accomplish that.
<ide>
<ide> won't work, child objects like `Hotel::Pricing` won't be found.
<ide>
<ide> This restriction only applies to explicit namespaces. Classes and modules not defining a namespace can be defined using those idioms.
<ide>
<del>### One file, one constant (at the same top-level)
<add>### One File, One Constant (at the Same Top-level)
<ide>
<ide> In `classic` mode you could technically define several constants at the same top-level and have them all reloaded. For example, given
<ide>
<ide> Starting with Rails 7, newly generated applications are configured that way by d
<ide>
<ide> If your project does not have continuous integration, you can still eager load in the test suite by calling `Rails.application.eager_load!`:
<ide>
<del>#### minitest
<add>#### Minitest
<ide>
<ide> ```ruby
<ide> require "test_helper"
<ide> RSpec.describe "Zeitwerk compliance" do
<ide> end
<ide> ```
<ide>
<del>Delete any `require` calls
<add>Delete any `require` Calls
<ide> --------------------------
<ide>
<ide> In my experience, projects generally do not do this. But I've seen a couple, and have heard of a few others.
<ide> Please delete any `require` calls of that type.
<ide> New Features You Can Leverage
<ide> -----------------------------
<ide>
<del>### Delete `require_dependency` calls
<add>### Delete `require_dependency` Calls
<ide>
<ide> All known use cases of `require_dependency` have been eliminated with Zeitwerk. You should grep the project and delete them.
<ide>
<ide> If your application uses Single Table Inheritance, please see the [Single Table Inheritance section](autoloading_and_reloading_constants.html#single-table-inheritance) of the Autoloading and Reloading Constants (Zeitwerk Mode) guide.
<ide>
<del>### Qualified Names in Class and Module Definitions Are Now Possible
<add>### Qualified Names in Class and Module Definitions are Now Possible
<ide>
<ide> You can now robustly use constant paths in class and module definitions:
<ide>
<ide><path>guides/source/command_line.md
<ide> Any modifications you make will be rolled back on exit
<ide> irb(main):001:0>
<ide> ```
<ide>
<del>#### The app and helper objects
<add>#### The `app` and `helper` Objects
<ide>
<ide> Inside the `bin/rails console` you have access to the `app` and `helper` instances.
<ide>
<ide><path>guides/source/configuring.md
<ide> NOTE: There is no guarantee that your initializers will run after all the gem
<ide> initializers, so any initialization code that depends on a given gem having been
<ide> initialized should go into a `config.after_initialize` block.
<ide>
<del>Initialization events
<add>Initialization Events
<ide> ---------------------
<ide>
<ide> Rails has 5 initialization events which can be hooked into (listed in the order that they are run):
<ide> Below is a comprehensive list of all the initializers found in Rails in the orde
<ide>
<ide> * `disable_dependency_loading`: Disables the automatic dependency loading if the `config.eager_load` is set to `true`.
<ide>
<del>Database pooling
<add>Database Pooling
<ide> ----------------
<ide>
<ide> Active Record database connections are managed by `ActiveRecord::ConnectionAdapters::ConnectionPool` which ensures that a connection pool synchronizes the amount of thread access to a limited number of database connections. This limit defaults to 5 and can be configured in `database.yml`.
<ide> connection pool by incrementing the `pool` option in `database.yml`
<ide> NOTE. If you are running in a multi-threaded environment, there could be a chance that several threads may be accessing multiple connections simultaneously. So depending on your current request load, you could very well have multiple threads contending for a limited number of connections.
<ide>
<ide>
<del>Custom configuration
<add>Custom Configuration
<ide> --------------------
<ide>
<ide> You can configure your own code through the Rails configuration object with
<ide><path>guides/source/contributing_to_ruby_on_rails.md
<ide> Ensure the changesets you introduced are included. Fill in some details about
<ide> your potential patch, using the pull request template provided. When finished, click "Create
<ide> pull request".
<ide>
<del>### Get some Feedback
<add>### Get Some Feedback
<ide>
<ide> Most pull requests will go through a few iterations before they get merged.
<ide> Different contributors will sometimes have different opinions, and often
<ide><path>guides/source/debugging_rails_applications.md
<ide> noticeable with large amounts of logging, but it's a good practice to employ.
<ide> INFO: This section was written by [Jon Cairns at a StackOverflow answer](https://stackoverflow.com/questions/16546730/logging-in-rails-is-there-any-performance-hit/16546935#16546935)
<ide> and it is licensed under [cc by-sa 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
<ide>
<del>Debugging with the `debug` gem
<add>Debugging with the `debug` Gem
<ide> ------------------------------
<ide>
<ide> When your code is behaving in unexpected ways, you can try printing to logs or
<ide> Besides direct evaluation, debugger also helps you collect rich amount of inform
<ide> - `backtrace` (or `bt`) - Backtrace (with additional information).
<ide> - `outline` (or `o`, `ls`) - Available methods, constants, local variables, and instance variables in the current scope.
<ide>
<del>#### The info command
<add>#### The `info` Command
<ide>
<ide> It'll give you an overview of the values of local and instance variables that are visible from the current frame.
<ide>
<ide> It'll give you an overview of the values of local and instance variables that ar
<ide> @rendered_format = nil
<ide> ```
<ide>
<del>#### The backtrace command
<add>#### The `backtrace` Command
<ide>
<ide> When used without any options, it lists all the frames on the stack:
<ide>
<ide> Don't worry, the `backtrace` command provides 2 options to help you filter frame
<ide>
<ide> It's also possible to use these options together: `backtrace [num] /pattern/`.
<ide>
<del>#### The outline command
<add>#### The `outline` Command
<ide>
<ide> This command is similar to `pry` and `irb`'s `ls` command. It will show you what's accessible from the current scope, including:
<ide>
<ide> And to remove them, you can use:
<ide> - `delete` - delete all breakpoints
<ide> - `delete <num>` - delete the breakpoint with id `num`
<ide>
<del>#### The break command
<add>#### The `break` Command
<ide>
<ide> **Set a breakpoint on a specified line number - e.g. `b 28`**
<ide>
<ide> Stop by #0 BP - Line /Users/st0012/projects/rails-guide-example/app/controller
<ide> Stop by #0 BP - Method @post.save at /Users/st0012/.rbenv/versions/3.0.1/lib/ruby/gems/3.0.0/gems/activerecord-7.0.0.alpha2/lib/active_record/suppressor.rb:43
<ide> ```
<ide>
<del>#### The catch command
<add>#### The `catch` Command
<ide>
<ide> **Stop when an exception is raised - e.g. `catch ActiveRecord::RecordInvalid`**
<ide>
<ide> Stop by #0 BP - Method @post.save at /Users/st0012/.rbenv/versions/3.0.1/lib/r
<ide> Stop by #1 BP - Catch "ActiveRecord::RecordInvalid"
<ide> ```
<ide>
<del>#### The watch command
<add>#### The `watch` Command
<ide>
<ide> **Stop when the instance variable is changed - e.g. `watch @_response_body`**
<ide>
<ide> Stop by #0 BP - Watch #<PostsController:0x00007fce69ca5320> @_response_body =
<ide> (rdbg)
<ide> ```
<ide>
<del>#### Breakpoint options
<add>#### Breakpoint Options
<ide>
<ide> In addition to different types of breakpoints, you can also specify options to achieve more advanced debugging workflow. Currently, the debugger supports 4 options:
<ide>
<ide> Please also note that the first 3 options: `do:`, `pre:` and `if:` are also avai
<ide> @rendered_format = nil
<ide> ```
<ide>
<del>#### Program your debugging workflow
<add>#### Program Your Debugging Workflow
<ide>
<ide> With those options, you can script your debugging workflow in one line like:
<ide>
<ide> This technique can save you from repeated manual input and make the debugging ex
<ide>
<ide> You can find more commands and configuration options from its [documentation](https://github.com/ruby/debug).
<ide>
<del>Debugging with the `web-console` gem
<add>Debugging with the `web-console` Gem
<ide> ------------------------------------
<ide>
<ide> Web Console is a bit like `debug`, but it runs in the browser. You can request a console in the context of a view or a controller on any page. The console would be rendered next to your HTML content.
<ide><path>guides/source/development_dependencies_install.md
<ide> NOTE: Using the Rake task to create the test databases ensures they have the cor
<ide>
<ide> If you're using another database, check the file `activerecord/test/config.yml` or `activerecord/test/config.example.yml` for default connection information. You can edit `activerecord/test/config.yml` to provide different credentials on your machine, but you should not push any of those changes back to Rails.
<ide>
<del>### Install JavaScript dependencies
<add>### Install JavaScript Dependencies
<ide>
<ide> If you installed Yarn, you will need to install the JavaScript dependencies:
<ide>
<ide> ```bash
<ide> $ yarn install
<ide> ```
<ide>
<del>### Installing gem dependencies
<add>### Installing Gem Dependencies
<ide>
<ide> Gems are installed with [Bundler](https://bundler.io/) which ships by default with Ruby.
<ide>
<ide><path>guides/source/engines.md
<ide> module MyApp
<ide> end
<ide> ```
<ide>
<del>#### Reopening existing classes using `class_eval`
<add>#### Reopening Existing Classes Using `class_eval`
<ide>
<ide> For example, in order to override the engine model
<ide>
<ide> end
<ide>
<ide> It is very important that the override _reopens_ the class or module. Using the `class` or `module` keywords would define them if they were not already in memory, which would be incorrect because the definition lives in the engine. Using `class_eval` as shown above ensures you are reopening.
<ide>
<del>#### Reopening existing classes using ActiveSupport::Concern
<add>#### Reopening Existing Classes Using ActiveSupport::Concern
<ide>
<ide> Using `Class#class_eval` is great for simple adjustments, but for more complex
<ide> class modifications, you might want to consider using [`ActiveSupport::Concern`]
<ide> Rails code can often be referenced on load of an application. Rails is responsib
<ide>
<ide> Load and configuration hooks are the API that allow you to hook into this initialization process without violating the load contract with Rails. This will also mitigate boot performance degradation and avoid conflicts.
<ide>
<del>### Avoid loading Rails Frameworks
<add>### Avoid Loading Rails Frameworks
<ide>
<ide> Since Ruby is a dynamic language, some code will cause different Rails frameworks to load. Take this snippet for instance:
<ide>
<ide> This new snippet will only include `MyActiveRecordHelper` when `ActiveRecord::Ba
<ide>
<ide> In the Rails framework these hooks are called when a specific library is loaded. For example, when `ActionController::Base` is loaded, the `:action_controller_base` hook is called. This means that all `ActiveSupport.on_load` calls with `:action_controller_base` hooks will be called in the context of `ActionController::Base` (that means `self` will be an `ActionController::Base`).
<ide>
<del>### Modifying Code to use Load Hooks
<add>### Modifying Code to Use Load Hooks
<ide>
<ide> Modifying code is generally straightforward. If you have a line of code that refers to a Rails framework such as `ActiveRecord::Base` you can wrap that code in a load hook.
<ide>
<ide><path>guides/source/error_reporting.md
<ide> Note: The Rails error-reporter will always call registered subscribers, regardle
<ide>
<ide> There are three ways you can use the error reporter:
<ide>
<del>#### Reporting and swallowing errors
<add>#### Reporting and Swallowing Errors
<ide> [`Rails.error.handle`](https://api.rubyonrails.org/classes/ActiveSupport/ErrorReporter.html#method-i-handle) will report any error raised within the block. It will then **swallow** the error, and the rest of your code outside the block will continue as normal.
<ide>
<ide> ```ruby
<ide> user = Rails.error.handle(fallback: -> { User.anonymous }) do
<ide> end
<ide> ```
<ide>
<del>#### Reporting and re-raising errors
<add>#### Reporting and Re-raising Errors
<ide> [`Rails.error.record`](https://api.rubyonrails.org/classes/ActiveSupport/ErrorReporter.html#method-i-record) will report errors to all registered subscribers and then re-raise the error, meaning that the rest of your code won't execute.
<ide>
<ide> ```ruby
<ide> end
<ide>
<ide> If no error is raised in the block, `Rails.error.record` will return the result of the block.
<ide>
<del>#### Manually reporting errors
<add>#### Manually Reporting Errors
<ide> You can also manually report errors by calling [`Rails.error.report`](https://api.rubyonrails.org/classes/ActiveSupport/ErrorReporter.html#method-i-report):
<ide>
<ide> ```ruby
<ide> end
<ide>
<ide> Any options you pass will be passed on the error subscribers.
<ide>
<del>### Error-reporting options
<add>### Error-reporting Options
<ide>
<ide> All 3 reporting APIs (`#handle`, `#record`, and `#report`) support the following options, which are then passed along to all registered subscribers:
<ide>
<ide><path>guides/source/form_helpers.md
<ide> form_with model: [:admin, :management, @article]
<ide>
<ide> For more information on Rails' routing system and the associated conventions, please see [Rails Routing from the Outside In](routing.html) guide.
<ide>
<del>### How do forms with PATCH, PUT, or DELETE methods work?
<add>### How do Forms with PATCH, PUT, or DELETE Methods Work?
<ide>
<ide> The Rails framework encourages RESTful design of your applications, which means you'll be making a lot of "PATCH", "PUT", and "DELETE" requests (besides "GET" and "POST"). However, most browsers _don't support_ methods other than "GET" and "POST" when it comes to submitting forms.
<ide>
<ide> As a convenience you can instead pass the symbol `:all_blank` which will create
<ide>
<ide> Rather than rendering multiple sets of fields ahead of time you may wish to add them only when a user clicks on an "Add new address" button. Rails does not provide any built-in support for this. When generating new sets of fields you must ensure the key of the associated array is unique - the current JavaScript date (milliseconds since the [epoch](https://en.wikipedia.org/wiki/Unix_time)) is a common choice.
<ide>
<del>Using Tag Helpers Without a Form Builder
<add>Using Tag Helpers without a Form Builder
<ide> ----------------------------------------
<ide>
<ide> In case you need to render form fields outside of the context of a form builder, Rails provides tag helpers for common form elements. For example, [`check_box_tag`](https://api.rubyonrails.org/classes/ActionView/Helpers/FormTagHelper.html#method-i-check_box_tag):
<ide><path>guides/source/getting_started.md
<ide> Hello, Rails!
<ide> To begin with, let's get some text up on screen quickly. To do this, you need to
<ide> get your Rails application server running.
<ide>
<del>### Starting up the Web Server
<add>### Starting Up the Web Server
<ide>
<ide> You actually have a functional Rails application already. To see it, you need to
<ide> start a web server on your development machine. You can do this by running the
<ide><path>guides/source/i18n.md
<ide> The translation denoted as `:one` is regarded as singular, and the `:other` is u
<ide>
<ide> If the lookup for the key does not return a Hash suitable for pluralization, an `I18n::InvalidPluralizationData` exception is raised.
<ide>
<del>#### Locale-specific rules
<add>#### Locale-specific Rules
<ide>
<ide> The I18n gem provides a Pluralization backend that can be used to enable locale-specific rules. Include it
<ide> to the Simple backend, then add the localized pluralization algorithms to translation store, as `i18n.plural.rule`.
<ide> I18n.t :short, scope: [:date, :formats]
<ide>
<ide> Generally we recommend using YAML as a format for storing translations. There are cases, though, where you want to store Ruby lambdas as part of your locale data, e.g. for special date formats.
<ide>
<del>Customize your I18n Setup
<add>Customize Your I18n Setup
<ide> -------------------------
<ide>
<ide> ### Using Different Backends
<ide><path>guides/source/layouts_and_rendering.md
<ide> render js: "alert('Hello Rails');"
<ide>
<ide> This will send the supplied string to the browser with a MIME type of `text/javascript`.
<ide>
<del>#### Rendering raw body
<add>#### Rendering Raw Body
<ide>
<ide> You can send a raw content back to the browser, without setting any content
<ide> type, by using the `:body` option to `render`:
<ide> time.
<ide> NOTE: Unless overridden, your response returned from this render option will be
<ide> `text/plain`, as that is the default content type of Action Dispatch response.
<ide>
<del>#### Rendering raw file
<add>#### Rendering Raw File
<ide>
<ide> Rails can render a raw file from an absolute path. This is useful for
<ide> conditionally rendering static files like error pages.
<ide> since an attacker could use this action to access security sensitive files in yo
<ide>
<ide> TIP: `send_file` is often a faster and better option if a layout isn't required.
<ide>
<del>#### Rendering objects
<add>#### Rendering Objects
<ide>
<ide> Rails can render objects responding to `:render_in`.
<ide>
<ide> end
<ide>
<ide> This would detect that there are no books with the specified ID, populate the `@books` instance variable with all the books in the model, and then directly render the `index.html.erb` template, returning it to the browser with a flash alert message to tell the user what happened.
<ide>
<del>### Using `head` To Build Header-Only Responses
<add>### Using `head` to Build Header-Only Responses
<ide>
<ide> The [`head`][] method can be used to send responses with only headers to the browser. The `head` method accepts a number or symbol (see [reference table](#the-status-option)) representing an HTTP status code. The options argument is interpreted as a hash of header names and values. For example, you can return only an error header:
<ide>
<ide><path>guides/source/plugins.md
<ide> Setup
<ide> Currently, Rails plugins are built as gems, _gemified plugins_. They can be shared across
<ide> different Rails applications using RubyGems and Bundler if desired.
<ide>
<del>### Generate a gemified plugin.
<add>### Generate a Gemified Plugin
<ide>
<ide> Rails ships with a `rails plugin new` command which creates a
<ide> skeleton for developing any kind of Rails extension with the ability
<ide><path>guides/source/rails_on_rack.md
<ide> To find out more about different `rackup` options, you can run:
<ide> $ rackup --help
<ide> ```
<ide>
<del>### Development and auto-reloading
<add>### Development and Auto-reloading
<ide>
<ide> Middlewares are loaded once and are not monitored for changes. You will have to restart the server for changes to be reflected in the running application.
<ide>
<ide><path>guides/source/routing.md
<ide> end
<ide>
<ide> Both the `matches?` method and the lambda gets the `request` object as an argument.
<ide>
<del>#### Constraints in a block form
<add>#### Constraints in a Block Form
<ide>
<ide> You can specify constraints in a block form. This is useful for when you need to apply the same rule to several routes. For example:
<ide>
<ide> video = Video.find_by(identifier: "Roman-Holiday")
<ide> edit_video_path(video) # => "/videos/Roman-Holiday/edit"
<ide> ```
<ide>
<del>Breaking up *very* large route file into multiple small ones:
<add>Breaking Up *Very* Large Route File into Multiple Small Ones:
<ide> -------------------------------------------------------
<ide>
<ide> If you work in a large application with thousands of routes, a single `config/routes.rb` file can become cumbersome and hard to read.
<ide> You can use the normal routing DSL inside the `admin.rb` routing file, but you *
<ide>
<ide> [`draw`]: https://api.rubyonrails.org/classes/ActionDispatch/Routing/Mapper/Resources.html#method-i-draw
<ide>
<del>### Don't use this feature unless you really need it
<add>### Don't Use This Feature Unless You Really Need It
<ide>
<ide> Having multiple routing files makes discoverability and understandability harder. For most applications - even those with a few hundred routes - it's easier for developers to have a single routing file. The Rails routing DSL already offers a way to break routes in an organized manner with `namespace` and `scope`.
<ide>
<ide><path>guides/source/security.md
<ide> INFO: _Injection is a class of attacks that introduce malicious code or paramete
<ide>
<ide> Injection is very tricky, because the same code or parameter can be malicious in one context, but totally harmless in another. A context can be a scripting, query, or programming language, the shell, or a Ruby/Rails method. The following sections will cover all important contexts where injection attacks may happen. The first section, however, covers an architectural decision in connection with Injection.
<ide>
<del>### Permitted lists versus Restricted lists
<add>### Permitted Lists Versus Restricted Lists
<ide>
<ide> NOTE: _When sanitizing, protecting, or verifying something, prefer permitted lists over restricted lists._
<ide>
<ide> system("/bin/echo","hello; rm *")
<ide> # prints "hello; rm *" and does not delete files
<ide> ```
<ide>
<del>#### Kernel#open's vulnerability
<add>#### Kernel#open's Vulnerability
<ide>
<ide> `Kernel#open` executes OS command if the argument starts with a vertical bar (`|`).
<ide>
<ide><path>guides/source/testing.md
<ide> After reading this guide, you will know:
<ide>
<ide> --------------------------------------------------------------------------------
<ide>
<del>Why Write Tests for your Rails Applications?
<add>Why Write Tests for Your Rails Applications?
<ide> --------------------------------------------
<ide>
<ide> Rails makes it super easy to write your tests. It starts by producing skeleton test code while you are creating your models and controllers.
<ide> Each environment's configuration can be modified similarly. In this case, we can
<ide>
<ide> NOTE: Your tests are run under `RAILS_ENV=test`.
<ide>
<del>### Rails meets Minitest
<add>### Rails Meets Minitest
<ide>
<ide> If you remember, we used the `bin/rails generate model` command in the
<ide> [Getting Started with Rails](getting_started.html) guide. We created our first
<ide> An assertion is a line of code that evaluates an object (or expression) for expe
<ide>
<ide> Every test may contain one or more assertions, with no restriction as to how many assertions are allowed. Only when all the assertions are successful will the test pass.
<ide>
<del>#### Your first failing test
<add>#### Your First Failing Test
<ide>
<ide> To see how a test failure is reported, you can add a failing test to the `article_test.rb` test case.
<ide>
<ide> By default, every Rails application has three environments: development, test, a
<ide>
<ide> A dedicated test database allows you to set up and interact with test data in isolation. This way your tests can mangle test data with confidence, without worrying about the data in the development or production databases.
<ide>
<del>### Maintaining the test database schema
<add>### Maintaining the Test Database Schema
<ide>
<ide> In order to run your tests, your test database will need to have the current
<ide> structure. The test helper checks whether your test database has any pending
<ide> Notice the `category` key of the `first` Article found in `fixtures/articles.yml
<ide>
<ide> NOTE: For associations to reference one another by name, you can use the fixture name instead of specifying the `id:` attribute on the associated fixtures. Rails will auto assign a primary key to be consistent between runs. For more information on this association behavior please read the [Fixtures API documentation](https://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html).
<ide>
<del>#### File attachment fixtures
<add>#### File Attachment Fixtures
<ide>
<ide> Like other Active Record-backed models, Active Storage attachment records
<ide> inherit from ActiveRecord::Base instances and can therefore be populated by
<ide> default. Loading involves three steps:
<ide>
<ide> TIP: In order to remove existing data from the database, Rails tries to disable referential integrity triggers (like foreign keys and check constraints). If you are getting annoying permission errors on running tests, make sure the database user has privilege to disable these triggers in testing environment. (In PostgreSQL, only superusers can disable all triggers. Read more about PostgreSQL permissions [here](http://blog.endpoint.com/2012/10/postgres-system-triggers-error.html)).
<ide>
<del>#### Fixtures are Active Record objects
<add>#### Fixtures are Active Record Objects
<ide>
<ide> Fixtures are instances of Active Record. As mentioned in point #3 above, you can access the object directly because it is automatically available as a method whose scope is local of the test case. For example:
<ide>
<ide> By default, system tests are run with the Selenium driver, using the Chrome
<ide> browser, and a screen size of 1400x1400. The next section explains how to
<ide> change the default settings.
<ide>
<del>### Changing the default settings
<add>### Changing the Default Settings
<ide>
<ide> Rails makes changing the default settings for system tests very simple. All
<ide> the setup is abstracted away so you can focus on writing your tests.
<ide> send a POST request to create the new article in the database.
<ide> We will be redirected back to the articles index page and there we assert
<ide> that the text from the new article's title is on the articles index page.
<ide>
<del>#### Testing for multiple screen sizes
<add>#### Testing for Multiple Screen Sizes
<ide>
<ide> If you want to test for mobile sizes on top of testing for desktop,
<ide> you can create another class that inherits from SystemTestCase and use in your
<ide> class PostsTest < MobileSystemTestCase
<ide> end
<ide> ```
<ide>
<del>#### Taking it further
<add>#### Taking It Further
<ide>
<ide> The beauty of system testing is that it is similar to integration testing in
<ide> that it tests the user's interaction with your controller, model, and view, but
<ide> When performing requests, we will have [`ActionDispatch::Integration::RequestHel
<ide>
<ide> If we need to modify the session, or state of our integration test, take a look at [`ActionDispatch::Integration::Session`](https://api.rubyonrails.org/classes/ActionDispatch/Integration/Session.html) to help.
<ide>
<del>### Implementing an integration test
<add>### Implementing an Integration Test
<ide>
<ide> Let's add an integration test to our blog application. We'll start with a basic workflow of creating a new blog article, to verify that everything is working properly.
<ide>
<ide> We will take a look at `assert_select` to query the resulting HTML of a request
<ide>
<ide> When we visit our root path, we should see `welcome/index.html.erb` rendered for the view. So this assertion should pass.
<ide>
<del>#### Creating articles integration
<add>#### Creating Articles Integration
<ide>
<ide> How about testing our ability to create a new article in our blog and see the resulting article.
<ide>
<ide> NOTE: Don't forget to call `follow_redirect!` if you plan to make subsequent req
<ide>
<ide> Finally we can assert that our response was successful and our new article is readable on the page.
<ide>
<del>#### Taking it further
<add>#### Taking It Further
<ide>
<ide> We were able to successfully test a very small workflow for visiting our blog and creating a new article. If we wanted to take this further we could add tests for commenting, removing articles, or editing comments. Integration tests are a great place to experiment with all kinds of use cases for our applications.
<ide>
<ide> Functional Tests for Your Controllers
<ide>
<ide> In Rails, testing the various actions of a controller is a form of writing functional tests. Remember your controllers handle the incoming web requests to your application and eventually respond with a rendered view. When writing functional tests, you are testing how your actions handle the requests and the expected result or response, in some cases an HTML view.
<ide>
<del>### What to include in your Functional Tests
<add>### What to Include in Your Functional Tests
<ide>
<ide> You should test for things such as:
<ide>
<ide> All of request types have equivalent methods that you can use. In a typical C.R.
<ide>
<ide> NOTE: Functional tests do not verify whether the specified request type is accepted by the action, we're more concerned with the result. Request tests exist for this use case to make your tests more purposeful.
<ide>
<del>### Testing XHR (AJAX) requests
<add>### Testing XHR (AJAX) Requests
<ide>
<ide> To test AJAX requests, you can specify the `xhr: true` option to `get`, `post`,
<ide> `patch`, `put`, and `delete` methods. For example:
<ide> class ArticlesControllerTest < ActionDispatch::IntegrationTest
<ide> end
<ide> ```
<ide>
<del>### Setting Headers and CGI variables
<add>### Setting Headers and CGI Variables
<ide>
<ide> [HTTP headers](https://tools.ietf.org/search/rfc2616#section-5.3)
<ide> and
<ide> get articles_url, headers: { "Content-Type": "text/plain" } # simulate the reque
<ide> get articles_url, headers: { "HTTP_REFERER": "http://example.com/home" } # simulate the request with custom env variable
<ide> ```
<ide>
<del>### Testing `flash` notices
<add>### Testing `flash` Notices
<ide>
<ide> If you remember from earlier, one of the Three Hashes of the Apocalypse was `flash`.
<ide>
<ide> Finished in 0.081972s, 12.1993 runs/s, 48.7972 assertions/s.
<ide> 1 runs, 4 assertions, 0 failures, 0 errors, 0 skips
<ide> ```
<ide>
<del>### Putting it together
<add>### Putting It Together
<ide>
<ide> At this point our Articles controller tests the `:index` as well as `:new` and `:create` actions. What about dealing with existing data?
<ide>
<ide> end
<ide>
<ide> Similar to other callbacks in Rails, the `setup` and `teardown` methods can also be used by passing a block, lambda, or method name as a symbol to call.
<ide>
<del>### Test helpers
<add>### Test Helpers
<ide>
<ide> To avoid code duplication, you can add your own test helpers.
<ide> Sign in helper can be a good example:
<ide> Starting with Rails 7, newly generated applications are configured that way by d
<ide>
<ide> If your project does not have continuous integration, you can still eager load in the test suite by calling `Rails.application.eager_load!`:
<ide>
<del>#### minitest
<add>#### Minitest
<ide>
<ide> ```ruby
<ide> require "test_helper"
<ide><path>guides/source/threading_and_code_execution.md
<ide> The Executor consists of two callbacks: `to_run` and `to_complete`. The Run
<ide> callback is called before the application code, and the Complete callback is
<ide> called after.
<ide>
<del>### Default callbacks
<add>### Default Callbacks
<ide>
<ide> In a default Rails application, the Executor callbacks are used to:
<ide>
<ide> directly wrapping code with methods like
<ide> `ActiveRecord::Base.connection_pool.with_connection`. The Executor replaces
<ide> these with a single more abstract interface.
<ide>
<del>### Wrapping application code
<add>### Wrapping Application Code
<ide>
<ide> If you're writing a library or component that will invoke application code, you
<ide> should wrap it with a call to the executor:
<ide><path>guides/source/webpacker.md
<ide> What Is Webpacker?
<ide>
<ide> Webpacker is a Rails wrapper around the [webpack](https://webpack.js.org) build system that provides a standard webpack configuration and reasonable defaults.
<ide>
<del>### What is webpack?
<add>### What is Webpack?
<ide>
<ide> The goal of webpack, or any front-end build system, is to allow you to write your front-end code in a way that is convenient for developers and then package that code in a way that is convenient for browsers. With webpack, you can manage JavaScript, CSS, and static assets like images or fonts. Webpack will allow you to write your code, reference other code in your application, transform your code, and combine your code into easily downloadable packs.
<ide>
<ide><path>guides/source/working_with_javascript_in_rails.md
<ide> your JavaScript.
<ide>
<ide> --------------------------------------------------------------------------------
<ide>
<del>Import maps
<add>Import Maps
<ide> -----------
<ide>
<ide> [Import maps](https://github.com/rails/importmap-rails) let you import JavaScript modules using | 41 |
Text | Text | update russian translation | bbb4de225ceea393e6b8d0dc439738f09c7a0ef2 | <ide><path>guide/russian/php/composer/index.md
<ide> localeTitle: Composer
<ide> ---
<ide> ## Composer
<ide>
<del>Composer - менеджер пакетов прикладного уровня для языка программирования PHP что обеспечивает стандартный формат для управления зависимостями в программном обеспечении и необходимыми библиотеками. Вы используете файл `composer.json` для настройки пакетов для проекта PHP, аналогично файлу `package.json` в проектах NodeJS.
<ide>
<del>### Установка Composer
<add>Composer - это менеджер пакетов для пакетов PHP. Используйте файл `composer.json` для настройки зависимостей проекта PHP, аналогично файлу `package.json` в проектах NodeJS.
<ide>
<del>Чтобы установить Composer, вам сначала нужно загрузить его с [getcomposer.org](https://getcomposer.org/download/) .
<add>### Установить Composer
<add>
<add>
<add>Чтобы установить Composer, вам сначала нужно загрузить его с [getcomposer.org](https://getcomposer.org/download/).
<ide>
<ide> Затем вы можете установить Composer локально или глобально.
<ide>
<ide> ### Установка пакетов
<ide>
<del>Установите пакеты с `composer install`. Composer установит пакеты, перечисленные в файле `composer.json` в папку поставщика /.
<add>
<add>Установите пакеты с `composer install` . Composer установит пакеты, перечисленные в файле `composer.json` в папку поставщика `vendor/`.
<ide>
<ide> ```shell
<ide> composer install
<ide> Composer - менеджер пакетов прикладного уровня
<ide> ### Дополнительная информация:
<ide>
<ide> * Веб-сайт Composer: [getcomposer.org](https://getcomposer.org/)
<del>* Composer GitHub: [композитор / getcomposer](https://github.com/composer/getcomposer.org)
<del>* Популярный репозиторий PHP-пакетов, который использует Composer: [Packagist](https://packagist.org/)
<add>* Composer на GitHub: [Composer / getcomposer](https://github.com/composer/getcomposer.org)
<add>* Популярный репозиторий PHP-пакетов, который Composer использует для поиска пакетов: [Packagist](https://packagist.org/)
<add> | 1 |
Ruby | Ruby | fix key names in abstract_controller_test.rb | f94d715ad5064947e06f6185a314ccebd3f6475f | <ide><path>actionview/test/actionpack/abstract/abstract_controller_test.rb
<ide> def _prefixes
<ide>
<ide> def render(options = {})
<ide> if options.is_a?(String)
<del> options = { _template_name: options }
<add> options = { action: options }
<ide> end
<ide> super
<ide> end
<ide> def render(options = {})
<ide>
<ide> class Me2 < RenderingController
<ide> def index
<del> render "index.erb"
<add> render "index"
<ide> end
<ide>
<ide> def index_to_string
<ide> def index_to_string
<ide>
<ide> def action_with_ivars
<ide> @my_ivar = "Hello"
<del> render "action_with_ivars.erb"
<add> render "action_with_ivars"
<ide> end
<ide>
<ide> def naked_render
<ide> def self.layout(formats)
<ide> end
<ide>
<ide> def render_to_body(options = {})
<del> options[:_layout] = options[:layout] || _default_layout({})
<add> options[:layout] = options[:layout] || _default_layout({})
<ide> super
<ide> end
<ide> end | 1 |
Python | Python | use default tf layout i.e. 'channels_first' | 1fec906ab0b047546fffd5ae4d67a688eac7158b | <ide><path>examples/mnist_acgan.py
<ide> import numpy as np
<ide>
<ide> np.random.seed(1337)
<del>
<del>K.set_image_data_format('channels_first')
<del>
<ide> num_classes = 10
<ide>
<ide>
<ide> def build_generator(latent_size):
<ide> # we will map a pair of (z, L), where z is a latent vector and L is a
<del> # label drawn from P_c, to image space (..., 1, 28, 28)
<add> # label drawn from P_c, to image space (..., 28, 28, 1)
<ide> cnn = Sequential()
<ide>
<ide> cnn.add(Dense(1024, input_dim=latent_size, activation='relu'))
<ide> cnn.add(Dense(128 * 7 * 7, activation='relu'))
<del> cnn.add(Reshape((128, 7, 7)))
<add> cnn.add(Reshape((7, 7, 128)))
<ide>
<del> # upsample to (..., 14, 14)
<add> # upsample to (14, 14, ...)
<ide> cnn.add(UpSampling2D(size=(2, 2)))
<ide> cnn.add(Conv2D(256, 5, padding='same',
<ide> activation='relu',
<ide> kernel_initializer='glorot_normal'))
<ide>
<del> # upsample to (..., 28, 28)
<add> # upsample to (28, 28, ...)
<ide> cnn.add(UpSampling2D(size=(2, 2)))
<ide> cnn.add(Conv2D(128, 5, padding='same',
<ide> activation='relu',
<ide> def build_discriminator():
<ide> cnn = Sequential()
<ide>
<ide> cnn.add(Conv2D(32, 3, padding='same', strides=2,
<del> input_shape=(1, 28, 28)))
<add> input_shape=(28, 28, 1)))
<ide> cnn.add(LeakyReLU())
<ide> cnn.add(Dropout(0.3))
<ide>
<ide> def build_discriminator():
<ide>
<ide> cnn.add(Flatten())
<ide>
<del> image = Input(shape=(1, 28, 28))
<add> image = Input(shape=(28, 28, 1))
<ide>
<ide> features = cnn(image)
<ide>
<ide> def build_discriminator():
<ide> loss=['binary_crossentropy', 'sparse_categorical_crossentropy']
<ide> )
<ide>
<del> # get our mnist data, and force it to be of shape (..., 1, 28, 28) with
<add> # get our mnist data, and force it to be of shape (..., 28, 28, 1) with
<ide> # range [-1, 1]
<ide> (x_train, y_train), (x_test, y_test) = mnist.load_data()
<ide> x_train = (x_train.astype(np.float32) - 127.5) / 127.5 | 1 |
Text | Text | fix spelling and punctuation | 4e9cfa940428c4a0258ae5630174769e1a5f8595 | <ide><path>guide/english/blockchain/types/index.md
<ide> Example : Bitcoin, Ethereum, Litecoin
<ide>
<ide> In this type of blockchain the problem of the private blockchain is solved. The issue of sole autonomy by vesting the power in the hands of an individual is tackeled by making more than one person in charge.
<ide>
<del>A group of people come together to form a consortium or federation. Which in turn works together for the common good.
<add>A group of people come together to form a consortium or federation, which in turn works together for the common good.
<ide>
<ide> Example : [Energy Web Foundation](http://energyweb.org/)
<ide> | 1 |
Ruby | Ruby | remove unused variable | ea84b0c6183949750f0801db50d22e58d62ec024 | <ide><path>activesupport/lib/active_support/core_ext/date_and_time/calculations.rb
<ide> def end_of_month
<ide> # Returns a new date/time representing the end of the year.
<ide> # DateTime objects will have a time set to 23:59:59.
<ide> def end_of_year
<del> result = change(:month => 12).end_of_month
<add> change(:month => 12).end_of_month
<ide> end
<ide> alias :at_end_of_year :end_of_year
<ide> | 1 |
Ruby | Ruby | convert preflight test to spec | 451e7c3aebf77be1d353712d202e2e915b8962f0 | <add><path>Library/Homebrew/cask/spec/cask/dsl/preflight_spec.rb
<del><path>Library/Homebrew/cask/test/cask/dsl/preflight_test.rb
<del>require "test_helper"
<add>require "spec_helper"
<ide>
<ide> describe Hbc::DSL::Preflight do
<ide> let(:cask) { Hbc::CaskLoader.load_from_file(TEST_FIXTURE_DIR/"cask/Casks/basic-cask.rb") } | 1 |
Javascript | Javascript | fix typo in reactdomfiberentry | 15d3f2852f589bc20d3679089c59ae9eb4d365d1 | <ide><path>src/renderers/dom/fiber/ReactDOMFiberEntry.js
<ide> var ReactDOMFiber = {
<ide> callback: ?Function,
<ide> ) {
<ide> if (ReactFeatureFlags.disableNewFiberFeatures) {
<del> // Top-level check occurs here instead of inside child reconciler because
<add> // Top-level check occurs here instead of inside child reconciler
<ide> // because requirements vary between renderers. E.g. React Art
<ide> // allows arrays.
<ide> if (!isValidElement(element)) { | 1 |
Java | Java | remove use of cast where avoidable | 9277b470404ce1dce790fa82e684c9d8220940e7 | <ide><path>spring-web/src/main/java/org/springframework/web/server/handler/ExceptionHandlingWebHandler.java
<ide> /*
<del> * Copyright 2002-2019 the original author or authors.
<add> * Copyright 2002-2020 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> public Mono<Void> handle(ServerWebExchange exchange, Throwable ex) {
<ide> String query = StringUtils.hasText(rawQuery) ? "?" + rawQuery : "";
<ide> HttpMethod httpMethod = request.getMethod();
<ide> String description = "HTTP " + httpMethod + " \"" + request.getPath() + query + "\"";
<del> return Mono.error(ex).checkpoint(description + " [ExceptionHandlingWebHandler]").cast(Void.class);
<add> return Mono.<Void>error(ex).checkpoint(description + " [ExceptionHandlingWebHandler]");
<ide> }
<ide> }
<ide>
<ide><path>spring-web/src/main/java/org/springframework/web/server/session/InMemoryWebSessionStore.java
<ide> public Mono<WebSession> createWebSession() {
<ide> Instant now = this.clock.instant();
<ide> this.expiredSessionChecker.checkIfNecessary(now);
<ide>
<del> return Mono.fromSupplier(() -> new InMemoryWebSession(now))
<del> .subscribeOn(Schedulers.boundedElastic())
<del> .cast(WebSession.class);
<add> return Mono.<WebSession>fromSupplier(() -> new InMemoryWebSession(now))
<add> .subscribeOn(Schedulers.boundedElastic());
<ide> }
<ide>
<ide> @Override | 2 |
PHP | PHP | apply fixes from styleci | d0ed106a5d5ecebaf7ded27153d630222284422a | <ide><path>tests/Integration/Routing/FluentRoutingTest.php
<ide> public function test_middleware_run_when_registered_as_array_or_params()
<ide> }
<ide> }
<ide>
<del>
<ide> class Middleware
<ide> {
<ide> public function handle($request, $next) | 1 |
Ruby | Ruby | destructure use_react_native! parameters and doc | 79a37e5a88e179090ade7145a453a46719c87b3f | <ide><path>scripts/react_native_pods.rb
<ide>
<ide> $START_TIME = Time.now.to_i
<ide>
<del>def use_react_native! (options={})
<add># Function that setup all the react native dependencies
<add>#
<add># Parameters
<add># - path: path to react_native installation.
<add># - fabric_enabled: whether fabric should be enabled or not.
<add># - new_arch_enabled: whether the new architecture should be enabled or not.
<add># - production: whether the dependencies must be installed to target a Debug or a Release build.
<add># - hermes_enabled: whether Hermes should be enabled or not.
<add># - flipper_configuration: The configuration to use for flipper.
<add># - app_path: path to the React Native app. Required by the New Architecture.
<add># - config_file_dir: directory of the `package.json` file, required by the New Architecture.
<add>def use_react_native! (
<add> path: "../node_modules/react-native",
<add> fabric_enabled: false,
<add> new_arch_enabled: ENV['RCT_NEW_ARCH_ENABLED'] == '1',
<add> production: false,
<add> hermes_enabled: true,
<add> flipper_configuration: FlipperConfiguration.disabled,
<add> app_path: '..',
<add> config_file_dir: '')
<add>
<add> prefix = path
<add>
<ide> # The version of folly that must be used
<ide> folly_version = '2021.07.22.00'
<ide>
<del> # The prefix to react-native
<del> prefix = options[:path] ||= "../node_modules/react-native"
<del>
<del> # Include Fabric dependencies
<del> fabric_enabled = options[:fabric_enabled] ||= false
<del>
<del> # New arch enabled
<del> new_arch_enabled = ENV['RCT_NEW_ARCH_ENABLED'] == '1'
<del>
<del> # Include DevSupport dependency
<del> production = options[:production] ||= false
<del>
<del> # Include Hermes dependencies
<del> hermes_enabled = options[:hermes_enabled] != nil ? options[:hermes_enabled] : true
<del>
<del> flipper_configuration = options[:flipper_configuration] ||= FlipperConfiguration.disabled
<del>
<ide> ReactNativePodsUtils.warn_if_not_on_arm64()
<ide>
<ide> # The Pods which should be included in all projects
<ide> def use_react_native! (options={})
<ide> pod 'RCT-Folly', :podspec => "#{prefix}/third-party-podspecs/RCT-Folly.podspec", :modular_headers => true
<ide>
<ide> run_codegen!(
<del> options[:app_path],
<del> options[:config_file_dir],
<add> app_path,
<add> config_file_dir,
<ide> :new_arch_enabled => new_arch_enabled,
<ide> :disable_codegen => ENV['DISABLE_CODEGEN'] == '1',
<ide> :react_native_path => prefix,
<ide> def use_react_native! (options={})
<ide> end
<ide> end
<ide>
<add># It returns the default flags.
<ide> def get_default_flags()
<ide> return ReactNativePodsUtils.get_default_flags()
<ide> end
<ide>
<add># It installs the flipper dependencies into the project.
<add>#
<add># Parameters
<add># - versions: a dictionary of Flipper Library -> Versions that can be used to customize which version of Flipper to install.
<add># - configurations: an array of configuration where to install the dependencies.
<ide> def use_flipper!(versions = {}, configurations: ['Debug'])
<ide> Pod::UI.warn "use_flipper is deprecated, use the flipper_configuration option in the use_react_native function"
<ide> use_flipper_pods(versions, :configurations => configurations)
<ide> end
<ide>
<add># Function that executes after React Native has been installed to configure some flags and build settings.
<add>#
<add># Parameters
<add># - installer: the Cocoapod object that allows to customize the project.
<add># - react_native_path: path to React Native.
<add># - mac_catalyst_enabled: whether we are running the Pod on a Mac Catalyst project or not.
<ide> def react_native_post_install(installer, react_native_path = "../node_modules/react-native", mac_catalyst_enabled: false)
<ide> ReactNativePodsUtils.apply_mac_catalyst_patches(installer) if mac_catalyst_enabled
<ide> | 1 |
PHP | PHP | unskip required attribute tests for input() | 450d3c172d46c5ed8562e28dacce7e82a9cd44b0 | <ide><path>src/View/Helper/FormHelper.php
<ide> public function inputs($fields = null, $blacklist = null, $options = array()) {
<ide> * - `type` - Force the type of widget you want. e.g. `type => 'select'`
<ide> * - `label` - Either a string label, or an array of options for the label. See FormHelper::label().
<ide> * - `div` - Either `false` to disable the div, or an array of options for the div.
<del> * See HtmlHelper::div() for more options.
<ide> * - `options` - For widgets that take options e.g. radio, select.
<ide> * - `error` - Control the error message that is produced. Set to `false` to disable any kind of error reporting (field
<ide> * error and error messages).
<ide> * - `empty` - String or boolean to enable empty select box options.
<del> * - `before` - Content to place before the label + input.
<del> * - `after` - Content to place after the label + input.
<del> * - `between` - Content to place between the label + input.
<del> * - `format` - Format template for element order. Any element that is not in the array, will not be in the output.
<del> * - Default input format order: array('before', 'label', 'between', 'input', 'after', 'error')
<del> * - Default checkbox format order: array('before', 'input', 'between', 'label', 'after', 'error')
<del> * - Hidden input will not be formatted
<del> * - Radio buttons cannot have the order of input and label elements controlled with these settings.
<ide> *
<ide> * @param string $fieldName This should be "Modelname.fieldname"
<ide> * @param array $options Each type of input takes different options.
<ide> * @return string Completed form widget.
<ide> * @link http://book.cakephp.org/2.0/en/core-libraries/helpers/form.html#creating-form-elements
<ide> */
<del> public function input($fieldName, $options = array()) {
<add> public function input($fieldName, $options = []) {
<ide> $options += [
<ide> 'type' => null,
<ide> 'label' => null,
<ide> 'error' => null,
<ide> 'options' => null,
<add> 'required' => null,
<ide> 'templates' => []
<ide> ];
<ide> $options = $this->_parseOptions($fieldName, $options);
<ide> public function input($fieldName, $options = array()) {
<ide> $originalTemplates = $this->templates();
<ide> $this->templates($options['templates']);
<ide> unset($options['templates']);
<add>
<ide> $label = $this->_getLabel($fieldName, $options);
<ide> if ($options['type'] !== 'radio') {
<ide> unset($options['label']);
<ide> public function input($fieldName, $options = array()) {
<ide> $result = $this->formatTemplate($template, [
<ide> 'content' => $result,
<ide> 'error' => $error,
<del> 'required' => null,
<add> 'required' => $options['required'] ? ' required' : '',
<ide> 'type' => $options['type'],
<ide> ]);
<ide> }
<ide> protected function _optionsOptions($fieldName, $options) {
<ide> */
<ide> protected function _magicOptions($fieldName, $options, $allowOverride) {
<ide> $context = $this->_getContext();
<add>
<add> if (!isset($options['required'])) {
<add> $options['required'] = $context->isRequired($fieldName);
<add> }
<add>
<ide> $type = $context->type($fieldName);
<ide> $fieldDef = $context->attributes($fieldName);
<ide>
<ide><path>tests/TestCase/View/Helper/FormHelperTest.php
<ide> public function testHtml5InputException() {
<ide> }
<ide>
<ide> /**
<del> * Tests that the 'on' key validates as expected on create
<add> * Tests that formhelper sets required attributes.
<ide> *
<ide> * @return void
<ide> */
<del> public function testRequiredOnCreate() {
<del> $this->markTestIncomplete('Need to revisit once models work again.');
<del> $this->Form->create('Contact');
<del>
<del> $result = $this->Form->input('Contact.imrequiredonupdate');
<del> $expected = array(
<del> 'div' => array('class' => 'input text'),
<del> 'label' => array('for' => 'ContactImrequiredonupdate'),
<del> 'Imrequiredonupdate',
<del> '/label',
<del> 'input' => array(
<del> 'type' => 'text', 'name' => 'Contact[imrequiredonupdate]',
<del> 'id' => 'ContactImrequiredonupdate'
<del> ),
<del> '/div'
<del> );
<del> $this->assertTags($result, $expected);
<add> public function testRequiredAttribute() {
<add> $this->article['required'] = [
<add> 'title' => true,
<add> 'body' => false,
<add> ];
<add> $this->Form->create($this->article);
<ide>
<del> $result = $this->Form->input('Contact.imrequiredoncreate');
<add> $result = $this->Form->input('title');
<ide> $expected = array(
<ide> 'div' => array('class' => 'input text required'),
<del> 'label' => array('for' => 'ContactImrequiredoncreate'),
<del> 'Imrequiredoncreate',
<add> 'label' => array('for' => 'title'),
<add> 'Title',
<ide> '/label',
<ide> 'input' => array(
<del> 'type' => 'text', 'name' => 'Contact[imrequiredoncreate]',
<del> 'id' => 'ContactImrequiredoncreate',
<del> 'required' => 'required'
<add> 'type' => 'text',
<add> 'name' => 'title',
<add> 'id' => 'title',
<add> 'required' => 'required',
<ide> ),
<ide> '/div'
<ide> );
<ide> $this->assertTags($result, $expected);
<ide>
<del> $result = $this->Form->input('Contact.imrequiredonboth');
<del> $expected = array(
<del> 'div' => array('class' => 'input text required'),
<del> 'label' => array('for' => 'ContactImrequiredonboth'),
<del> 'Imrequiredonboth',
<del> '/label',
<del> 'input' => array(
<del> 'type' => 'text', 'name' => 'Contact[imrequiredonboth]',
<del> 'id' => 'ContactImrequiredonboth',
<del> 'required' => 'required'
<del> ),
<del> '/div'
<del> );
<del> $this->assertTags($result, $expected);
<add> $result = $this->Form->input('title', ['required' => false]);
<add> $this->assertNotContains('required', $result);
<ide>
<del> $this->Form->inputDefaults(array('required' => false));
<del> $result = $this->Form->input('Contact.imrequired');
<add> $result = $this->Form->input('body');
<ide> $expected = array(
<ide> 'div' => array('class' => 'input text'),
<del> 'label' => array('for' => 'ContactImrequired'),
<del> 'Imrequired',
<add> 'label' => array('for' => 'body'),
<add> 'Body',
<ide> '/label',
<ide> 'input' => array(
<del> 'type' => 'text', 'name' => 'Contact[imrequired]',
<del> 'id' => 'ContactImrequired'
<del> ),
<del> '/div'
<del> );
<del> $this->assertTags($result, $expected);
<del>
<del> $result = $this->Form->input('Contact.imrequired', array('required' => false));
<del> $this->assertTags($result, $expected);
<del>
<del> $result = $this->Form->input('Contact.imrequired', array('required' => true));
<del> $expected = array(
<del> 'div' => array('class' => 'input text required'),
<del> 'label' => array('for' => 'ContactImrequired'),
<del> 'Imrequired',
<del> '/label',
<del> 'input' => array(
<del> 'required' => 'required', 'type' => 'text', 'name' => 'data[Contact][imrequired]',
<del> 'id' => 'ContactImrequired'
<add> 'type' => 'text',
<add> 'name' => 'body',
<add> 'id' => 'body',
<ide> ),
<ide> '/div'
<ide> );
<ide> $this->assertTags($result, $expected);
<ide>
<del> $result = $this->Form->input('Contact.imrequired', array('required' => null));
<del> $this->assertTags($result, $expected);
<add> $result = $this->Form->input('body', ['required' => true]);
<add> $this->assertContains('required', $result);
<ide> }
<ide>
<ide> /** | 2 |
Ruby | Ruby | replace compound `unless` with equivalent `if` | 02c207a9ec6ccec1899cf8428a05bb2682235f90 | <ide><path>Library/Homebrew/extend/git_repository.rb
<ide> def git?
<ide> # Gets the URL of the Git origin remote.
<ide> sig { returns(T.nilable(String)) }
<ide> def git_origin
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "config", "--get", "remote.origin.url", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Sets the URL of the Git origin remote.
<ide> sig { params(origin: String).returns(T.nilable(T::Boolean)) }
<ide> def git_origin=(origin)
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> safe_system "git", "remote", "set-url", "origin", origin, chdir: self
<ide> end
<ide>
<ide> # Gets the full commit hash of the HEAD commit.
<ide> sig { returns(T.nilable(String)) }
<ide> def git_head
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "rev-parse", "--verify", "-q", "HEAD", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Gets a short commit hash of the HEAD commit.
<ide> sig { params(length: T.nilable(Integer)).returns(T.nilable(String)) }
<del> def git_short_head(length: 4)
<del> return unless git? && Utils::Git.available?
<add> def git_short_head(length: nil)
<add> return if !git? || !Utils::Git.available?
<ide>
<del> Utils.popen_read("git", "rev-parse", "--short#{length&.to_s&.prepend("=")}",
<del> "--verify", "-q", "HEAD", chdir: self).chomp.presence
<add> short_arg = length&.to_s&.prepend("=")
<add> Utils.popen_read("git", "rev-parse", "--short#{short_arg}", "--verify", "-q", "HEAD", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Gets the relative date of the last commit, e.g. "1 hour ago"
<ide> sig { returns(T.nilable(String)) }
<ide> def git_last_commit
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "show", "-s", "--format=%cr", "HEAD", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Gets the name of the currently checked-out branch, or HEAD if the repository is in a detached HEAD state.
<ide> sig { returns(T.nilable(String)) }
<ide> def git_branch
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "rev-parse", "--abbrev-ref", "HEAD", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Gets the name of the default origin HEAD branch.
<ide> sig { returns(T.nilable(String)) }
<ide> def git_origin_branch
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "symbolic-ref", "-q", "--short", "refs/remotes/origin/HEAD", chdir: self)
<ide> .chomp.presence&.split("/")&.last
<ide> def git_default_origin_branch?
<ide> # Returns the date of the last commit, in YYYY-MM-DD format.
<ide> sig { returns(T.nilable(String)) }
<ide> def git_last_commit_date
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "show", "-s", "--format=%cd", "--date=short", "HEAD", chdir: self).chomp.presence
<ide> end
<ide>
<ide> # Gets the full commit message of the specified commit, or of the HEAD commit if unspecified.
<ide> sig { params(commit: String).returns(T.nilable(String)) }
<ide> def git_commit_message(commit = "HEAD")
<del> return unless git? && Utils::Git.available?
<add> return if !git? || !Utils::Git.available?
<ide>
<ide> Utils.popen_read("git", "log", "-1", "--pretty=%B", commit, "--", chdir: self, err: :out).strip.presence
<ide> end
<ide><path>Library/Homebrew/tap.rb
<ide> def git_head
<ide> def git_short_head
<ide> raise TapUnavailableError, name unless installed?
<ide>
<del> path.git_short_head
<add> path.git_short_head(length: 4)
<ide> end
<ide>
<ide> # Time since last git commit for this {Tap}. | 2 |
Text | Text | fix another typo in docker-build man page | f0c9818eba0a9220766b020612f3b1d63c7d9c80 | <ide><path>docs/man/docker-build.1.md
<ide> no hard rules here but it is best to give the names consideration.
<ide>
<ide> The **-t**/**--tag** flag is used to rename an image. Here are some examples:
<ide>
<del>Though it is not a good practice, image names can be arbtrary:
<add>Though it is not a good practice, image names can be arbitrary:
<ide>
<ide> docker build -t myimage .
<ide> | 1 |
PHP | PHP | fix cs warnings in function docblocks | 5f51f3aa09fef91cb51e043f12309ca0a90ad1e7 | <ide><path>src/Collection/functions.php
<ide>
<ide> if (!function_exists('collection')) {
<ide> /**
<del> * Returns a new Cake\Collection\Collection object wrapping the passed argument
<add> * Returns a new Cake\Collection\Collection object wrapping the passed argument.
<ide> *
<del> * @param \Traversable|array $items The items from which the collection will be built
<add> * @param \Traversable|array $items The items from which the collection will be built.
<ide> * @return \Cake\Collection\Collection
<ide> */
<ide> function collection($items)
<ide><path>src/Core/functions.php
<ide>
<ide> if (!defined('DS')) {
<ide> /**
<del> * Define DS as short form of DIRECTORY_SEPARATOR.
<del> */
<add> * Define DS as short form of DIRECTORY_SEPARATOR.
<add> */
<ide> define('DS', DIRECTORY_SEPARATOR);
<ide>
<ide> }
<ide> * @param string|array|object $text Text to wrap through htmlspecialchars. Also works with arrays, and objects.
<ide> * Arrays will be mapped and have all their elements escaped. Objects will be string cast if they
<ide> * implement a `__toString` method. Otherwise the class name will be used.
<del> * @param bool $double Encode existing html entities
<add> * @param bool $double Encode existing html entities.
<ide> * @param string $charset Character set to use when escaping. Defaults to config value in `mb_internal_encoding()`
<del> * or 'UTF-8'
<del> * @return string Wrapped text
<add> * or 'UTF-8'.
<add> * @return string Wrapped text.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#h
<ide> */
<ide> function h($text, $double = true, $charset = null)
<ide> function h($text, $double = true, $charset = null)
<ide> * @param string $name The name you want to plugin split.
<ide> * @param bool $dotAppend Set to true if you want the plugin to have a '.' appended to it.
<ide> * @param string $plugin Optional default plugin to use if no plugin is found. Defaults to null.
<del> * @return array Array with 2 indexes. 0 => plugin name, 1 => class name
<add> * @return array Array with 2 indexes. 0 => plugin name, 1 => class name.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#pluginSplit
<ide> */
<ide> function pluginSplit($name, $dotAppend = false, $plugin = null)
<ide> function pluginSplit($name, $dotAppend = false, $plugin = null)
<ide> /**
<ide> * Split the namespace from the classname.
<ide> *
<del> * Commonly used like `list($namespace, $className) = namespaceSplit($class);`
<add> * Commonly used like `list($namespace, $className) = namespaceSplit($class);`.
<ide> *
<del> * @param string $class The full class name, ie `Cake\Core\App`
<del> * @return array Array with 2 indexes. 0 => namespace, 1 => classname
<add> * @param string $class The full class name, ie `Cake\Core\App`.
<add> * @return array Array with 2 indexes. 0 => namespace, 1 => classname.
<ide> */
<ide> function namespaceSplit($class)
<ide> {
<ide> function namespaceSplit($class)
<ide>
<ide> if (!function_exists('pr')) {
<ide> /**
<del> * print_r() convenience function
<add> * print_r() convenience function.
<ide> *
<ide> * In terminals this will act similar to using print_r() directly, when not run on cli
<ide> * print_r() will also wrap <pre> tags around the output of given variable. Similar to debug().
<ide> *
<del> * @param mixed $var Variable to print out
<add> * @param mixed $var Variable to print out.
<ide> * @return void
<ide> * @see debug()
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#pr
<del> * @see debug()
<ide> */
<ide> function pr($var)
<ide> {
<ide> function pr($var)
<ide> * In terminals this will act similar to using json_encode() with JSON_PRETTY_PRINT directly, when not run on cli
<ide> * will also wrap <pre> tags around the output of given variable. Similar to pr().
<ide> *
<del> * @param mixed $var Variable to print out
<add> * @param mixed $var Variable to print out.
<ide> * @return void
<ide> * @see pr()
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#pj
<ide><path>src/I18n/functions.php
<ide> /**
<ide> * Returns a translated string if one is found; Otherwise, the submitted message.
<ide> *
<del> * @param string $singular Text to translate
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return mixed translated string
<add> * @param string $singular Text to translate.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return mixed Translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__
<ide> */
<ide> function __($singular, $args = null)
<ide> function __($singular, $args = null)
<ide> * Returns correct plural form of message identified by $singular and $plural for count $count.
<ide> * Some languages have more than one form for plural messages dependent on the count.
<ide> *
<del> * @param string $singular Singular text to translate
<del> * @param string $plural Plural text
<del> * @param int $count Count
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return mixed plural form of translated string
<add> * @param string $singular Singular text to translate.
<add> * @param string $plural Plural text.
<add> * @param int $count Count.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return mixed Plural form of translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__n
<ide> */
<ide> function __n($singular, $plural, $count, $args = null)
<ide> function __n($singular, $plural, $count, $args = null)
<ide> /**
<ide> * Allows you to override the current domain for a single message lookup.
<ide> *
<del> * @param string $domain Domain
<del> * @param string $msg String to translate
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return string translated string
<add> * @param string $domain Domain.
<add> * @param string $msg String to translate.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return string Translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__d
<ide> */
<ide> function __d($domain, $msg, $args = null)
<ide> function __d($domain, $msg, $args = null)
<ide> * Returns correct plural form of message identified by $singular and $plural for count $count
<ide> * from domain $domain.
<ide> *
<del> * @param string $domain Domain
<del> * @param string $singular Singular string to translate
<del> * @param string $plural Plural
<del> * @param int $count Count
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return string plural form of translated string
<add> * @param string $domain Domain.
<add> * @param string $singular Singular string to translate.
<add> * @param string $plural Plural.
<add> * @param int $count Count.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return string Plural form of translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__dn
<ide> */
<ide> function __dn($domain, $singular, $plural, $count, $args = null)
<ide> function __dn($domain, $singular, $plural, $count, $args = null)
<ide> * The context is a unique identifier for the translations string that makes it unique
<ide> * within the same domain.
<ide> *
<del> * @param string $context Context of the text
<del> * @param string $singular Text to translate
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return mixed translated string
<add> * @param string $context Context of the text.
<add> * @param string $singular Text to translate.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return mixed Translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__
<ide> */
<ide> function __x($context, $singular, $args = null)
<ide> function __x($context, $singular, $args = null)
<ide> * The context is a unique identifier for the translations string that makes it unique
<ide> * within the same domain.
<ide> *
<del> * @param string $context Context of the text
<del> * @param string $singular Singular text to translate
<del> * @param string $plural Plural text
<del> * @param int $count Count
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return mixed plural form of translated string
<add> * @param string $context Context of the text.
<add> * @param string $singular Singular text to translate.
<add> * @param string $plural Plural text.
<add> * @param int $count Count.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return mixed Plural form of translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__xn
<ide> */
<ide> function __xn($context, $singular, $plural, $count, $args = null)
<ide> function __xn($context, $singular, $plural, $count, $args = null)
<ide> * The context is a unique identifier for the translations string that makes it unique
<ide> * within the same domain.
<ide> *
<del> * @param string $domain Domain
<del> * @param string $context Context of the text
<del> * @param string $msg String to translate
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return string translated string
<add> * @param string $domain Domain.
<add> * @param string $context Context of the text.
<add> * @param string $msg String to translate.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return string Translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__dx
<ide> */
<ide> function __dx($domain, $context, $msg, $args = null)
<ide> function __dx($domain, $context, $msg, $args = null)
<ide> * The context is a unique identifier for the translations string that makes it unique
<ide> * within the same domain.
<ide> *
<del> * @param string $domain Domain
<del> * @param string $context Context of the text
<del> * @param string $singular Singular text to translate
<del> * @param string $plural Plural text
<del> * @param int $count Count
<del> * @param mixed $args Array with arguments or multiple arguments in function
<del> * @return mixed plural form of translated string
<add> * @param string $domain Domain.
<add> * @param string $context Context of the text.
<add> * @param string $singular Singular text to translate.
<add> * @param string $plural Plural text.
<add> * @param int $count Count.
<add> * @param mixed $args Array with arguments or multiple arguments in function.
<add> * @return mixed Plural form of translated string.
<ide> * @link http://book.cakephp.org/3.0/en/core-libraries/global-constants-and-functions.html#__dxn
<ide> */
<ide> function __dxn($domain, $context, $singular, $plural, $count, $args = null) | 3 |
Go | Go | add testcase with api/errors/errors_test.go | f157b54606899057360be60ec0e4826d8382c12b | <ide><path>api/errors/errors_test.go
<add>package errors
<add>
<add>import (
<add> "fmt"
<add> "github.com/stretchr/testify/assert"
<add> "net/http"
<add> "testing"
<add>)
<add>
<add>func newError(errorname string) error {
<add>
<add> return fmt.Errorf("test%v", errorname)
<add>}
<add>
<add>func TestErrors(t *testing.T) {
<add> errmsg := newError("apiError")
<add> err := apiError{
<add> error: errmsg,
<add> statusCode: 0,
<add> }
<add> assert.Equal(t, err.HTTPErrorStatusCode(), err.statusCode)
<add>
<add> errmsg = newError("ErrorWithStatusCode")
<add> errcode := 1
<add> serr := NewErrorWithStatusCode(errmsg, errcode)
<add> apierr, ok := serr.(apiError)
<add> if !ok {
<add> t.Fatal("excepted err is apiError type")
<add> }
<add> assert.Equal(t, errcode, apierr.statusCode)
<add>
<add> errmsg = newError("NewBadRequestError")
<add> baderr := NewBadRequestError(errmsg)
<add> apierr, ok = baderr.(apiError)
<add> if !ok {
<add> t.Fatal("excepted err is apiError type")
<add> }
<add> assert.Equal(t, http.StatusBadRequest, apierr.statusCode)
<add>
<add> errmsg = newError("RequestForbiddenError")
<add> ferr := NewRequestForbiddenError(errmsg)
<add> apierr, ok = ferr.(apiError)
<add> if !ok {
<add> t.Fatal("excepted err is apiError type")
<add> }
<add> assert.Equal(t, http.StatusForbidden, apierr.statusCode)
<add>
<add> errmsg = newError("RequestNotFoundError")
<add> nerr := NewRequestNotFoundError(errmsg)
<add> apierr, ok = nerr.(apiError)
<add> if !ok {
<add> t.Fatal("excepted err is apiError type")
<add> }
<add> assert.Equal(t, http.StatusNotFound, apierr.statusCode)
<add>
<add> errmsg = newError("RequestConflictError")
<add> cerr := NewRequestConflictError(errmsg)
<add> apierr, ok = cerr.(apiError)
<add> if !ok {
<add> t.Fatal("excepted err is apiError type")
<add> }
<add> assert.Equal(t, http.StatusConflict, apierr.statusCode)
<add>
<add>} | 1 |
Javascript | Javascript | implement new error handling | 8272c225b12eb08f60d41791c68f3f1a28caec20 | <ide><path>lib/internal/errors.js
<ide> const kCode = Symbol('code');
<ide> const kInfo = Symbol('info');
<ide> const messages = new Map();
<add>const codes = {};
<ide>
<ide> var green = '';
<ide> var red = '';
<ide> function makeNodeError(Base) {
<ide> };
<ide> }
<ide>
<add>function makeNodeErrorWithCode(Base, key) {
<add> return class NodeError extends Base {
<add> constructor(...args) {
<add> super(message(key, args));
<add> }
<add>
<add> get name() {
<add> return `${super.name} [${key}]`;
<add> }
<add>
<add> set name(value) {
<add> defineProperty(this, 'name', {
<add> configurable: true,
<add> enumerable: true,
<add> value,
<add> writable: true
<add> });
<add> }
<add>
<add> get code() {
<add> return key;
<add> }
<add>
<add> set code(value) {
<add> defineProperty(this, 'code', {
<add> configurable: true,
<add> enumerable: true,
<add> value,
<add> writable: true
<add> });
<add> }
<add> };
<add>}
<add>
<add>// Utility function for registering the error codes. Only used here. Exported
<add>// *only* to allow for testing.
<add>function E(sym, val, def, ...otherClasses) {
<add> messages.set(sym, val);
<add> if (def === undefined) return;
<add> def = makeNodeErrorWithCode(def, sym);
<add> if (otherClasses.length !== 0) {
<add> otherClasses.forEach((clazz) => {
<add> def[clazz.name] = makeNodeErrorWithCode(clazz, sym);
<add> });
<add> }
<add> codes[sym] = def;
<add>}
<add>
<ide> function lazyBuffer() {
<ide> if (buffer === undefined)
<ide> buffer = require('buffer').Buffer;
<ide> function message(key, args) {
<ide> return String(fmt.apply(null, args));
<ide> }
<ide>
<del>// Utility function for registering the error codes. Only used here. Exported
<del>// *only* to allow for testing.
<del>function E(sym, val) {
<del> messages.set(sym, typeof val === 'function' ? val : String(val));
<del>}
<del>
<ide> /**
<ide> * This creates an error compatible with errors produced in the C++
<ide> * function UVException using a context object with data assembled in C++.
<ide> module.exports = exports = {
<ide> URIError: makeNodeError(URIError),
<ide> AssertionError,
<ide> SystemError,
<add> codes,
<ide> E, // This is exported only to facilitate testing.
<ide> errorCache: new Map() // This is in here only to facilitate testing.
<ide> }; | 1 |
Go | Go | fix errors due changed sockrequest signature | 27fccdbabb277e29488de88f9af2e7b83af93132 | <ide><path>integration-cli/docker_api_exec_resize_test.go
<ide> package main
<ide>
<ide> import (
<add> "net/http"
<ide> "os/exec"
<ide> "strings"
<ide> "testing"
<ide> func TestExecResizeApiHeightWidthNoInt(t *testing.T) {
<ide> cleanedContainerID := strings.TrimSpace(out)
<ide>
<ide> endpoint := "/exec/" + cleanedContainerID + "/resize?h=foo&w=bar"
<del> _, err = sockRequest("POST", endpoint, nil)
<add> status, _, err := sockRequest("POST", endpoint, nil)
<ide> if err == nil {
<ide> t.Fatal("Expected exec resize Request to fail")
<ide> }
<add> if status != http.StatusInternalServerError {
<add> t.Fatalf("Status expected %d, got %d", http.StatusInternalServerError, status)
<add> }
<ide>
<ide> logDone("container exec resize - height, width no int fail")
<ide> }
<ide><path>integration-cli/docker_api_resize_test.go
<ide> package main
<ide>
<ide> import (
<add> "net/http"
<ide> "os/exec"
<ide> "strings"
<ide> "testing"
<ide> func TestResizeApiHeightWidthNoInt(t *testing.T) {
<ide> cleanedContainerID := strings.TrimSpace(out)
<ide>
<ide> endpoint := "/containers/" + cleanedContainerID + "/resize?h=foo&w=bar"
<del> _, err = sockRequest("POST", endpoint, nil)
<add> status, _, err := sockRequest("POST", endpoint, nil)
<ide> if err == nil {
<ide> t.Fatal("Expected resize Request to fail")
<ide> }
<add> if status != http.StatusInternalServerError {
<add> t.Fatalf("Status expected %d, got %d", http.StatusInternalServerError, status)
<add> }
<ide>
<ide> logDone("container resize - height, width no int fail")
<ide> } | 2 |
PHP | PHP | add ttr option to beanstalk config | 9bf9c2eb42a0095d8dd9898ec8daa17ce7707702 | <ide><path>app/config/queue.php
<ide> 'driver' => 'beanstalkd',
<ide> 'host' => 'localhost',
<ide> 'queue' => 'default',
<add> 'ttr' => 60,
<ide> ),
<ide>
<ide> 'sqs' => array( | 1 |
Python | Python | fix typos in portuguese stop words | c2d48974bc49aa58719f878ae643cefe73f3dcf5 | <ide><path>spacy/pt/stop_words.py
<ide>
<ide>
<ide> STOP_WORDS = set("""
<del>à às acerca adeus agora ainda algmas algo algumas alguns ali além ambos ano
<add>à às acerca adeus agora ainda algo algumas alguns ali além ambos ano
<ide> anos antes ao aos apenas apoio apontar após aquela aquelas aquele aqueles aqui
<ide> aquilo area área as assim através atrás até aí
<ide>
<ide>
<ide> hoje horas há
<ide>
<del>iniciar inicio ir irá isso ista iste isto já
<add>iniciar inicio ir irá isso ista isto já
<ide>
<ide> lado ligado local logo longe lugar lá
<ide>
<ide> pôde põe põem
<ide>
<ide> qual qualquer quando quanto quarta quarto quatro que quem quer quero questão
<del>quieto quinta quinto quinze quê relação
<add>quieto quinta quinto quinze quê
<add>
<add>relação
<ide>
<ide> sabe saber se segunda segundo sei seis sem sempre ser seria sete seu seus sexta
<ide> sexto sim sistema sob sobre sois somente somos sou sua suas são sétima sétimo | 1 |
Go | Go | implement docker port with standalone client lib | eeee2eae8671ce05d863aadf289a0695ac62629b | <ide><path>api/client/port.go
<ide> package client
<ide>
<ide> import (
<del> "encoding/json"
<ide> "fmt"
<ide> "strings"
<ide>
<ide> func (cli *DockerCli) CmdPort(args ...string) error {
<ide>
<ide> cmd.ParseFlags(args, true)
<ide>
<del> serverResp, err := cli.call("GET", "/containers/"+cmd.Arg(0)+"/json", nil, nil)
<add> c, err := cli.client.ContainerInspect(cmd.Arg(0))
<ide> if err != nil {
<ide> return err
<ide> }
<ide>
<del> defer serverResp.body.Close()
<del>
<del> var c struct {
<del> NetworkSettings struct {
<del> Ports nat.PortMap
<del> }
<del> }
<del>
<del> if err := json.NewDecoder(serverResp.body).Decode(&c); err != nil {
<del> return err
<del> }
<del>
<ide> if cmd.NArg() == 2 {
<ide> var (
<ide> port = cmd.Arg(1) | 1 |
Python | Python | remove unused names from __all__ in arrayprint | 6471522ba64c7d165606f9aabaacf808dedb8c3c | <ide><path>numpy/core/arrayprint.py
<ide>
<ide> $Id: arrayprint.py,v 1.9 2005/09/13 13:58:44 teoliphant Exp $
<ide> """
<del>__all__ = ["set_summary", "summary_off", "set_precision", "set_line_width",
<del> "array2string"]
<add>__all__ = ["array2string", "set_printoptions", "get_printoptions"]
<ide>
<ide> #
<ide> # Written by Konrad Hinsen <hinsenk@ere.umontreal.ca> | 1 |
Ruby | Ruby | fix a couple of git-related checks | 9f4f23a7f44c152b876e7e045521b3ebca8ed7dc | <ide><path>Library/Homebrew/cmd/doctor.rb
<ide> def check_for_multiple_volumes
<ide> end
<ide>
<ide> def check_for_git
<del> git = `/usr/bin/which git`.chomp
<del> if git.empty?
<add> unless system "/usr/bin/which -s git"
<ide> puts <<-EOS.undent
<ide> "Git" was not found in your path.
<ide>
<ide> def check_for_git
<ide> end
<ide>
<ide> def check_git_newline_settings
<del> git = `/usr/bin/which git`.chomp
<del> return if git.empty?
<add> return unless system "/usr/bin/which -s git"
<ide>
<del> autocrlf=`git config --get core.autocrlf`
<del> safecrlf=`git config --get core.safecrlf`
<add> autocrlf = `git config --get core.autocrlf`.chomp
<add> safecrlf = `git config --get core.safecrlf`.chomp
<ide>
<del> if autocrlf=='input' and safecrlf=='true'
<add> if autocrlf == 'input' and safecrlf == 'true'
<ide> puts <<-EOS.undent
<ide> Suspicious Git newline settings found.
<ide>
<ide> The detected Git newline settings can cause checkout problems:
<del> core.autocrlf=#{autocrlf}
<del> core.safecrlf=#{safecrlf}
<add> core.autocrlf = #{autocrlf}
<add> core.safecrlf = #{safecrlf}
<ide>
<ide> If you are not routinely dealing with Windows-based projects,
<ide> consider removing these settings. | 1 |
Ruby | Ruby | log the exception from the threadconsumer | 4531ba74938985bd2343b27d1a30d099d5975b02 | <ide><path>railties/lib/rails/queueing.rb
<ide> def initialize(queue)
<ide> def start
<ide> @thread = Thread.new do
<ide> while job = @queue.pop
<del> job.run
<add> begin
<add> job.run
<add> rescue Exception => e
<add> Rails.logger.error "Job Error: #{e.message}\n#{e.backtrace.join("\n")}"
<add> end
<ide> end
<ide> end
<ide> self
<ide><path>railties/test/queueing/threaded_consumer_test.rb
<ide> def teardown
<ide>
<ide> assert_equal true, ran
<ide> end
<add>
<add> test "log job that raises an exception" do
<add> require "active_support/log_subscriber/test_helper"
<add> logger = ActiveSupport::LogSubscriber::TestHelper::MockLogger.new
<add> Rails.logger = logger
<add>
<add> job = Job.new(1) do
<add> raise "RuntimeError: Error!"
<add> end
<add>
<add> @queue.push job
<add> sleep 0.1
<add>
<add> assert_equal 1, logger.logged(:error).size
<add> assert_match /Job Error: RuntimeError: Error!/, logger.logged(:error).last
<add> end
<ide> end | 2 |
Text | Text | fix broken link in docs | a0ffa346c0371c6f2fd7c5ae7e9f5a26e36bfc76 | <ide><path>website/docs/usage/saving-loading.md
<ide> docs = list(doc_bin.get_docs(nlp.vocab))
<ide>
<ide> If `store_user_data` is set to `True`, the `Doc.user_data` will be serialized as
<ide> well, which includes the values of
<del>[extension attributes](/processing-pipelines#custom-components-attributes) (if
<add>[extension attributes](/usage/processing-pipelines#custom-components-attributes) (if
<ide> they're serializable with msgpack).
<ide>
<ide> <Infobox title="Important note on serializing extension attributes" variant="warning"> | 1 |
PHP | PHP | add exists as alias to has | 42255bedff7637bcf3c2ee1b60c613ef3cbf8888 | <ide><path>src/Illuminate/Http/Concerns/InteractsWithInput.php
<ide> public function bearerToken()
<ide> }
<ide> }
<ide>
<add> /**
<add> * Determine if the request contains a given input item key.
<add> *
<add> * @param string|array $key
<add> * @return bool
<add> */
<add> public function exists($key)
<add> {
<add> return $this->has($key);
<add> }
<add>
<ide> /**
<ide> * Determine if the request contains a given input item key.
<ide> * | 1 |
PHP | PHP | use the new google fonts api | c64061629eeb328e79d842051fa3506843c306cf | <ide><path>resources/views/welcome.blade.php
<ide> <title>Laravel</title>
<ide>
<ide> <!-- Fonts -->
<del> <link href="https://fonts.googleapis.com/css?family=Nunito:400,600,700" rel="stylesheet">
<add> <link href="https://fonts.googleapis.com/css2?family=Nunito:wght@400;600;700&display=swap" rel="stylesheet">
<ide>
<ide> <!-- Styles -->
<ide> <style> | 1 |
Javascript | Javascript | add missing whitespace | d7fde5fcb12ba0bcb8061ad349e22f3513cd6ec6 | <ide><path>src/ngMock/angular-mocks.js
<ide> angular.mock = {};
<ide> * that there are several helper methods available which can be used in tests.
<ide> */
<ide> angular.mock.$BrowserProvider = function() {
<del> this.$get = function(){
<add> this.$get = function() {
<ide> return new angular.mock.$Browser();
<ide> };
<ide> };
<ide> angular.mock.$LogProvider = function() {
<ide> (function() {
<ide> var R_ISO8061_STR = /^(\d{4})-?(\d\d)-?(\d\d)(?:T(\d\d)(?:\:?(\d\d)(?:\:?(\d\d)(?:\.(\d{3}))?)?)?(Z|([+-])(\d\d):?(\d\d)))?$/;
<ide>
<del> function jsonStringToDate(string){
<add> function jsonStringToDate(string) {
<ide> var match;
<ide> if (match = string.match(R_ISO8061_STR)) {
<ide> var date = new Date(0), | 1 |
Javascript | Javascript | add regression test for proxy as vm context | f9ea52e5ebebf4ad7058ff1d950c63ae18a7a3da | <ide><path>test/parallel/test-vm-context.js
<ide> assert.throws(function() {
<ide> }, function(err) {
<ide> return /expected-filename.js:33:130/.test(err.stack);
<ide> }, 'Expected appearance of proper offset in Error stack');
<add>
<add>// https://github.com/nodejs/node/issues/6158
<add>ctx = new Proxy({}, {});
<add>assert.strictEqual(typeof vm.runInNewContext('String', ctx), 'function'); | 1 |
Python | Python | remove todos and update docstrings | 1e0363290e71a3383c5fbdce3d1fecea5bf107df | <ide><path>spacy/language.py
<ide> def remove_pipe(self, name: str) -> Tuple[str, Callable[[Doc], Doc]]:
<ide>
<ide> def disable_pipe(self, name: str) -> None:
<ide> """Disable a pipeline component. The component will still exist on
<del> the nlp object, but it won't be run as part of the pipeline.
<add> the nlp object, but it won't be run as part of the pipeline. Does
<add> nothing if the component is already disabled.
<ide>
<ide> name (str): The name of the component to disable.
<ide> """
<ide> if name not in self._pipe_names:
<ide> raise ValueError(Errors.E001.format(name=name, opts=self._pipe_names))
<del> # TODO: should we raise if pipe is already disabled?
<ide> self._disabled.add(name)
<ide>
<ide> def enable_pipe(self, name: str) -> None:
<ide> """Enable a previously disabled pipeline component so it's run as part
<del> of the pipeline.
<add> of the pipeline. Does nothing if the component is already enabled.
<ide>
<ide> name (str): The name of the component to enable.
<ide> """
<ide> if name not in self._pipe_names:
<ide> raise ValueError(Errors.E001.format(name=name, opts=self._pipe_names))
<del> # TODO: should we raise if pipe is already enabled?
<ide> if name in self._disabled:
<ide> self._disabled.remove(name)
<ide> | 1 |
Text | Text | add link to @troydemonbreun’s contribution | 36734f4d374aefe64d93db0d30d8cb437a6a7970 | <ide><path>CHANGELOG.md
<ide>
<ide> ### React
<ide> - Add error codes to production invariants, with links to the view the full error text. ([@keyanzhang](https://github.com/keyanzhang) in [#6948](https://github.com/facebook/react/pull/6948))
<del>- Include component stack information in PropType validation warnings. ([@spicyj](https://github.com/spicyj) in [#6771](https://github.com/facebook/react/pull/6771))
<add>- Include component stack information in PropType validation warnings. ([@troydemonbreun](https://github.com/troydemonbreun) in [#6398](https://github.com/facebook/react/pull/6398), [@spicyj](https://github.com/spicyj) in [#6771](https://github.com/facebook/react/pull/6771))
<ide> - Include component stack information in key warnings. ([@keyanzhang](https://github.com/keyanzhang) in [#6799](https://github.com/facebook/react/pull/6799))
<ide> - Stop validating props at mount time, only validate at element creation. ([@keyanzhang](https://github.com/keyanzhang) in [#6823](https://github.com/facebook/react/pull/6823))
<ide> - New invariant providing actionable error in missing instance case. ([@yungsters](https://github.com/yungsters) in [#6990](https://github.com/facebook/react/pull/6990)) | 1 |
Javascript | Javascript | remove closest fallback | 63a2d027f4e6d6412a4ea309fb9d4dc2c47c39ef | <ide><path>src/js/player.js
<ide> class Player extends Component {
<ide>
<ide> // If language is not set, get the closest lang attribute
<ide> if (!options.language) {
<del> if (typeof tag.closest === 'function') {
<del> const closest = tag.closest('[lang]');
<add> const closest = tag.closest('[lang]');
<ide>
<del> if (closest && closest.getAttribute) {
<del> options.language = closest.getAttribute('lang');
<del> }
<del> } else {
<del> let element = tag;
<del>
<del> while (element && element.nodeType === 1) {
<del> if (Dom.getAttributes(element).hasOwnProperty('lang')) {
<del> options.language = element.getAttribute('lang');
<del> break;
<del> }
<del> element = element.parentNode;
<del> }
<add> if (closest) {
<add> options.language = closest.getAttribute('lang');
<ide> }
<ide> }
<ide> | 1 |
Javascript | Javascript | remove use strict from grunt | 72aada0a156881db8ad530a7543d58ea90e51e84 | <ide><path>grunt.js
<ide> /*jshint node: true */
<ide> /*global config:true, task:true, process:true*/
<ide>
<del>"use strict";
<del>
<ide> var child_process = require("child_process");
<ide>
<ide> module.exports = function( grunt ) { | 1 |
Python | Python | remove another generator | 10a6b61250f55dae30aabe9fc1463781bdcee64d | <ide><path>numpy/core/tests/test_multiarray.py
<ide> def testip_types(self):
<ide> tests = []
<ide> for val in [-100,0,15]:
<ide> for types in N.sctypes.itervalues():
<del> tests.extend((self.tst_basic,x.copy().astype(T),T,mask,val)
<del> for T in types if T not in unchecked_types)
<add> tests.extend([(self.tst_basic,x.copy().astype(T),T,mask,val)
<add> for T in types if T not in unchecked_types])
<ide> return tests
<ide>
<ide> def test_mask_size(self): | 1 |
Text | Text | add tip for the important jsx convention | 0ae09f4c70bd83483b242f3712a206d34da45bbc | <ide><path>docs/docs/getting-started.md
<ide> React.renderComponent(
<ide> document.getElementById('example')
<ide> );
<ide> ```
<add>> ```/** @jsx React.DOM */``` is must, or jsx would not convert.
<add>
<ide> Then reference it from `helloworld.html`:
<ide>
<ide> ```html{10} | 1 |
PHP | PHP | fix a styling issue | 841a28067b03979603e41dd80729cb8581a91e95 | <ide><path>src/Illuminate/Database/Schema/Builder.php
<ide>
<ide> use Closure;
<ide> use LogicException;
<add>use RuntimeException;
<ide> use Doctrine\DBAL\Types\Type;
<ide> use Illuminate\Database\Connection;
<del>use RuntimeException;
<ide>
<ide> class Builder
<ide> { | 1 |
Javascript | Javascript | fix a comment of the player's loadedmetadata event | ca2f5a3caeea9052bd84dda1bdd619430b7bdd1c | <ide><path>src/js/player.js
<ide> const TECH_EVENTS_RETRIGGER = [
<ide> * @type {EventTarget~Event}
<ide> */
<ide> /**
<del> * Retrigger the `stalled` event that was triggered by the {@link Tech}.
<add> * Retrigger the `loadedmetadata` event that was triggered by the {@link Tech}.
<ide> *
<ide> * @private
<ide> * @method Player#handleTechLoadedmetadata_ | 1 |
Python | Python | remove forgotten debug print | 7a661682c1d8209fe17daba26ed0cddc2cb5edf3 | <ide><path>doc/sphinxext/docscrape_sphinx.py
<ide> def _str_member_list(self, name):
<ide> out += self._str_indent(desc, n_indent)
<ide> out += [hdr]
<ide> out += ['']
<del> print "\n".join(out)
<ide> return out
<ide>
<ide> def _str_section(self, name): | 1 |
Python | Python | fix regression in np.ma.load in gh-10055 | 8671dc0dd968d33d99ad1a9c6ce7a29e84dff953 | <ide><path>numpy/ma/core.py
<ide> def load(F):
<ide> """
<ide> if not hasattr(F, 'readline'):
<ide> with open(F, 'r') as F:
<del> pickle.load(F)
<add> return pickle.load(F)
<ide> else:
<del> pickle.load(F)
<add> return pickle.load(F)
<ide>
<ide>
<ide> def loads(strg): | 1 |
Python | Python | use the short path for tpuclusterresolver | 9e23bab8b97e82d0dd253620673633eed44a2473 | <ide><path>official/mnist/mnist_tpu.py
<ide> def main(argv):
<ide> tpu_grpc_url = FLAGS.master
<ide> else:
<ide> tpu_cluster_resolver = (
<del> tf.contrib.cluster_resolver.python.training.TPUClusterResolver(
<add> tf.contrib.cluster_resolver.TPUClusterResolver(
<ide> tpu_names=[FLAGS.tpu_name],
<ide> zone=FLAGS.tpu_zone,
<ide> project=FLAGS.gcp_project)) | 1 |
Ruby | Ruby | flag any desc that starts with the formula name | a2e2553bd82ca130931072353a167be5d9e87a20 | <ide><path>Library/Homebrew/cmd/audit.rb
<ide> def audit_desc
<ide> problem "Description shouldn't start with an indefinite article (#{$1})"
<ide> end
<ide>
<del> if desc =~ /^#{formula.name} is\s/i
<add> if desc =~ /^#{formula.name}\s/i
<ide> problem "Description shouldn't include the formula name"
<ide> end
<ide> end | 1 |
Javascript | Javascript | use internal/errors.js in module.require | b21715403bc5e229c4171ed996e337526daeadf6 | <ide><path>lib/module.js
<ide> Module.prototype.load = function(filename) {
<ide>
<ide> // Loads a module at the given file path. Returns that module's
<ide> // `exports` property.
<del>Module.prototype.require = function(path) {
<del> assert(path, 'missing path');
<del> assert(typeof path === 'string', 'path must be a string');
<del> return Module._load(path, this, /* isMain */ false);
<add>Module.prototype.require = function(id) {
<add> if (typeof id !== 'string') {
<add> throw new errors.TypeError('ERR_INVALID_ARG_TYPE', 'id', 'string', id);
<add> }
<add> if (id === '') {
<add> throw new errors.Error('ERR_INVALID_ARG_VALUE',
<add> 'id', id, 'must be a non-empty string');
<add> }
<add> return Module._load(id, this, /* isMain */ false);
<ide> };
<ide>
<ide>
<ide><path>test/parallel/test-module-loading-error.js
<ide> assert.throws(
<ide> }
<ide> );
<ide>
<del>common.expectsError(
<del> require,
<del> {
<del> code: 'ERR_ASSERTION',
<del> message: /^missing path$/
<del> });
<add>const re = /^The "id" argument must be of type string\. Received type \w+$/;
<add>[1, false, null, undefined, {}].forEach((value) => {
<add> common.expectsError(
<add> () => { require(value); },
<add> {
<add> type: TypeError,
<add> code: 'ERR_INVALID_ARG_TYPE',
<add> message: re
<add> });
<add>});
<add>
<ide>
<ide> common.expectsError(
<del> () => { require({}); },
<add> () => { require(''); },
<ide> {
<del> code: 'ERR_ASSERTION',
<del> message: /^path must be a string$/
<add> type: Error,
<add> code: 'ERR_INVALID_ARG_VALUE',
<add> message: 'The argument \'id\' must be a non-empty string. Received \'\''
<ide> });
<ide><path>test/sequential/test-module-loading.js
<ide> try {
<ide> }
<ide>
<ide>
<del>// require() must take string, and must be truthy
<del>assert.throws(function() {
<del> console.error('require non-string');
<del> require({ foo: 'bar' });
<del>}, /path must be a string/);
<del>
<del>assert.throws(function() {
<del> console.error('require empty string');
<del> require('');
<del>}, /missing path/);
<del>
<ide> process.on('exit', function() {
<ide> assert.ok(a.A instanceof Function);
<ide> assert.strictEqual(a.A(), 'A done'); | 3 |
Python | Python | add mqtt output in json format | 63f0375362c7437a1d924a6831815c2764f75362 | <ide><path>glances/exports/glances_mqtt.py
<ide>
<ide> import socket
<ide> import string
<add>import json
<ide>
<ide> from glances.logger import logger
<ide> from glances.exports.glances_export import GlancesExport
<ide> def __init__(self, config=None, args=None):
<ide> # Load the MQTT configuration file
<ide> self.export_enable = self.load_conf('mqtt',
<ide> mandatories=['host', 'password'],
<del> options=['port', 'user', 'topic', 'tls'])
<add> options=['port', 'user', 'topic', 'tls', 'format'])
<ide> if not self.export_enable:
<ide> exit('Missing MQTT config')
<ide>
<ide> def __init__(self, config=None, args=None):
<ide> self.user = self.user or 'glances'
<ide> self.tls = (self.tls and self.tls.lower() == 'true')
<ide>
<add> self.format = self.format.lower() or 'simple'
<add> if self.format not in ['simple', 'json']:
<add> logger.critical("Format must be either 'simple' or 'json'.")
<add> return None
<add>
<ide> # Init the MQTT client
<ide> self.client = self.init()
<ide>
<ide> def whitelisted(s,
<ide> substitute=SUBSTITUTE):
<ide> return ''.join(c if c in whitelist else substitute for c in s)
<ide>
<del> for sensor, value in zip(columns, points):
<add> if self.format == 'simple':
<add> for sensor, value in zip(columns, points):
<add> try:
<add> sensor = [whitelisted(name) for name in sensor.split('.')]
<add> tobeexport = [self.topic, self.hostname, name]
<add> tobeexport.extend(sensor)
<add> topic = '/'.join(tobeexport)
<add>
<add> self.client.publish(topic, value)
<add> except Exception as e:
<add> logger.error("Can not export stats to MQTT server (%s)" % e)
<add> elif self.format == 'json':
<ide> try:
<del> sensor = [whitelisted(name) for name in sensor.split('.')]
<del> tobeexport = [self.topic, self.hostname, name]
<del> tobeexport.extend(sensor)
<del> topic = '/'.join(tobeexport)
<del>
<del> self.client.publish(topic, value)
<add> topic = '/'.join([self.topic, self.hostname, name])
<add> sensor_values = dict(zip(columns, points))
<add> json_values = json.dumps(sensor_values)
<add> self.client.publish(topic, json_values)
<ide> except Exception as e:
<ide> logger.error("Can not export stats to MQTT server (%s)" % e)
<add> | 1 |
Text | Text | set demo script to 'use strict' for embeddedapp | 48bcbecf8ca802610e24f46fc851be1dcc5f36b1 | <ide><path>docs/EmbeddedApp.md
<ide> $ touch index.ios.js
<ide> Copy & paste following starter code for **index.ios.js**.
<ide>
<ide> ```
<del>var React = require('react-native');
<add>'use strict';
<ide>
<add>var React = require('react-native');
<ide> var {
<ide> Text,
<ide> View | 1 |
Python | Python | add broadcast commands for start/stop actors | 66adb569261574e4a5f27bfaef4e6643725febf3 | <ide><path>celery/app/control.py
<ide> def registered(self, *taskinfoitems):
<ide>
<ide> def ping(self):
<ide> return self._request('ping')
<del>
<add>
<ide> def active_queues(self):
<ide> return self._request('active_queues')
<ide>
<ide> def conf(self):
<ide> return self._request('dump_conf')
<ide>
<del>
<ide> class Control(object):
<ide> Mailbox = Mailbox
<ide>
<ide> def ping(self, destination=None, timeout=1, **kwargs):
<ide> return self.broadcast('ping', reply=True, destination=destination,
<ide> timeout=timeout, **kwargs)
<ide>
<add> def start_actor(self, actor_name, destination = None, timeout = 1, **kwargs):
<add> return self.broadcast('start_actor', name = actor_name, reply=True, destination=destination,
<add> timeout=timeout, **kwargs)
<add>
<ide> def rate_limit(self, task_name, rate_limit, destination=None, **kwargs):
<ide> """Tell all (or specific) workers to set a new rate limit
<ide> for task by type.
<ide><path>celery/tests/worker/test_control.py
<ide> def test_pool_restart_relaod_modules(self):
<ide> self.assertTrue(consumer.controller.pool.restart.called)
<ide> self.assertTrue(_reload.called)
<ide> self.assertFalse(_import.called)
<add>
<ide><path>celery/worker/consumer.py
<ide> from celery.utils import timer2
<ide> from celery.utils.functional import noop
<ide> from celery.utils.log import get_logger
<add>from celery.utils.imports import instantiate
<ide> from celery.utils import text
<ide>
<ide> from . import state
<ide> class Consumer(object):
<ide> #: The consumer used to consume broadcast commands.
<ide> broadcast_consumer = None
<ide>
<add> #: Dictionary holding all active actors.
<add> actor_registry = {}
<add>
<ide> #: The process mailbox (kombu pidbox node).
<ide> pidbox_node = None
<ide> _pidbox_node_shutdown = None # used for greenlets
<ide> def __init__(self, ready_queue,
<ide> self.task_consumer = None
<ide> self.controller = controller
<ide> self.broadcast_consumer = None
<add> self.actor_registry = {}
<ide> self.ready_queue = ready_queue
<ide> self.send_events = send_events
<ide> self.init_callback = init_callback
<ide> def __init__(self, ready_queue,
<ide> self.pidbox_node = self.app.control.mailbox.Node(self.hostname,
<ide> state=pidbox_state,
<ide> handlers=Panel.data)
<add>
<ide> conninfo = self.app.connection()
<ide> self.connection_errors = conninfo.connection_errors
<ide> self.channel_errors = conninfo.channel_errors
<ide> def on_control(self, body, message):
<ide> error('Control command error: %r', exc, exc_info=True)
<ide> self.reset_pidbox_node()
<ide>
<add> def add_actor(self, actor_name):
<add> """Add actor to the actor registry and start the actor main method"""
<add> try:
<add> actor = instantiate(actor_name, connection = self.connection)
<add> consumer = actor.Consumer(self.connection.channel())
<add> consumer.consume()
<add> self.actor_registry[actor.id] = consumer
<add> print 'Register actor in the actor registry: %s' % actor_name
<add> return actor.id
<add> except Exception as exc:
<add> error('Start actor error: %r', exc, exc_info=True)
<add>
<add> def stop_all_actors(self):
<add> for _, consumer in self.actor_registry.items():
<add> self.maybe_conn_error(consumer.cancel)
<add> self.actor_registry.clear()
<add>
<add> def reset_actor_nodes(self):
<add> for actor, consumer in self.actor_registry:
<add> self.maybe_conn_error(consumer.cancel)
<add> consumer.consume()
<add>
<add> def stop_actor(self, actor_id):
<add> if actor_id in self.actor_registry:
<add> consumer = self.actor_registry.pop(actor_id)
<add> self.maybe_conn_error(consumer.cancel)
<add>
<ide> def apply_eta_task(self, task):
<ide> """Method called by the timer to apply a task with an
<ide> ETA/countdown."""
<ide> def receive_message(self, body, message):
<ide> :param message: The kombu message object.
<ide>
<ide> """
<add> print 'I am in receive_message'
<ide> try:
<ide> name = body['task']
<ide> except (KeyError, TypeError):
<ide> def close_connection(self):
<ide> self.maybe_conn_error(self.task_consumer.close)
<ide>
<ide> self.stop_pidbox_node()
<del>
<add> self.stop_all_actors()
<add>
<ide> if connection:
<ide> debug('Closing broker connection...')
<ide> self.maybe_conn_error(connection.close)
<ide> def stop_consumers(self, close_connection=True):
<ide> debug('Shutting down event dispatcher...')
<ide> self.event_dispatcher = \
<ide> self.maybe_conn_error(self.event_dispatcher.close)
<del>
<add>
<add> self.stop_all_actors()
<add>
<ide> debug('Cancelling broadcast consumer...')
<ide> if self.broadcast_consumer:
<ide> self.maybe_conn_error(self.broadcast_consumer.cancel)
<ide> def reset_pidbox_node(self):
<ide> return self.pool.spawn_n(self._green_pidbox_node)
<ide> self.pidbox_node.channel = self.connection.channel()
<ide> self.broadcast_consumer = self.pidbox_node.listen(
<del> callback=self.on_control)
<del>
<add> callback=self.on_control)
<add>
<ide> def stop_pidbox_node(self):
<ide> if self._pidbox_node_stopped:
<ide> self._pidbox_node_shutdown.set()
<ide> def reset_connection(self):
<ide> debug('Connection established.')
<ide> self.task_consumer = self.app.amqp.TaskConsumer(self.connection,
<ide> on_decode_error=self.on_decode_error)
<add> self.reset_actor_nodes()
<ide> # QoS: Reset prefetch window.
<ide> self.qos = QoS(self.task_consumer, self.initial_prefetch_count)
<ide> self.qos.update()
<ide><path>celery/worker/control.py
<ide> from celery.utils.compat import UserDict
<ide> from celery.utils.log import get_logger
<ide> from celery.utils import jsonify
<add>from celery.utils.imports import instantiate
<ide>
<ide> from . import state
<ide> from .state import revoked
<ide> def _extract_info(task):
<ide>
<ide> @Panel.register
<ide> def ping(panel, **kwargs):
<del> return 'pong'
<add> return {'ok':'ihu-pong'}
<ide>
<ide>
<ide> @Panel.register
<ide> def active_queues(panel):
<ide> return [dict(queue.as_dict(recurse=True))
<ide> for queue in panel.consumer.task_consumer.queues]
<ide>
<del>
<ide> @Panel.register
<ide> def dump_conf(panel, **kwargs):
<ide> return jsonify(dict(panel.app.conf))
<add>
<add>@Panel.register
<add>def start_actor(panel, name):
<add> print name
<add> return panel.consumer.add_actor(name)
<add>
<add>@Panel.register
<add>def stop_actor(panel, id):
<add> #instantiate(name).stop()
<add> return panel.consumer.stop_actor(id) | 4 |
Text | Text | fix errors in web streams doc | 2be596604ac13a4dd68eb9cc38e54f2df438dd97 | <ide><path>doc/api/webstreams.md
<ide> added: v16.5.0
<ide> * `preventAbort` {boolean} When `true`, errors in this `ReadableStream`
<ide> will not cause `transform.writable` to be aborted.
<ide> * `preventCancel` {boolean} When `true`, errors in the destination
<del> `transform.writable` is not cause this `ReadableStream` to be
<add> `transform.writable` do not cause this `ReadableStream` to be
<ide> canceled.
<ide> * `preventClose` {boolean} When `true`, closing this `ReadableStream`
<del> will no cause `transform.writable` to be closed.
<add> does not cause `transform.writable` to be closed.
<ide> * `signal` {AbortSignal} Allows the transfer of data to be canceled
<ide> using an {AbortController}.
<ide> * Returns: {ReadableStream} From `transform.readable`.
<ide> added: v16.5.0
<ide> `ReadableStream`'s data will be written.
<ide> * `options` {Object}
<ide> * `preventAbort` {boolean} When `true`, errors in this `ReadableStream`
<del> will not cause `transform.writable` to be aborted.
<del> * `preventCancel` {boolean} When `true`, errors in the destination
<del> `transform.writable` is not cause this `ReadableStream` to be
<del> canceled.
<add> will not cause `destination` to be aborted.
<add> * `preventCancel` {boolean} When `true`, errors in the `destination`
<add> will not cause this `ReadableStream` to be canceled.
<ide> * `preventClose` {boolean} When `true`, closing this `ReadableStream`
<del> will no cause `transform.writable` to be closed.
<add> does not cause `destination` to be closed.
<ide> * `signal` {AbortSignal} Allows the transfer of data to be canceled
<ide> using an {AbortController}.
<ide> * Returns: A promise fulfilled with `undefined`
<ide> added: v16.5.0
<ide> * `options` {Object}
<ide> * `preventCancel` {boolean} When `true`, prevents the {ReadableStream}
<ide> from being closed when the async iterator abruptly terminates.
<del> **Defaults**: `false`
<add> **Default**: `false`.
<ide>
<ide> Creates and returns an async iterator usable for consuming this
<ide> `ReadableStream`'s data.
<ide> changes:
<ide> -->
<ide>
<ide> The `ReadableStreamBYOBReader` is an alternative consumer for
<del>byte-oriented {ReadableStream}'s (those that are created with
<add>byte-oriented {ReadableStream}s (those that are created with
<ide> `underlyingSource.type` set equal to `'bytes'` when the
<ide> `ReadableStream` was created).
<ide> | 1 |
Text | Text | fix version references | 32c7c3244ab6c84747ca055a244f111c8390c29c | <ide><path>threejs/lessons/fr/threejs-fundamentals.md
<ide> Tout d'abord, chargeons three.js :
<ide>
<ide> ```html
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r114/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide> </script>
<ide> ```
<ide>
<ide> dans le document html :
<ide>
<ide> ```html
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r114/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> +function main() {
<ide> + const canvas = document.querySelector('#c');
<ide> par le biais d'une balise <code><script type="module"></code>. Voici un ex
<ide> </p>
<ide> <pre class=prettyprint>
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r114/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> ...
<ide>
<ide><path>threejs/lessons/fr/threejs-prerequisites.md
<ide> ou en ligne via une balise `<script type="module">`. Voici un exemple des deux
<ide>
<ide> ```html
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r114/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> ...
<ide>
<ide><path>threejs/lessons/ja/threejs-fundamentals.md
<ide> Three.jsはcanvasに描画するため、canvasをthree.jsに渡す必要があ
<ide>
<ide> ```html
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r110/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> function main() {
<ide> const canvas = document.querySelector('#c');
<ide> es6モジュールはスクリプトのロードに <code>import</code> を使
<ide> </p>
<ide> <pre class=prettyprint>
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r110/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> ...
<ide>
<ide><path>threejs/lessons/ja/threejs-prerequisites.md
<ide> es6モジュールはスクリプトの中で `import` キーワード、また
<ide>
<ide> ```html
<ide> <script type="module">
<del>import * as THREE from './resources/threejs/r110/build/three.module.js';
<add>import * as THREE from './resources/threejs/r119/build/three.module.js';
<ide>
<ide> ...
<ide> | 4 |
Mixed | Ruby | add +capitalize+ option to inflector.humanize | c61544c7818f109c132fcad9db73d43216417535 | <ide><path>activesupport/CHANGELOG.md
<add>* Add `capitalize` option to Inflector.humanize, so strings can be humanized without being capitalized:
<add>
<add> 'employee_salary'.humanize # => "Employee salary"
<add> 'employee_salary'.humanize(capitalize: false) # => "employee salary"
<add>
<add> *claudiob*
<add>
<ide> * Fixed Object#as_json and Struct#as_json not working properly with options. They now take
<ide> the same options as Hash#as_json:
<ide>
<ide><path>activesupport/lib/active_support/core_ext/string/inflections.rb
<ide> def classify
<ide> ActiveSupport::Inflector.classify(self)
<ide> end
<ide>
<del> # Capitalizes the first word, turns underscores into spaces, and strips '_id'.
<add> # Capitalizes the first word, turns underscores into spaces, and strips a
<add> # trailing '_id' if present.
<ide> # Like +titleize+, this is meant for creating pretty output.
<ide> #
<del> # 'employee_salary'.humanize # => "Employee salary"
<del> # 'author_id'.humanize # => "Author"
<del> def humanize
<del> ActiveSupport::Inflector.humanize(self)
<add> # The capitalization of the first word can be turned off by setting the
<add> # optional parameter +capitalize+ to false.
<add> # By default, this parameter is true.
<add> #
<add> # 'employee_salary'.humanize # => "Employee salary"
<add> # 'author_id'.humanize # => "Author"
<add> # 'author_id'.humanize(capitalize: false) # => "author"
<add> def humanize(options = {})
<add> ActiveSupport::Inflector.humanize(self, options)
<ide> end
<ide>
<ide> # Creates a foreign key name from a class name.
<ide><path>activesupport/lib/active_support/inflector/methods.rb
<ide> def underscore(camel_cased_word)
<ide> word
<ide> end
<ide>
<del> # Capitalizes the first word and turns underscores into spaces and strips a
<del> # trailing "_id", if any. Like +titleize+, this is meant for creating pretty
<del> # output.
<del> #
<del> # 'employee_salary'.humanize # => "Employee salary"
<del> # 'author_id'.humanize # => "Author"
<del> def humanize(lower_case_and_underscored_word)
<add> # Capitalizes the first word, turns underscores into spaces, and strips a
<add> # trailing '_id' if present.
<add> # Like +titleize+, this is meant for creating pretty output.
<add> #
<add> # The capitalization of the first word can be turned off by setting the
<add> # optional parameter +capitalize+ to false.
<add> # By default, this parameter is true.
<add> #
<add> # humanize('employee_salary') # => "Employee salary"
<add> # humanize('author_id') # => "Author"
<add> # humanize('author_id', capitalize: false) # => "author"
<add> def humanize(lower_case_and_underscored_word, options = {})
<ide> result = lower_case_and_underscored_word.to_s.dup
<ide> inflections.humans.each { |(rule, replacement)| break if result.sub!(rule, replacement) }
<ide> result.gsub!(/_id$/, "")
<ide> result.tr!('_', ' ')
<del> result.gsub(/([a-z\d]*)/i) { |match|
<add> result.gsub!(/([a-z\d]*)/i) { |match|
<ide> "#{inflections.acronyms[match] || match.downcase}"
<del> }.gsub(/^\w/) { $&.upcase }
<add> }
<add> options.fetch(:capitalize, true) ? result.gsub(/^\w/) { $&.upcase } : result
<ide> end
<ide>
<ide> # Capitalizes all the words and replaces some characters in the string to
<ide><path>activesupport/test/core_ext/string_ext_test.rb
<ide> def test_humanize
<ide> end
<ide> end
<ide>
<add> def test_humanize_without_capitalize
<add> UnderscoreToHumanWithoutCapitalize.each do |underscore, human|
<add> assert_equal(human, underscore.humanize(capitalize: false))
<add> end
<add> end
<add>
<ide> def test_ord
<ide> assert_equal 97, 'a'.ord
<ide> assert_equal 97, 'abc'.ord
<ide> def test_truncate_multibyte
<ide> def test_truncate_should_not_be_html_safe
<ide> assert !"Hello World!".truncate(12).html_safe?
<ide> end
<del>
<add>
<ide> def test_remove
<ide> assert_equal "Summer", "Fast Summer".remove(/Fast /)
<ide> assert_equal "Summer", "Fast Summer".remove!(/Fast /)
<ide><path>activesupport/test/inflector_test.rb
<ide> def test_humanize
<ide> end
<ide> end
<ide>
<add> def test_humanize_without_capitalize
<add> UnderscoreToHumanWithoutCapitalize.each do |underscore, human|
<add> assert_equal(human, ActiveSupport::Inflector.humanize(underscore, capitalize: false))
<add> end
<add> end
<add>
<ide> def test_humanize_by_rule
<ide> ActiveSupport::Inflector.inflections do |inflect|
<ide> inflect.human(/_cnt$/i, '\1_count')
<ide><path>activesupport/test/inflector_test_cases.rb
<ide> module InflectorTestCases
<ide> "underground" => "Underground"
<ide> }
<ide>
<add> UnderscoreToHumanWithoutCapitalize = {
<add> "employee_salary" => "employee salary",
<add> "employee_id" => "employee",
<add> "underground" => "underground"
<add> }
<add>
<ide> MixtureToTitleCase = {
<ide> 'active_record' => 'Active Record',
<ide> 'ActiveRecord' => 'Active Record',
<ide><path>guides/source/active_support_core_extensions.md
<ide> The method `humanize` gives you a sensible name for display out of an attribute
<ide> "comments_count".humanize # => "Comments count"
<ide> ```
<ide>
<add>The capitalization of the first word can be turned off by setting the optional parameter `capitalize` to false:
<add>
<add>```ruby
<add>"author_id".humanize(capitalize: false) # => "author"
<add>```
<add>
<ide> The helper method `full_messages` uses `humanize` as a fallback to include attribute names:
<ide>
<ide> ```ruby | 7 |
Ruby | Ruby | use minitest/mock instead of mocha | 84854d9d05c10b091a03767359c3c6ccf4450f26 | <ide><path>actioncable/test/channel/base_test.rb
<ide> # frozen_string_literal: true
<ide>
<ide> require "test_helper"
<add>require "minitest/mock"
<ide> require "stubs/test_connection"
<ide> require "stubs/room"
<ide>
<ide> def rm_rf
<ide> events << ActiveSupport::Notifications::Event.new(*args)
<ide> end
<ide>
<del> @channel.stubs(:subscription_confirmation_sent?).returns(false)
<del> @channel.send(:transmit_subscription_confirmation)
<add> @channel.stub(:subscription_confirmation_sent?, false) do
<add> @channel.send(:transmit_subscription_confirmation)
<ide>
<del> assert_equal 1, events.length
<del> assert_equal "transmit_subscription_confirmation.action_cable", events[0].name
<del> assert_equal "ActionCable::Channel::BaseTest::ChatChannel", events[0].payload[:channel_class]
<add> assert_equal 1, events.length
<add> assert_equal "transmit_subscription_confirmation.action_cable", events[0].name
<add> assert_equal "ActionCable::Channel::BaseTest::ChatChannel", events[0].payload[:channel_class]
<add> end
<ide> ensure
<ide> ActiveSupport::Notifications.unsubscribe "transmit_subscription_confirmation.action_cable"
<ide> end
<ide><path>actioncable/test/channel/broadcasting_test.rb
<ide> # frozen_string_literal: true
<ide>
<ide> require "test_helper"
<add>require "active_support/testing/method_call_assertions"
<ide> require "stubs/test_connection"
<ide> require "stubs/room"
<ide>
<ide> class ActionCable::Channel::BroadcastingTest < ActiveSupport::TestCase
<add> include ActiveSupport::Testing::MethodCallAssertions
<add>
<ide> class ChatChannel < ActionCable::Channel::Base
<ide> end
<ide>
<ide> class ChatChannel < ActionCable::Channel::Base
<ide> end
<ide>
<ide> test "broadcasts_to" do
<del> ActionCable.stubs(:server).returns mock().tap { |m| m.expects(:broadcast).with("action_cable:channel:broadcasting_test:chat:Room#1-Campfire", "Hello World") }
<del> ChatChannel.broadcast_to(Room.new(1), "Hello World")
<add> assert_called_with(
<add> ActionCable.server,
<add> :broadcast,
<add> [
<add> "action_cable:channel:broadcasting_test:chat:Room#1-Campfire",
<add> "Hello World"
<add> ]
<add> ) do
<add> ChatChannel.broadcast_to(Room.new(1), "Hello World")
<add> end
<ide> end
<ide>
<ide> test "broadcasting_for with an object" do
<ide><path>actioncable/test/channel/periodic_timers_test.rb
<ide> require "stubs/test_connection"
<ide> require "stubs/room"
<ide> require "active_support/time"
<add>require "active_support/testing/method_call_assertions"
<ide>
<ide> class ActionCable::Channel::PeriodicTimersTest < ActiveSupport::TestCase
<add> include ActiveSupport::Testing::MethodCallAssertions
<add>
<ide> class ChatChannel < ActionCable::Channel::Base
<ide> # Method name arg
<ide> periodically :send_updates, every: 1
<ide> def ping
<ide> end
<ide>
<ide> test "timer start and stop" do
<del> @connection.server.event_loop.expects(:timer).times(3).returns(stub(shutdown: nil))
<del> channel = ChatChannel.new @connection, "{id: 1}", id: 1
<add> mock = Minitest::Mock.new
<add> 3.times { mock.expect(:shutdown, nil) }
<add>
<add> assert_called(
<add> @connection.server.event_loop,
<add> :timer,
<add> times: 3,
<add> returns: mock
<add> ) do
<add> channel = ChatChannel.new @connection, "{id: 1}", id: 1
<add>
<add> channel.subscribe_to_channel
<add> channel.unsubscribe_from_channel
<add> assert_equal [], channel.send(:active_periodic_timers)
<add> end
<ide>
<del> channel.subscribe_to_channel
<del> channel.unsubscribe_from_channel
<del> assert_equal [], channel.send(:active_periodic_timers)
<add> assert mock.verify
<ide> end
<ide> end
<ide><path>actioncable/test/channel/rejection_test.rb
<ide> # frozen_string_literal: true
<ide>
<ide> require "test_helper"
<add>require "minitest/mock"
<ide> require "stubs/test_connection"
<ide> require "stubs/room"
<ide>
<ide> def secret_action
<ide> end
<ide>
<ide> test "subscription rejection" do
<del> @connection.expects(:subscriptions).returns mock().tap { |m| m.expects(:remove_subscription).with instance_of(SecretChannel) }
<del> @channel = SecretChannel.new @connection, "{id: 1}", id: 1
<del> @channel.subscribe_to_channel
<add> subscriptions = Minitest::Mock.new
<add> subscriptions.expect(:remove_subscription, SecretChannel, [SecretChannel])
<ide>
<del> expected = { "identifier" => "{id: 1}", "type" => "reject_subscription" }
<del> assert_equal expected, @connection.last_transmission
<add> @connection.stub(:subscriptions, subscriptions) do
<add> @channel = SecretChannel.new @connection, "{id: 1}", id: 1
<add> @channel.subscribe_to_channel
<add>
<add> expected = { "identifier" => "{id: 1}", "type" => "reject_subscription" }
<add> assert_equal expected, @connection.last_transmission
<add> end
<add>
<add> assert subscriptions.verify
<ide> end
<ide>
<ide> test "does not execute action if subscription is rejected" do
<del> @connection.expects(:subscriptions).returns mock().tap { |m| m.expects(:remove_subscription).with instance_of(SecretChannel) }
<del> @channel = SecretChannel.new @connection, "{id: 1}", id: 1
<del> @channel.subscribe_to_channel
<add> subscriptions = Minitest::Mock.new
<add> subscriptions.expect(:remove_subscription, SecretChannel, [SecretChannel])
<ide>
<del> expected = { "identifier" => "{id: 1}", "type" => "reject_subscription" }
<del> assert_equal expected, @connection.last_transmission
<del> assert_equal 1, @connection.transmissions.size
<add> @connection.stub(:subscriptions, subscriptions) do
<add> @channel = SecretChannel.new @connection, "{id: 1}", id: 1
<add> @channel.subscribe_to_channel
<add>
<add> expected = { "identifier" => "{id: 1}", "type" => "reject_subscription" }
<add> assert_equal expected, @connection.last_transmission
<add> assert_equal 1, @connection.transmissions.size
<add>
<add> @channel.perform_action("action" => :secret_action)
<add> assert_equal 1, @connection.transmissions.size
<add> end
<ide>
<del> @channel.perform_action("action" => :secret_action)
<del> assert_equal 1, @connection.transmissions.size
<add> assert subscriptions.verify
<ide> end
<ide> end
<ide><path>actioncable/test/channel/stream_test.rb
<ide>
<ide> require "test_helper"
<ide> require "active_support/testing/method_call_assertions"
<add>require "minitest/mock"
<ide> require "stubs/test_connection"
<ide> require "stubs/room"
<ide>
<ide> class StreamTest < ActionCable::TestCase
<ide> test "streaming start and stop" do
<ide> run_in_eventmachine do
<ide> connection = TestConnection.new
<del> connection.pubsub.expects(:subscribe).with("test_room_1", kind_of(Proc), kind_of(Proc))
<del> channel = ChatChannel.new connection, "{id: 1}", id: 1
<del> channel.subscribe_to_channel
<add> pubsub = Minitest::Mock.new connection.pubsub
<ide>
<del> wait_for_async
<add> pubsub.expect(:subscribe, nil, ["test_room_1", Proc, Proc])
<add> pubsub.expect(:unsubscribe, nil, ["test_room_1", Proc])
<add>
<add> connection.stub(:pubsub, pubsub) do
<add> channel = ChatChannel.new connection, "{id: 1}", id: 1
<add> channel.subscribe_to_channel
<add>
<add> wait_for_async
<add> channel.unsubscribe_from_channel
<add> end
<ide>
<del> connection.pubsub.expects(:unsubscribe)
<del> channel.unsubscribe_from_channel
<add> assert pubsub.verify
<ide> end
<ide> end
<ide>
<ide> test "stream from non-string channel" do
<ide> run_in_eventmachine do
<ide> connection = TestConnection.new
<del> connection.pubsub.expects(:subscribe).with("channel", kind_of(Proc), kind_of(Proc))
<add> pubsub = Minitest::Mock.new connection.pubsub
<ide>
<del> channel = SymbolChannel.new connection, ""
<del> channel.subscribe_to_channel
<add> pubsub.expect(:subscribe, nil, ["channel", Proc, Proc])
<add> pubsub.expect(:unsubscribe, nil, ["channel", Proc])
<ide>
<del> wait_for_async
<add> connection.stub(:pubsub, pubsub) do
<add> channel = SymbolChannel.new connection, ""
<add> channel.subscribe_to_channel
<add>
<add> wait_for_async
<add>
<add> channel.unsubscribe_from_channel
<add> end
<ide>
<del> connection.pubsub.expects(:unsubscribe)
<del> channel.unsubscribe_from_channel
<add> assert pubsub.verify
<ide> end
<ide> end
<ide>
<ide><path>actioncable/test/connection/client_socket_test.rb
<ide> def on_error(message)
<ide>
<ide> # Internal hax = :(
<ide> client = connection.websocket.send(:websocket)
<del> client.instance_variable_get("@stream").expects(:write).raises("foo")
<add> client.instance_variable_get("@stream").stub(:write, proc { raise "foo" }) do
<ide>
<del> assert_not_called(client, :client_gone) do
<del> client.write("boo")
<add> assert_not_called(client, :client_gone) do
<add> client.write("boo")
<add> end
<ide> end
<ide> assert_equal %w[ foo ], connection.errors
<ide> end
<ide><path>actioncable/test/connection/stream_test.rb
<ide>
<ide> require "test_helper"
<ide> require "active_support/testing/method_call_assertions"
<add>require "minitest/mock"
<ide> require "stubs/test_server"
<ide>
<ide> class ActionCable::Connection::StreamTest < ActionCable::TestCase
<ide> def on_error(message)
<ide>
<ide> # Internal hax = :(
<ide> client = connection.websocket.send(:websocket)
<del> client.instance_variable_get("@stream").instance_variable_get("@rack_hijack_io").expects(:write).raises(closed_exception, "foo")
<del>
<del> assert_called(client, :client_gone) do
<del> client.write("boo")
<add> rack_hijack_io = client.instance_variable_get("@stream").instance_variable_get("@rack_hijack_io")
<add> rack_hijack_io.stub(:write, proc { raise(closed_exception, "foo") }) do
<add> assert_called(client, :client_gone) do
<add> client.write("boo")
<add> end
<ide> end
<ide> assert_equal [], connection.errors
<ide> end
<ide><path>actioncable/test/stubs/test_connection.rb
<ide> require "stubs/user"
<ide>
<ide> class TestConnection
<del> attr_reader :identifiers, :logger, :current_user, :server, :transmissions
<add> attr_reader :identifiers, :logger, :current_user, :server, :subscriptions, :transmissions
<ide>
<ide> delegate :pubsub, to: :server
<ide> | 8 |
Javascript | Javascript | condense the duration.fn a little bit | b8eb4db499e679e3502d16c49b4ad2658681ed31 | <ide><path>moment.js
<ide> }
<ide>
<ide> return output;
<del> }
<del> };
<add> },
<ide>
<del> moment.duration.fn.lang = moment.fn.lang;
<add> lang : moment.fn.lang
<add> };
<ide>
<ide> function makeDurationGetter(name) {
<ide> moment.duration.fn[name] = function () { | 1 |
Python | Python | fix typo in unit tests | b97dbab998640479e8ba0dfbe8fa1759908195df | <ide><path>spacy/tests/tokenizer/test_whitespace.py
<ide> def test_tokenizer_splits_double_space(tokenizer, text):
<ide>
<ide>
<ide> @pytest.mark.parametrize("text", ["lorem ipsum "])
<del>def test_tokenizer_handles_double_trainling_ws(tokenizer, text):
<add>def test_tokenizer_handles_double_trailing_ws(tokenizer, text):
<ide> tokens = tokenizer(text)
<ide> assert repr(tokens.text_with_ws) == repr(text)
<ide> | 1 |
Text | Text | add link to section | 094b2ae9ba2db28254bb09207a58bd9730ff097d | <ide><path>README.md
<ide> import the keys:
<ide> $ gpg --keyserver hkps://keys.openpgp.org --recv-keys DD8F2338BAE7501E3DD5AC78C273792F7D83545D
<ide> ```
<ide>
<del>See the bottom of this README for a full script to import active release keys.
<add>See [Release keys](#release-keys) for a script to import active release keys.
<ide>
<ide> Next, download the `SHASUMS256.txt.sig` for the release:
<ide>
<ide> gpg --keyserver hkps://keys.openpgp.org --recv-keys 108F52B48DB57BB0CC439B2997B0
<ide> gpg --keyserver hkps://keys.openpgp.org --recv-keys B9E2F5981AA6E0CD28160D9FF13993A75599653C
<ide> ```
<ide>
<del>See the section above on [Verifying binaries](#verifying-binaries) for how to
<del>use these keys to verify a downloaded file.
<add>See [Verifying binaries](#verifying-binaries) for how to use these keys to
<add>verify a downloaded file.
<ide>
<ide> <details>
<ide> | 1 |
Mixed | Python | tensorflow qa example | e3cb7a0b60890cdc0b62c7f3f7ffe87cb6b3f799 | <ide><path>examples/tensorflow/question-answering/README.md
<ide> <!---
<del>Copyright 2020 The HuggingFace Team. All rights reserved.
<add>Copyright 2021 The HuggingFace Team. All rights reserved.
<ide>
<ide> Licensed under the Apache License, Version 2.0 (the "License");
<ide> you may not use this file except in compliance with the License.
<ide> See the License for the specific language governing permissions and
<ide> limitations under the License.
<ide> -->
<ide>
<del>## SQuAD with the Tensorflow Trainer
<del>
<del>```bash
<del>python run_tf_squad.py \
<del> --model_name_or_path bert-base-uncased \
<del> --output_dir model \
<del> --max_seq_length 384 \
<del> --num_train_epochs 2 \
<del> --per_gpu_train_batch_size 8 \
<del> --per_gpu_eval_batch_size 16 \
<del> --do_train \
<del> --logging_dir logs \
<del> --logging_steps 10 \
<del> --learning_rate 3e-5 \
<del> --doc_stride 128
<del>```
<add># Question answering example
<add>
<add>This folder contains the `run_qa.py` script, demonstrating *question answering* with the 🤗 Transformers library.
<add>For straightforward use-cases you may be able to use this script without modification, although we have also
<add>included comments in the code to indicate areas that you may need to adapt to your own projects.
<add>
<add>### Usage notes
<add>Note that when contexts are long they may be split into multiple training cases, not all of which may contain
<add>the answer span.
<add>
<add>As-is, the example script will train on SQuAD or any other question-answering dataset formatted the same way, and can handle user
<add>inputs as well.
<add>
<add>### Multi-GPU and TPU usage
<ide>
<del>For the moment evaluation is not available in the Tensorflow Trainer only the training.
<add>By default, the script uses a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs
<add>can also be used by passing the name of the TPU resource with the `--tpu` argument. There are some issues surrounding
<add>these strategies and our models right now, which are most likely to appear in the evaluation/prediction steps. We're
<add>actively working on better support for multi-GPU and TPU training in TF, but if you encounter problems a quick
<add>workaround is to train in the multi-GPU or TPU context and then perform predictions outside of it.
<add>
<add>### Memory usage and data loading
<add>
<add>One thing to note is that all data is loaded into memory in this script. Most question answering datasets are small
<add>enough that this is not an issue, but if you have a very large dataset you will need to modify the script to handle
<add>data streaming. This is particularly challenging for TPUs, given the stricter requirements and the sheer volume of data
<add>required to keep them fed. A full explanation of all the possible pitfalls is a bit beyond this example script and
<add>README, but for more information you can see the 'Input Datasets' section of
<add>[this document](https://www.tensorflow.org/guide/tpu).
<add>
<add>### Example command
<add>```
<add>python run_qa.py \
<add>--model_name_or_path distilbert-base-cased \
<add>--output_dir output \
<add>--dataset_name squad \
<add>--do_train \
<add>--do_eval \
<add>```
<ide><path>examples/tensorflow/question-answering/run_qa.py
<add>#!/usr/bin/env python
<add># coding=utf-8
<add># Copyright 2020 The HuggingFace Team All rights reserved.
<add>#
<add># Licensed under the Apache License, Version 2.0 (the "License");
<add># you may not use this file except in compliance with the License.
<add># You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
<add>"""
<add>Fine-tuning the library models for question answering.
<add>"""
<add># You can also adapt this script on your own question answering task. Pointers for this are left as comments.
<add>
<add>import logging
<add>import os
<add>import sys
<add>from dataclasses import dataclass, field
<add>from pathlib import Path
<add>from typing import Optional
<add>
<add>import tensorflow as tf
<add>from datasets import load_dataset, load_metric
<add>
<add>import transformers
<add>from transformers import (
<add> AutoConfig,
<add> AutoTokenizer,
<add> EvalPrediction,
<add> HfArgumentParser,
<add> PreTrainedTokenizerFast,
<add> TFAutoModelForQuestionAnswering,
<add> TFTrainingArguments,
<add> set_seed,
<add>)
<add>from transformers.file_utils import CONFIG_NAME, TF2_WEIGHTS_NAME
<add>from transformers.utils import check_min_version
<add>from utils_qa import postprocess_qa_predictions
<add>
<add>
<add># Will error if the minimal version of Transformers is not installed. Remove at your own risks.
<add>check_min_version("4.7.0.dev0")
<add>
<add>logger = logging.getLogger(__name__)
<add>
<add>
<add># region Arguments
<add>@dataclass
<add>class ModelArguments:
<add> """
<add> Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
<add> """
<add>
<add> model_name_or_path: str = field(
<add> metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
<add> )
<add> config_name: Optional[str] = field(
<add> default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
<add> )
<add> tokenizer_name: Optional[str] = field(
<add> default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
<add> )
<add> cache_dir: Optional[str] = field(
<add> default=None,
<add> metadata={"help": "Path to directory to store the pretrained models downloaded from huggingface.co"},
<add> )
<add> model_revision: str = field(
<add> default="main",
<add> metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
<add> )
<add> use_auth_token: bool = field(
<add> default=False,
<add> metadata={
<add> "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
<add> "with private models)."
<add> },
<add> )
<add>
<add>
<add>@dataclass
<add>class DataTrainingArguments:
<add> """
<add> Arguments pertaining to what data we are going to input our model for training and eval.
<add> """
<add>
<add> dataset_name: Optional[str] = field(
<add> default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
<add> )
<add> dataset_config_name: Optional[str] = field(
<add> default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
<add> )
<add> train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
<add> validation_file: Optional[str] = field(
<add> default=None,
<add> metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
<add> )
<add> test_file: Optional[str] = field(
<add> default=None,
<add> metadata={"help": "An optional input test data file to evaluate the perplexity on (a text file)."},
<add> )
<add> overwrite_cache: bool = field(
<add> default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
<add> )
<add> preprocessing_num_workers: Optional[int] = field(
<add> default=None,
<add> metadata={"help": "The number of processes to use for the preprocessing."},
<add> )
<add> max_seq_length: int = field(
<add> default=384,
<add> metadata={
<add> "help": "The maximum total input sequence length after tokenization. Sequences longer "
<add> "than this will be truncated, sequences shorter will be padded."
<add> },
<add> )
<add> pad_to_max_length: bool = field(
<add> default=False,
<add> metadata={
<add> "help": "Whether to pad all samples to `max_seq_length`. "
<add> "If False, will pad the samples dynamically when batching to the maximum length in the batch (which can "
<add> "be faster on GPU but will be slower on TPU)."
<add> },
<add> )
<add> max_train_samples: Optional[int] = field(
<add> default=None,
<add> metadata={
<add> "help": "For debugging purposes or quicker training, truncate the number of training examples to this "
<add> "value if set."
<add> },
<add> )
<add> max_eval_samples: Optional[int] = field(
<add> default=None,
<add> metadata={
<add> "help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
<add> "value if set."
<add> },
<add> )
<add> max_predict_samples: Optional[int] = field(
<add> default=None,
<add> metadata={
<add> "help": "For debugging purposes or quicker training, truncate the number of prediction examples to this "
<add> "value if set."
<add> },
<add> )
<add> version_2_with_negative: bool = field(
<add> default=False, metadata={"help": "If true, some of the examples do not have an answer."}
<add> )
<add> null_score_diff_threshold: float = field(
<add> default=0.0,
<add> metadata={
<add> "help": "The threshold used to select the null answer: if the best answer has a score that is less than "
<add> "the score of the null answer minus this threshold, the null answer is selected for this example. "
<add> "Only useful when `version_2_with_negative=True`."
<add> },
<add> )
<add> doc_stride: int = field(
<add> default=128,
<add> metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
<add> )
<add> n_best_size: int = field(
<add> default=20,
<add> metadata={"help": "The total number of n-best predictions to generate when looking for an answer."},
<add> )
<add> max_answer_length: int = field(
<add> default=30,
<add> metadata={
<add> "help": "The maximum length of an answer that can be generated. This is needed because the start "
<add> "and end predictions are not conditioned on one another."
<add> },
<add> )
<add>
<add> def __post_init__(self):
<add> if (
<add> self.dataset_name is None
<add> and self.train_file is None
<add> and self.validation_file is None
<add> and self.test_file is None
<add> ):
<add> raise ValueError("Need either a dataset name or a training/validation file/test_file.")
<add> else:
<add> if self.train_file is not None:
<add> extension = self.train_file.split(".")[-1]
<add> assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
<add> if self.validation_file is not None:
<add> extension = self.validation_file.split(".")[-1]
<add> assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file."
<add> if self.test_file is not None:
<add> extension = self.test_file.split(".")[-1]
<add> assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."
<add>
<add>
<add># endregion
<add>
<add># region Helper classes
<add>class SavePretrainedCallback(tf.keras.callbacks.Callback):
<add> # Hugging Face models have a save_pretrained() method that saves both the weights and the necessary
<add> # metadata to allow them to be loaded as a pretrained model in future. This is a simple Keras callback
<add> # that saves the model with this method after each epoch.
<add> def __init__(self, output_dir, **kwargs):
<add> super().__init__()
<add> self.output_dir = output_dir
<add>
<add> def on_epoch_end(self, epoch, logs=None):
<add> self.model.save_pretrained(self.output_dir)
<add>
<add>
<add>def convert_dataset_for_tensorflow(
<add> dataset, batch_size, dataset_mode="variable_batch", shuffle=True, drop_remainder=True
<add>):
<add> """Converts a Hugging Face dataset to a Tensorflow Dataset. The dataset_mode controls whether we pad all batches
<add> to the maximum sequence length, or whether we only pad to the maximum length within that batch. The former
<add> is most useful when training on TPU, as a new graph compilation is required for each sequence length.
<add> """
<add>
<add> def densify_ragged_batch(features, label=None):
<add> features = {
<add> feature: ragged_tensor.to_tensor(shape=batch_shape[feature]) if feature in tensor_keys else ragged_tensor
<add> for feature, ragged_tensor in features.items()
<add> }
<add> if label is None:
<add> return features
<add> else:
<add> return features, label
<add>
<add> tensor_keys = ["attention_mask", "input_ids"]
<add> label_keys = ["start_positions", "end_positions"]
<add> if dataset_mode == "variable_batch":
<add> batch_shape = {key: None for key in tensor_keys}
<add> data = {key: tf.ragged.constant(dataset[key]) for key in tensor_keys}
<add> elif dataset_mode == "constant_batch":
<add> data = {key: tf.ragged.constant(dataset[key]) for key in tensor_keys}
<add> batch_shape = {
<add> key: tf.concat(([batch_size], ragged_tensor.bounding_shape()[1:]), axis=0)
<add> for key, ragged_tensor in data.items()
<add> }
<add> else:
<add> raise ValueError("Unknown dataset mode!")
<add>
<add> if all([key in dataset.features for key in label_keys]):
<add> for key in label_keys:
<add> data[key] = tf.convert_to_tensor(dataset[key])
<add> dummy_labels = tf.zeros_like(dataset[key])
<add> tf_dataset = tf.data.Dataset.from_tensor_slices((data, dummy_labels))
<add> else:
<add> tf_dataset = tf.data.Dataset.from_tensor_slices(data)
<add> if shuffle:
<add> tf_dataset = tf_dataset.shuffle(buffer_size=len(dataset))
<add> tf_dataset = tf_dataset.batch(batch_size=batch_size, drop_remainder=drop_remainder).map(densify_ragged_batch)
<add> return tf_dataset
<add>
<add>
<add># endregion
<add>
<add>
<add>def main():
<add> # region Argument parsing
<add> # See all possible arguments in src/transformers/training_args.py
<add> # or by passing the --help flag to this script.
<add> # We now keep distinct sets of args, for a cleaner separation of concerns.
<add>
<add> parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
<add> if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
<add> # If we pass only one argument to the script and it's the path to a json file,
<add> # let's parse it to get our arguments.
<add> model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
<add> else:
<add> model_args, data_args, training_args = parser.parse_args_into_dataclasses()
<add>
<add> output_dir = Path(training_args.output_dir)
<add> output_dir.mkdir(parents=True, exist_ok=True)
<add> # endregion
<add>
<add> # region Checkpoints
<add> checkpoint = None
<add> if len(os.listdir(training_args.output_dir)) > 0 and not training_args.overwrite_output_dir:
<add> if (output_dir / CONFIG_NAME).is_file() and (output_dir / TF2_WEIGHTS_NAME).is_file():
<add> checkpoint = output_dir
<add> logger.info(
<add> f"Checkpoint detected, resuming training from checkpoint in {training_args.output_dir}. To avoid this"
<add> " behavior, change the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
<add> )
<add> else:
<add> raise ValueError(
<add> f"Output directory ({training_args.output_dir}) already exists and is not empty. "
<add> "Use --overwrite_output_dir to continue regardless."
<add> )
<add> # endregion
<add>
<add> # region Logging
<add> logging.basicConfig(
<add> format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
<add> datefmt="%m/%d/%Y %H:%M:%S",
<add> handlers=[logging.StreamHandler(sys.stdout)],
<add> )
<add> logger.setLevel(logging.INFO if training_args.should_log else logging.WARN)
<add>
<add> # Set the verbosity to info of the Transformers logger (on main process only):
<add> if training_args.should_log:
<add> transformers.utils.logging.set_verbosity_info()
<add> transformers.utils.logging.enable_default_handler()
<add> transformers.utils.logging.enable_explicit_format()
<add> logger.info(f"Training/evaluation parameters {training_args}")
<add> # endregion
<add>
<add> # Set seed before initializing model.
<add> set_seed(training_args.seed)
<add>
<add> # region Load Data
<add> # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
<add> # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
<add> # (the dataset will be downloaded automatically from the datasets Hub).
<add> #
<add> # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
<add> # 'text' is found. You can easily tweak this behavior (see below).
<add> #
<add> # In distributed training, the load_dataset function guarantee that only one local process can concurrently
<add> # download the dataset.
<add> if data_args.dataset_name is not None:
<add> # Downloading and loading a dataset from the hub.
<add> datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)
<add> else:
<add> data_files = {}
<add> if data_args.train_file is not None:
<add> data_files["train"] = data_args.train_file
<add> extension = data_args.train_file.split(".")[-1]
<add>
<add> if data_args.validation_file is not None:
<add> data_files["validation"] = data_args.validation_file
<add> extension = data_args.validation_file.split(".")[-1]
<add> if data_args.test_file is not None:
<add> data_files["test"] = data_args.test_file
<add> extension = data_args.test_file.split(".")[-1]
<add> datasets = load_dataset(extension, data_files=data_files, field="data", cache_dir=model_args.cache_dir)
<add> # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
<add> # https://huggingface.co/docs/datasets/loading_datasets.html.
<add> # endregion
<add>
<add> # region Load pretrained model and tokenizer
<add> #
<add> # Distributed training:
<add> # The .from_pretrained methods guarantee that only one local process can concurrently
<add> # download model & vocab.
<add> config = AutoConfig.from_pretrained(
<add> model_args.config_name if model_args.config_name else model_args.model_name_or_path,
<add> cache_dir=model_args.cache_dir,
<add> revision=model_args.model_revision,
<add> use_auth_token=True if model_args.use_auth_token else None,
<add> )
<add> tokenizer = AutoTokenizer.from_pretrained(
<add> model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
<add> cache_dir=model_args.cache_dir,
<add> use_fast=True,
<add> revision=model_args.model_revision,
<add> use_auth_token=True if model_args.use_auth_token else None,
<add> )
<add> # endregion
<add>
<add> # region Tokenizer check: this script requires a fast tokenizer.
<add> if not isinstance(tokenizer, PreTrainedTokenizerFast):
<add> raise ValueError(
<add> "This example script only works for models that have a fast tokenizer. Checkout the big table of models "
<add> "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this "
<add> "requirement"
<add> )
<add> # endregion
<add>
<add> # region Preprocessing the datasets
<add> # Preprocessing is slightly different for training and evaluation.
<add> if training_args.do_train:
<add> column_names = datasets["train"].column_names
<add> elif training_args.do_eval:
<add> column_names = datasets["validation"].column_names
<add> else:
<add> column_names = datasets["test"].column_names
<add> question_column_name = "question" if "question" in column_names else column_names[0]
<add> context_column_name = "context" if "context" in column_names else column_names[1]
<add> answer_column_name = "answers" if "answers" in column_names else column_names[2]
<add>
<add> # Padding side determines if we do (question|context) or (context|question).
<add> pad_on_right = tokenizer.padding_side == "right"
<add>
<add> if data_args.max_seq_length > tokenizer.model_max_length:
<add> logger.warning(
<add> f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the"
<add> f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
<add> )
<add> max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
<add>
<add> # Training preprocessing
<add> def prepare_train_features(examples):
<add> # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
<add> # in one example possible giving several features when a context is long, each of those features having a
<add> # context that overlaps a bit the context of the previous feature.
<add> tokenized_examples = tokenizer(
<add> examples[question_column_name if pad_on_right else context_column_name],
<add> examples[context_column_name if pad_on_right else question_column_name],
<add> truncation="only_second" if pad_on_right else "only_first",
<add> max_length=max_seq_length,
<add> stride=data_args.doc_stride,
<add> return_overflowing_tokens=True,
<add> return_offsets_mapping=True,
<add> padding="max_length" if data_args.pad_to_max_length else False,
<add> )
<add>
<add> # Since one example might give us several features if it has a long context, we need a map from a feature to
<add> # its corresponding example. This key gives us just that.
<add> sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
<add> # The offset mappings will give us a map from token to character position in the original context. This will
<add> # help us compute the start_positions and end_positions.
<add> offset_mapping = tokenized_examples.pop("offset_mapping")
<add>
<add> # Let's label those examples!
<add> tokenized_examples["start_positions"] = []
<add> tokenized_examples["end_positions"] = []
<add>
<add> for i, offsets in enumerate(offset_mapping):
<add> # We will label impossible answers with the index of the CLS token.
<add> input_ids = tokenized_examples["input_ids"][i]
<add> cls_index = input_ids.index(tokenizer.cls_token_id)
<add>
<add> # Grab the sequence corresponding to that example (to know what is the context and what is the question).
<add> sequence_ids = tokenized_examples.sequence_ids(i)
<add>
<add> # One example can give several spans, this is the index of the example containing this span of text.
<add> sample_index = sample_mapping[i]
<add> answers = examples[answer_column_name][sample_index]
<add> # If no answers are given, set the cls_index as answer.
<add> if len(answers["answer_start"]) == 0:
<add> tokenized_examples["start_positions"].append(cls_index)
<add> tokenized_examples["end_positions"].append(cls_index)
<add> else:
<add> # Start/end character index of the answer in the text.
<add> start_char = answers["answer_start"][0]
<add> end_char = start_char + len(answers["text"][0])
<add>
<add> # Start token index of the current span in the text.
<add> token_start_index = 0
<add> while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
<add> token_start_index += 1
<add>
<add> # End token index of the current span in the text.
<add> token_end_index = len(input_ids) - 1
<add> while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
<add> token_end_index -= 1
<add>
<add> # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
<add> if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
<add> tokenized_examples["start_positions"].append(cls_index)
<add> tokenized_examples["end_positions"].append(cls_index)
<add> else:
<add> # Otherwise move the token_start_index and token_end_index to the two ends of the answer.
<add> # Note: we could go after the last offset if the answer is the last word (edge case).
<add> while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
<add> token_start_index += 1
<add> tokenized_examples["start_positions"].append(token_start_index - 1)
<add> while offsets[token_end_index][1] >= end_char:
<add> token_end_index -= 1
<add> tokenized_examples["end_positions"].append(token_end_index + 1)
<add>
<add> return tokenized_examples
<add>
<add> processed_datasets = dict()
<add> if training_args.do_train:
<add> if "train" not in datasets:
<add> raise ValueError("--do_train requires a train dataset")
<add> train_dataset = datasets["train"]
<add> if data_args.max_train_samples is not None:
<add> # We will select sample from whole data if agument is specified
<add> train_dataset = train_dataset.select(range(data_args.max_train_samples))
<add> # Create train feature from dataset
<add> train_dataset = train_dataset.map(
<add> prepare_train_features,
<add> batched=True,
<add> num_proc=data_args.preprocessing_num_workers,
<add> remove_columns=column_names,
<add> load_from_cache_file=not data_args.overwrite_cache,
<add> )
<add> if data_args.max_train_samples is not None:
<add> # Number of samples might increase during Feature Creation, We select only specified max samples
<add> train_dataset = train_dataset.select(range(data_args.max_train_samples))
<add> processed_datasets["train"] = train_dataset
<add>
<add> # Validation preprocessing
<add> def prepare_validation_features(examples):
<add> # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
<add> # in one example possible giving several features when a context is long, each of those features having a
<add> # context that overlaps a bit the context of the previous feature.
<add> tokenized_examples = tokenizer(
<add> examples[question_column_name if pad_on_right else context_column_name],
<add> examples[context_column_name if pad_on_right else question_column_name],
<add> truncation="only_second" if pad_on_right else "only_first",
<add> max_length=max_seq_length,
<add> stride=data_args.doc_stride,
<add> return_overflowing_tokens=True,
<add> return_offsets_mapping=True,
<add> padding="max_length" if data_args.pad_to_max_length else False,
<add> )
<add>
<add> # Since one example might give us several features if it has a long context, we need a map from a feature to
<add> # its corresponding example. This key gives us just that.
<add> sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
<add>
<add> # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
<add> # corresponding example_id and we will store the offset mappings.
<add> tokenized_examples["example_id"] = []
<add>
<add> for i in range(len(tokenized_examples["input_ids"])):
<add> # Grab the sequence corresponding to that example (to know what is the context and what is the question).
<add> sequence_ids = tokenized_examples.sequence_ids(i)
<add> context_index = 1 if pad_on_right else 0
<add>
<add> # One example can give several spans, this is the index of the example containing this span of text.
<add> sample_index = sample_mapping[i]
<add> tokenized_examples["example_id"].append(examples["id"][sample_index])
<add>
<add> # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
<add> # position is part of the context or not.
<add> tokenized_examples["offset_mapping"][i] = [
<add> (o if sequence_ids[k] == context_index else None)
<add> for k, o in enumerate(tokenized_examples["offset_mapping"][i])
<add> ]
<add>
<add> return tokenized_examples
<add>
<add> if training_args.do_eval:
<add> if "validation" not in datasets:
<add> raise ValueError("--do_eval requires a validation dataset")
<add> eval_examples = datasets["validation"]
<add> if data_args.max_eval_samples is not None:
<add> # We will select sample from whole data
<add> eval_examples = eval_examples.select(range(data_args.max_eval_samples))
<add> # Validation Feature Creation
<add> eval_dataset = eval_examples.map(
<add> prepare_validation_features,
<add> batched=True,
<add> num_proc=data_args.preprocessing_num_workers,
<add> remove_columns=column_names,
<add> load_from_cache_file=not data_args.overwrite_cache,
<add> )
<add> if data_args.max_eval_samples is not None:
<add> # During Feature creation dataset samples might increase, we will select required samples again
<add> eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
<add> processed_datasets["validation"] = eval_dataset
<add>
<add> if training_args.do_predict:
<add> if "test" not in datasets:
<add> raise ValueError("--do_predict requires a test dataset")
<add> predict_examples = datasets["test"]
<add> if data_args.max_predict_samples is not None:
<add> # We will select sample from whole data
<add> predict_examples = predict_examples.select(range(data_args.max_predict_samples))
<add> # Predict Feature Creation
<add> predict_dataset = predict_examples.map(
<add> prepare_validation_features,
<add> batched=True,
<add> num_proc=data_args.preprocessing_num_workers,
<add> remove_columns=column_names,
<add> load_from_cache_file=not data_args.overwrite_cache,
<add> )
<add> if data_args.max_predict_samples is not None:
<add> # During Feature creation dataset samples might increase, we will select required samples again
<add> predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))
<add> processed_datasets["test"] = predict_dataset
<add> # endregion
<add>
<add> # region Metrics and Post-processing:
<add> def post_processing_function(examples, features, predictions, stage="eval"):
<add> # Post-processing: we match the start logits and end logits to answers in the original context.
<add> predictions = postprocess_qa_predictions(
<add> examples=examples,
<add> features=features,
<add> predictions=predictions,
<add> version_2_with_negative=data_args.version_2_with_negative,
<add> n_best_size=data_args.n_best_size,
<add> max_answer_length=data_args.max_answer_length,
<add> null_score_diff_threshold=data_args.null_score_diff_threshold,
<add> output_dir=training_args.output_dir,
<add> prefix=stage,
<add> )
<add> # Format the result to the format the metric expects.
<add> if data_args.version_2_with_negative:
<add> formatted_predictions = [
<add> {"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in predictions.items()
<add> ]
<add> else:
<add> formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()]
<add>
<add> references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in examples]
<add> return EvalPrediction(predictions=formatted_predictions, label_ids=references)
<add>
<add> metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad")
<add>
<add> def compute_metrics(p: EvalPrediction):
<add> return metric.compute(predictions=p.predictions, references=p.label_ids)
<add>
<add> # endregion
<add>
<add> with training_args.strategy.scope():
<add> # region Load model
<add> if checkpoint is None:
<add> model_path = model_args.model_name_or_path
<add> else:
<add> model_path = checkpoint
<add> model = TFAutoModelForQuestionAnswering.from_pretrained(
<add> model_path,
<add> config=config,
<add> cache_dir=model_args.cache_dir,
<add> revision=model_args.model_revision,
<add> use_auth_token=True if model_args.use_auth_token else None,
<add> )
<add> optimizer = tf.keras.optimizers.Adam(
<add> learning_rate=training_args.learning_rate,
<add> beta_1=training_args.adam_beta1,
<add> beta_2=training_args.adam_beta2,
<add> epsilon=training_args.adam_epsilon,
<add> clipnorm=training_args.max_grad_norm,
<add> )
<add>
<add> def dummy_loss(y_true, y_pred):
<add> return tf.reduce_mean(y_pred)
<add>
<add> losses = {"loss": dummy_loss}
<add> model.compile(optimizer=optimizer, loss=losses)
<add> # endregion
<add>
<add> # region Training
<add> if training_args.do_train:
<add> # Make a tf.data.Dataset for this
<add> if isinstance(training_args.strategy, tf.distribute.TPUStrategy) or data_args.pad_to_max_length:
<add> logger.info("Padding all batches to max length because argument was set or we're on TPU.")
<add> dataset_mode = "constant_batch"
<add> else:
<add> dataset_mode = "variable_batch"
<add> training_dataset = convert_dataset_for_tensorflow(
<add> processed_datasets["train"],
<add> batch_size=training_args.per_device_train_batch_size,
<add> dataset_mode=dataset_mode,
<add> drop_remainder=True,
<add> shuffle=True,
<add> )
<add> model.fit(training_dataset, epochs=int(training_args.num_train_epochs))
<add> # endregion
<add>
<add> # region Evaluation
<add> if training_args.do_eval:
<add> logger.info("*** Evaluation ***")
<add> eval_inputs = {
<add> "input_ids": tf.ragged.constant(processed_datasets["validation"]["input_ids"]).to_tensor(),
<add> "attention_mask": tf.ragged.constant(processed_datasets["validation"]["attention_mask"]).to_tensor(),
<add> }
<add> eval_predictions = model.predict(eval_inputs)
<add>
<add> post_processed_eval = post_processing_function(
<add> datasets["validation"],
<add> processed_datasets["validation"],
<add> (eval_predictions.start_logits, eval_predictions.end_logits),
<add> )
<add> metrics = compute_metrics(post_processed_eval)
<add> logging.info("Evaluation metrics:")
<add> for metric, value in metrics.items():
<add> logging.info(f"{metric}: {value:.3f}")
<add> # endregion
<add>
<add> # region Prediction
<add> if training_args.do_predict:
<add> logger.info("*** Predict ***")
<add> predict_inputs = {
<add> "input_ids": tf.ragged.constant(processed_datasets["test"]["input_ids"]).to_tensor(),
<add> "attention_mask": tf.ragged.constant(processed_datasets["test"]["attention_mask"]).to_tensor(),
<add> }
<add> test_predictions = model.predict(predict_inputs)
<add> post_processed_test = post_processing_function(
<add> datasets["test"],
<add> processed_datasets["test"],
<add> (test_predictions.start_logits, test_predictions.end_logits),
<add> )
<add> metrics = compute_metrics(post_processed_test)
<add>
<add> logging.info("Test metrics:")
<add> for metric, value in metrics.items():
<add> logging.info(f"{metric}: {value:.3f}")
<add> # endregion
<add>
<add> if training_args.push_to_hub:
<add> model.push_to_hub()
<add>
<add>
<add>if __name__ == "__main__":
<add> main()
<ide><path>examples/tensorflow/question-answering/run_tf_squad.py
<del>#!/usr/bin/env python
<del># coding=utf-8
<del># Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
<del># Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
<del>#
<del># Licensed under the Apache License, Version 2.0 (the "License");
<del># you may not use this file except in compliance with the License.
<del># You may obtain a copy of the License at
<del>#
<del># http://www.apache.org/licenses/LICENSE-2.0
<del>#
<del># Unless required by applicable law or agreed to in writing, software
<del># distributed under the License is distributed on an "AS IS" BASIS,
<del># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<del># See the License for the specific language governing permissions and
<del># limitations under the License.
<del>""" Fine-tuning the library models for question-answering."""
<del>
<del>
<del>import logging
<del>import os
<del>from dataclasses import dataclass, field
<del>from typing import Optional
<del>
<del>import tensorflow as tf
<del>
<del>from transformers import (
<del> AutoConfig,
<del> AutoTokenizer,
<del> HfArgumentParser,
<del> TFAutoModelForQuestionAnswering,
<del> TFTrainer,
<del> TFTrainingArguments,
<del> squad_convert_examples_to_features,
<del>)
<del>from transformers.data.processors.squad import SquadV1Processor, SquadV2Processor
<del>from transformers.utils import logging as hf_logging
<del>
<del>
<del>hf_logging.set_verbosity_info()
<del>hf_logging.enable_default_handler()
<del>hf_logging.enable_explicit_format()
<del>
<del>
<del>logger = logging.getLogger(__name__)
<del>
<del>
<del>@dataclass
<del>class ModelArguments:
<del> """
<del> Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
<del> """
<del>
<del> model_name_or_path: str = field(
<del> metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
<del> )
<del> config_name: Optional[str] = field(
<del> default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
<del> )
<del> tokenizer_name: Optional[str] = field(
<del> default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
<del> )
<del> use_fast: bool = field(default=False, metadata={"help": "Set this flag to use fast tokenization."})
<del> # If you want to tweak more attributes on your tokenizer, you should do it in a distinct script,
<del> # or just modify its tokenizer_config.json.
<del> cache_dir: Optional[str] = field(
<del> default=None,
<del> metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
<del> )
<del>
<del>
<del>@dataclass
<del>class DataTrainingArguments:
<del> """
<del> Arguments pertaining to what data we are going to input our model for training and eval.
<del> """
<del>
<del> data_dir: Optional[str] = field(
<del> default=None, metadata={"help": "The input data dir. Should contain the .json files for the SQuAD task."}
<del> )
<del> use_tfds: Optional[bool] = field(default=True, metadata={"help": "If TFDS should be used or not."})
<del> max_seq_length: int = field(
<del> default=128,
<del> metadata={
<del> "help": "The maximum total input sequence length after tokenization. Sequences longer "
<del> "than this will be truncated, sequences shorter will be padded."
<del> },
<del> )
<del> doc_stride: int = field(
<del> default=128,
<del> metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."},
<del> )
<del> max_query_length: int = field(
<del> default=64,
<del> metadata={
<del> "help": "The maximum number of tokens for the question. Questions longer than this will "
<del> "be truncated to this length."
<del> },
<del> )
<del> max_answer_length: int = field(
<del> default=30,
<del> metadata={
<del> "help": "The maximum length of an answer that can be generated. This is needed because the start "
<del> "and end predictions are not conditioned on one another."
<del> },
<del> )
<del> overwrite_cache: bool = field(
<del> default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
<del> )
<del> version_2_with_negative: bool = field(
<del> default=False, metadata={"help": "If true, the SQuAD examples contain some that do not have an answer."}
<del> )
<del> null_score_diff_threshold: float = field(
<del> default=0.0, metadata={"help": "If null_score - best_non_null is greater than the threshold predict null."}
<del> )
<del> n_best_size: int = field(
<del> default=20, metadata={"help": "If null_score - best_non_null is greater than the threshold predict null."}
<del> )
<del> lang_id: int = field(
<del> default=0,
<del> metadata={
<del> "help": "language id of input for language-specific xlm models (see tokenization_xlm.PRETRAINED_INIT_CONFIGURATION)"
<del> },
<del> )
<del>
<del>
<del>def main():
<del> # See all possible arguments in src/transformers/training_args.py
<del> # or by passing the --help flag to this script.
<del> # We now keep distinct sets of args, for a cleaner separation of concerns.
<del> parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments))
<del> model_args, data_args, training_args = parser.parse_args_into_dataclasses()
<del>
<del> if (
<del> os.path.exists(training_args.output_dir)
<del> and os.listdir(training_args.output_dir)
<del> and training_args.do_train
<del> and not training_args.overwrite_output_dir
<del> ):
<del> raise ValueError(
<del> f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
<del> )
<del>
<del> # Setup logging
<del> logging.basicConfig(
<del> format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
<del> datefmt="%m/%d/%Y %H:%M:%S",
<del> level=logging.INFO,
<del> )
<del> logger.info(
<del> f"n_replicas: {training_args.n_replicas}, distributed training: {bool(training_args.n_replicas > 1)}, "
<del> f"16-bits training: {training_args.fp16}"
<del> )
<del> logger.info(f"Training/evaluation parameters {training_args}")
<del>
<del> # Prepare Question-Answering task
<del> # Load pretrained model and tokenizer
<del> #
<del> # Distributed training:
<del> # The .from_pretrained methods guarantee that only one local process can concurrently
<del> # download model & vocab.
<del>
<del> config = AutoConfig.from_pretrained(
<del> model_args.config_name if model_args.config_name else model_args.model_name_or_path,
<del> cache_dir=model_args.cache_dir,
<del> )
<del> tokenizer = AutoTokenizer.from_pretrained(
<del> model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
<del> cache_dir=model_args.cache_dir,
<del> use_fast=model_args.use_fast,
<del> )
<del>
<del> with training_args.strategy.scope():
<del> model = TFAutoModelForQuestionAnswering.from_pretrained(
<del> model_args.model_name_or_path,
<del> from_pt=bool(".bin" in model_args.model_name_or_path),
<del> config=config,
<del> cache_dir=model_args.cache_dir,
<del> )
<del>
<del> # Get datasets
<del> if data_args.use_tfds:
<del> if data_args.version_2_with_negative:
<del> logger.warning("tensorflow_datasets does not handle version 2 of SQuAD. Switch to version 1 automatically")
<del>
<del> try:
<del> import tensorflow_datasets as tfds
<del> except ImportError:
<del> raise ImportError("If not data_dir is specified, tensorflow_datasets needs to be installed.")
<del>
<del> tfds_examples = tfds.load("squad", data_dir=data_args.data_dir)
<del> train_examples = (
<del> SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=False)
<del> if training_args.do_train
<del> else None
<del> )
<del> eval_examples = (
<del> SquadV1Processor().get_examples_from_dataset(tfds_examples, evaluate=True)
<del> if training_args.do_eval
<del> else None
<del> )
<del> else:
<del> processor = SquadV2Processor() if data_args.version_2_with_negative else SquadV1Processor()
<del> train_examples = processor.get_train_examples(data_args.data_dir) if training_args.do_train else None
<del> eval_examples = processor.get_dev_examples(data_args.data_dir) if training_args.do_eval else None
<del>
<del> train_dataset = (
<del> squad_convert_examples_to_features(
<del> examples=train_examples,
<del> tokenizer=tokenizer,
<del> max_seq_length=data_args.max_seq_length,
<del> doc_stride=data_args.doc_stride,
<del> max_query_length=data_args.max_query_length,
<del> is_training=True,
<del> return_dataset="tf",
<del> )
<del> if training_args.do_train
<del> else None
<del> )
<del>
<del> train_dataset = train_dataset.apply(tf.data.experimental.assert_cardinality(len(train_examples)))
<del>
<del> eval_dataset = (
<del> squad_convert_examples_to_features(
<del> examples=eval_examples,
<del> tokenizer=tokenizer,
<del> max_seq_length=data_args.max_seq_length,
<del> doc_stride=data_args.doc_stride,
<del> max_query_length=data_args.max_query_length,
<del> is_training=False,
<del> return_dataset="tf",
<del> )
<del> if training_args.do_eval
<del> else None
<del> )
<del>
<del> eval_dataset = eval_dataset.apply(tf.data.experimental.assert_cardinality(len(eval_examples)))
<del>
<del> # Initialize our Trainer
<del> trainer = TFTrainer(
<del> model=model,
<del> args=training_args,
<del> train_dataset=train_dataset,
<del> eval_dataset=eval_dataset,
<del> )
<del>
<del> # Training
<del> if training_args.do_train:
<del> trainer.train()
<del> trainer.save_model()
<del> tokenizer.save_pretrained(training_args.output_dir)
<del>
<del>
<del>if __name__ == "__main__":
<del> main()
<ide><path>examples/tensorflow/question-answering/utils_qa.py
<add># coding=utf-8
<add># Copyright 2020 The HuggingFace Team All rights reserved.
<add>#
<add># Licensed under the Apache License, Version 2.0 (the "License");
<add># you may not use this file except in compliance with the License.
<add># You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
<add>"""
<add>Post-processing utilities for question answering.
<add>"""
<add>import collections
<add>import json
<add>import logging
<add>import os
<add>from typing import Optional, Tuple
<add>
<add>import numpy as np
<add>from tqdm.auto import tqdm
<add>
<add>
<add>logger = logging.getLogger(__name__)
<add>
<add>
<add>def postprocess_qa_predictions(
<add> examples,
<add> features,
<add> predictions: Tuple[np.ndarray, np.ndarray],
<add> version_2_with_negative: bool = False,
<add> n_best_size: int = 20,
<add> max_answer_length: int = 30,
<add> null_score_diff_threshold: float = 0.0,
<add> output_dir: Optional[str] = None,
<add> prefix: Optional[str] = None,
<add>):
<add> """
<add> Post-processes the predictions of a question-answering model to convert them to answers that are substrings of the
<add> original contexts. This is the base postprocessing functions for models that only return start and end logits.
<add>
<add> Args:
<add> examples: The non-preprocessed dataset (see the main script for more information).
<add> features: The processed dataset (see the main script for more information).
<add> predictions (:obj:`Tuple[np.ndarray, np.ndarray]`):
<add> The predictions of the model: two arrays containing the start logits and the end logits respectively. Its
<add> first dimension must match the number of elements of :obj:`features`.
<add> version_2_with_negative (:obj:`bool`, `optional`, defaults to :obj:`False`):
<add> Whether or not the underlying dataset contains examples with no answers.
<add> n_best_size (:obj:`int`, `optional`, defaults to 20):
<add> The total number of n-best predictions to generate when looking for an answer.
<add> max_answer_length (:obj:`int`, `optional`, defaults to 30):
<add> The maximum length of an answer that can be generated. This is needed because the start and end predictions
<add> are not conditioned on one another.
<add> null_score_diff_threshold (:obj:`float`, `optional`, defaults to 0):
<add> The threshold used to select the null answer: if the best answer has a score that is less than the score of
<add> the null answer minus this threshold, the null answer is selected for this example (note that the score of
<add> the null answer for an example giving several features is the minimum of the scores for the null answer on
<add> each feature: all features must be aligned on the fact they `want` to predict a null answer).
<add>
<add> Only useful when :obj:`version_2_with_negative` is :obj:`True`.
<add> output_dir (:obj:`str`, `optional`):
<add> If provided, the dictionaries of predictions, n_best predictions (with their scores and logits) and, if
<add> :obj:`version_2_with_negative=True`, the dictionary of the scores differences between best and null
<add> answers, are saved in `output_dir`.
<add> prefix (:obj:`str`, `optional`):
<add> If provided, the dictionaries mentioned above are saved with `prefix` added to their names.
<add> is_world_process_zero (:obj:`bool`, `optional`, defaults to :obj:`True`):
<add> Whether this process is the main process or not (used to determine if logging/saves should be done).
<add> """
<add> assert len(predictions) == 2, "`predictions` should be a tuple with two elements (start_logits, end_logits)."
<add> all_start_logits, all_end_logits = predictions
<add>
<add> assert len(predictions[0]) == len(features), f"Got {len(predictions[0])} predictions and {len(features)} features."
<add>
<add> # Build a map example to its corresponding features.
<add> example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
<add> features_per_example = collections.defaultdict(list)
<add> for i, feature in enumerate(features):
<add> features_per_example[example_id_to_index[feature["example_id"]]].append(i)
<add>
<add> # The dictionaries we have to fill.
<add> all_predictions = collections.OrderedDict()
<add> all_nbest_json = collections.OrderedDict()
<add> if version_2_with_negative:
<add> scores_diff_json = collections.OrderedDict()
<add>
<add> # Logging.
<add> logger.info(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
<add>
<add> # Let's loop over all the examples!
<add> for example_index, example in enumerate(tqdm(examples)):
<add> # Those are the indices of the features associated to the current example.
<add> feature_indices = features_per_example[example_index]
<add>
<add> min_null_prediction = None
<add> prelim_predictions = []
<add>
<add> # Looping through all the features associated to the current example.
<add> for feature_index in feature_indices:
<add> # We grab the predictions of the model for this feature.
<add> start_logits = all_start_logits[feature_index]
<add> end_logits = all_end_logits[feature_index]
<add> # This is what will allow us to map some the positions in our logits to span of texts in the original
<add> # context.
<add> offset_mapping = features[feature_index]["offset_mapping"]
<add> # Optional `token_is_max_context`, if provided we will remove answers that do not have the maximum context
<add> # available in the current feature.
<add> token_is_max_context = features[feature_index].get("token_is_max_context", None)
<add>
<add> # Update minimum null prediction.
<add> feature_null_score = start_logits[0] + end_logits[0]
<add> if min_null_prediction is None or min_null_prediction["score"] > feature_null_score:
<add> min_null_prediction = {
<add> "offsets": (0, 0),
<add> "score": feature_null_score,
<add> "start_logit": start_logits[0],
<add> "end_logit": end_logits[0],
<add> }
<add>
<add> # Go through all possibilities for the `n_best_size` greater start and end logits.
<add> start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
<add> end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
<add> for start_index in start_indexes:
<add> for end_index in end_indexes:
<add> # Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
<add> # to part of the input_ids that are not in the context.
<add> if (
<add> start_index >= len(offset_mapping)
<add> or end_index >= len(offset_mapping)
<add> or offset_mapping[start_index] is None
<add> or offset_mapping[end_index] is None
<add> ):
<add> continue
<add> # Don't consider answers with a length that is either < 0 or > max_answer_length.
<add> if end_index < start_index or end_index - start_index + 1 > max_answer_length:
<add> continue
<add> # Don't consider answer that don't have the maximum context available (if such information is
<add> # provided).
<add> if token_is_max_context is not None and not token_is_max_context.get(str(start_index), False):
<add> continue
<add> prelim_predictions.append(
<add> {
<add> "offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]),
<add> "score": start_logits[start_index] + end_logits[end_index],
<add> "start_logit": start_logits[start_index],
<add> "end_logit": end_logits[end_index],
<add> }
<add> )
<add> if version_2_with_negative:
<add> # Add the minimum null prediction
<add> prelim_predictions.append(min_null_prediction)
<add> null_score = min_null_prediction["score"]
<add>
<add> # Only keep the best `n_best_size` predictions.
<add> predictions = sorted(prelim_predictions, key=lambda x: x["score"], reverse=True)[:n_best_size]
<add>
<add> # Add back the minimum null prediction if it was removed because of its low score.
<add> if version_2_with_negative and not any(p["offsets"] == (0, 0) for p in predictions):
<add> predictions.append(min_null_prediction)
<add>
<add> # Use the offsets to gather the answer text in the original context.
<add> context = example["context"]
<add> for pred in predictions:
<add> offsets = pred.pop("offsets")
<add> pred["text"] = context[offsets[0] : offsets[1]]
<add>
<add> # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
<add> # failure.
<add> if len(predictions) == 0 or (len(predictions) == 1 and predictions[0]["text"] == ""):
<add> predictions.insert(0, {"text": "empty", "start_logit": 0.0, "end_logit": 0.0, "score": 0.0})
<add>
<add> # Compute the softmax of all scores (we do it with numpy to stay independent from torch/tf in this file, using
<add> # the LogSumExp trick).
<add> scores = np.array([pred.pop("score") for pred in predictions])
<add> exp_scores = np.exp(scores - np.max(scores))
<add> probs = exp_scores / exp_scores.sum()
<add>
<add> # Include the probabilities in our predictions.
<add> for prob, pred in zip(probs, predictions):
<add> pred["probability"] = prob
<add>
<add> # Pick the best prediction. If the null answer is not possible, this is easy.
<add> if not version_2_with_negative:
<add> all_predictions[example["id"]] = predictions[0]["text"]
<add> else:
<add> # Otherwise we first need to find the best non-empty prediction.
<add> i = 0
<add> while predictions[i]["text"] == "":
<add> i += 1
<add> best_non_null_pred = predictions[i]
<add>
<add> # Then we compare to the null prediction using the threshold.
<add> score_diff = null_score - best_non_null_pred["start_logit"] - best_non_null_pred["end_logit"]
<add> scores_diff_json[example["id"]] = float(score_diff) # To be JSON-serializable.
<add> if score_diff > null_score_diff_threshold:
<add> all_predictions[example["id"]] = ""
<add> else:
<add> all_predictions[example["id"]] = best_non_null_pred["text"]
<add>
<add> # Make `predictions` JSON-serializable by casting np.float back to float.
<add> all_nbest_json[example["id"]] = [
<add> {k: (float(v) if isinstance(v, (np.float16, np.float32, np.float64)) else v) for k, v in pred.items()}
<add> for pred in predictions
<add> ]
<add>
<add> # If we have an output_dir, let's save all those dicts.
<add> if output_dir is not None:
<add> assert os.path.isdir(output_dir), f"{output_dir} is not a directory."
<add>
<add> prediction_file = os.path.join(
<add> output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json"
<add> )
<add> nbest_file = os.path.join(
<add> output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json"
<add> )
<add> if version_2_with_negative:
<add> null_odds_file = os.path.join(
<add> output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds.json"
<add> )
<add>
<add> logger.info(f"Saving predictions to {prediction_file}.")
<add> with open(prediction_file, "w") as writer:
<add> writer.write(json.dumps(all_predictions, indent=4) + "\n")
<add> logger.info(f"Saving nbest_preds to {nbest_file}.")
<add> with open(nbest_file, "w") as writer:
<add> writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
<add> if version_2_with_negative:
<add> logger.info(f"Saving null_odds to {null_odds_file}.")
<add> with open(null_odds_file, "w") as writer:
<add> writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
<add>
<add> return all_predictions
<add>
<add>
<add>def postprocess_qa_predictions_with_beam_search(
<add> examples,
<add> features,
<add> predictions: Tuple[np.ndarray, np.ndarray],
<add> version_2_with_negative: bool = False,
<add> n_best_size: int = 20,
<add> max_answer_length: int = 30,
<add> start_n_top: int = 5,
<add> end_n_top: int = 5,
<add> output_dir: Optional[str] = None,
<add> prefix: Optional[str] = None,
<add> is_world_process_zero: bool = True,
<add>):
<add> """
<add> Post-processes the predictions of a question-answering model with beam search to convert them to answers that are substrings of the
<add> original contexts. This is the postprocessing functions for models that return start and end logits, indices, as well as
<add> cls token predictions.
<add>
<add> Args:
<add> examples: The non-preprocessed dataset (see the main script for more information).
<add> features: The processed dataset (see the main script for more information).
<add> predictions (:obj:`Tuple[np.ndarray, np.ndarray]`):
<add> The predictions of the model: two arrays containing the start logits and the end logits respectively. Its
<add> first dimension must match the number of elements of :obj:`features`.
<add> version_2_with_negative (:obj:`bool`, `optional`, defaults to :obj:`False`):
<add> Whether or not the underlying dataset contains examples with no answers.
<add> n_best_size (:obj:`int`, `optional`, defaults to 20):
<add> The total number of n-best predictions to generate when looking for an answer.
<add> max_answer_length (:obj:`int`, `optional`, defaults to 30):
<add> The maximum length of an answer that can be generated. This is needed because the start and end predictions
<add> are not conditioned on one another.
<add> start_n_top (:obj:`int`, `optional`, defaults to 5):
<add> The number of top start logits too keep when searching for the :obj:`n_best_size` predictions.
<add> end_n_top (:obj:`int`, `optional`, defaults to 5):
<add> The number of top end logits too keep when searching for the :obj:`n_best_size` predictions.
<add> output_dir (:obj:`str`, `optional`):
<add> If provided, the dictionaries of predictions, n_best predictions (with their scores and logits) and, if
<add> :obj:`version_2_with_negative=True`, the dictionary of the scores differences between best and null
<add> answers, are saved in `output_dir`.
<add> prefix (:obj:`str`, `optional`):
<add> If provided, the dictionaries mentioned above are saved with `prefix` added to their names.
<add> is_world_process_zero (:obj:`bool`, `optional`, defaults to :obj:`True`):
<add> Whether this process is the main process or not (used to determine if logging/saves should be done).
<add> """
<add> assert len(predictions) == 5, "`predictions` should be a tuple with five elements."
<add> start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits = predictions
<add>
<add> assert len(predictions[0]) == len(
<add> features
<add> ), f"Got {len(predictions[0])} predicitions and {len(features)} features."
<add>
<add> # Build a map example to its corresponding features.
<add> example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
<add> features_per_example = collections.defaultdict(list)
<add> for i, feature in enumerate(features):
<add> features_per_example[example_id_to_index[feature["example_id"]]].append(i)
<add>
<add> # The dictionaries we have to fill.
<add> all_predictions = collections.OrderedDict()
<add> all_nbest_json = collections.OrderedDict()
<add> scores_diff_json = collections.OrderedDict() if version_2_with_negative else None
<add>
<add> # Logging.
<add> logger.setLevel(logging.INFO if is_world_process_zero else logging.WARN)
<add> logger.info(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
<add>
<add> # Let's loop over all the examples!
<add> for example_index, example in enumerate(tqdm(examples)):
<add> # Those are the indices of the features associated to the current example.
<add> feature_indices = features_per_example[example_index]
<add>
<add> min_null_score = None
<add> prelim_predictions = []
<add>
<add> # Looping through all the features associated to the current example.
<add> for feature_index in feature_indices:
<add> # We grab the predictions of the model for this feature.
<add> start_log_prob = start_top_log_probs[feature_index]
<add> start_indexes = start_top_index[feature_index]
<add> end_log_prob = end_top_log_probs[feature_index]
<add> end_indexes = end_top_index[feature_index]
<add> feature_null_score = cls_logits[feature_index]
<add> # This is what will allow us to map some the positions in our logits to span of texts in the original
<add> # context.
<add> offset_mapping = features[feature_index]["offset_mapping"]
<add> # Optional `token_is_max_context`, if provided we will remove answers that do not have the maximum context
<add> # available in the current feature.
<add> token_is_max_context = features[feature_index].get("token_is_max_context", None)
<add>
<add> # Update minimum null prediction
<add> if min_null_score is None or feature_null_score < min_null_score:
<add> min_null_score = feature_null_score
<add>
<add> # Go through all possibilities for the `n_start_top`/`n_end_top` greater start and end logits.
<add> for i in range(start_n_top):
<add> for j in range(end_n_top):
<add> start_index = int(start_indexes[i])
<add> j_index = i * end_n_top + j
<add> end_index = int(end_indexes[j_index])
<add> # Don't consider out-of-scope answers (last part of the test should be unnecessary because of the
<add> # p_mask but let's not take any risk)
<add> if (
<add> start_index >= len(offset_mapping)
<add> or end_index >= len(offset_mapping)
<add> or offset_mapping[start_index] is None
<add> or offset_mapping[end_index] is None
<add> ):
<add> continue
<add> # Don't consider answers with a length negative or > max_answer_length.
<add> if end_index < start_index or end_index - start_index + 1 > max_answer_length:
<add> continue
<add> # Don't consider answer that don't have the maximum context available (if such information is
<add> # provided).
<add> if token_is_max_context is not None and not token_is_max_context.get(str(start_index), False):
<add> continue
<add> prelim_predictions.append(
<add> {
<add> "offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]),
<add> "score": start_log_prob[i] + end_log_prob[j_index],
<add> "start_log_prob": start_log_prob[i],
<add> "end_log_prob": end_log_prob[j_index],
<add> }
<add> )
<add>
<add> # Only keep the best `n_best_size` predictions.
<add> predictions = sorted(prelim_predictions, key=lambda x: x["score"], reverse=True)[:n_best_size]
<add>
<add> # Use the offsets to gather the answer text in the original context.
<add> context = example["context"]
<add> for pred in predictions:
<add> offsets = pred.pop("offsets")
<add> pred["text"] = context[offsets[0] : offsets[1]]
<add>
<add> # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
<add> # failure.
<add> if len(predictions) == 0:
<add> predictions.insert(0, {"text": "", "start_logit": -1e-6, "end_logit": -1e-6, "score": -2e-6})
<add>
<add> # Compute the softmax of all scores (we do it with numpy to stay independent from torch/tf in this file, using
<add> # the LogSumExp trick).
<add> scores = np.array([pred.pop("score") for pred in predictions])
<add> exp_scores = np.exp(scores - np.max(scores))
<add> probs = exp_scores / exp_scores.sum()
<add>
<add> # Include the probabilities in our predictions.
<add> for prob, pred in zip(probs, predictions):
<add> pred["probability"] = prob
<add>
<add> # Pick the best prediction and set the probability for the null answer.
<add> all_predictions[example["id"]] = predictions[0]["text"]
<add> if version_2_with_negative:
<add> scores_diff_json[example["id"]] = float(min_null_score)
<add>
<add> # Make `predictions` JSON-serializable by casting np.float back to float.
<add> all_nbest_json[example["id"]] = [
<add> {k: (float(v) if isinstance(v, (np.float16, np.float32, np.float64)) else v) for k, v in pred.items()}
<add> for pred in predictions
<add> ]
<add>
<add> # If we have an output_dir, let's save all those dicts.
<add> if output_dir is not None:
<add> assert os.path.isdir(output_dir), f"{output_dir} is not a directory."
<add>
<add> prediction_file = os.path.join(
<add> output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json"
<add> )
<add> nbest_file = os.path.join(
<add> output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json"
<add> )
<add> if version_2_with_negative:
<add> null_odds_file = os.path.join(
<add> output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds.json"
<add> )
<add>
<add> print(f"Saving predictions to {prediction_file}.")
<add> with open(prediction_file, "w") as writer:
<add> writer.write(json.dumps(all_predictions, indent=4) + "\n")
<add> print(f"Saving nbest_preds to {nbest_file}.")
<add> with open(nbest_file, "w") as writer:
<add> writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
<add> if version_2_with_negative:
<add> print(f"Saving null_odds to {null_odds_file}.")
<add> with open(null_odds_file, "w") as writer:
<add> writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
<add>
<add> return all_predictions, scores_diff_json | 4 |
Javascript | Javascript | do proper path-filtering for status | 73cb867ccd3f734cbd4fa6aec682c32c164710dc | <ide><path>src/git-repository-async.js
<ide> export default class GitRepositoryAsync {
<ide> // Returns a {Promise} which resolves to a {Boolean} that's true if the `path`
<ide> // is modified.
<ide> isPathModified (_path) {
<del> return this._filterStatusesByPath(_path)
<add> return this._getStatus([_path])
<ide> .then(statuses => statuses.some(status => status.isModified()))
<ide> }
<ide>
<ide> export default class GitRepositoryAsync {
<ide> // Returns a {Promise} which resolves to a {Boolean} that's true if the `path`
<ide> // is new.
<ide> isPathNew (_path) {
<del> return this._filterStatusesByPath(_path)
<add> return this._getStatus([_path])
<ide> .then(statuses => statuses.some(status => status.isNew()))
<ide> }
<ide>
<ide> export default class GitRepositoryAsync {
<ide> // value can be passed to {::isStatusModified} or {::isStatusNew} to get more
<ide> // information.
<ide> getDirectoryStatus (directoryPath) {
<del> // XXX _filterSBD already gets repoPromise
<ide> return this.repoPromise
<ide> .then(repo => {
<ide> const relativePath = this.relativize(directoryPath, repo.workdir())
<del> return this._filterStatusesByDirectory(relativePath)
<add> return this._getStatus([relativePath])
<ide> })
<ide> .then(statuses => {
<ide> return Promise.all(statuses.map(s => s.statusBit())).then(bits => {
<ide> export default class GitRepositoryAsync {
<ide> return this.repoPromise
<ide> .then(repo => {
<ide> relativePath = this.relativize(_path, repo.workdir())
<del> return this._filterStatusesByPath(_path)
<add> return this._getStatus([relativePath])
<ide> })
<ide> .then(statuses => {
<ide> const cachedStatus = this.pathStatusCache[relativePath] || 0
<ide> export default class GitRepositoryAsync {
<ide> return this.repoPromise
<ide> .then(repo => repo.getStatus())
<ide> .then(statuses => {
<del> // update the status cache
<ide> const statusPairs = statuses.map(status => [status.path(), status.statusBit()])
<ide> return Promise.all(statusPairs)
<ide> .then(statusesByPath => _.object(statusesByPath))
<ide> export default class GitRepositoryAsync {
<ide> this.subscriptions.add(bufferSubscriptions)
<ide> }
<ide>
<del> // Get the status for the given path.
<add> // Get the status for the given paths.
<ide> //
<del> // * `path` The {String} path whose status is wanted.
<add> // * `paths` The {String} paths whose status is wanted. If undefined, get the
<add> // status for the whole repository.
<ide> //
<del> // Returns a {Promise} which resolves to the {NodeGit.StatusFile} status for
<del> // the path.
<del> _filterStatusesByPath (_path) {
<del> // TODO: Is there a more efficient way to do this?
<del> let basePath = null
<add> // Returns a {Promise} which resolves to an {Array} of {NodeGit.StatusFile}
<add> // statuses for the paths.
<add> _getStatus (paths) {
<ide> return this.repoPromise
<ide> .then(repo => {
<del> basePath = repo.workdir()
<del> return repo.getStatus()
<del> })
<del> .then(statuses => {
<del> return statuses.filter(status => _path === path.join(basePath, status.path()))
<del> })
<del> }
<add> const opts = {
<add> flags: Git.Status.OPT.INCLUDE_UNTRACKED | Git.Status.OPT.RECURSE_UNTRACKED_DIRS | Git.Status.OPT.DISABLE_PATHSPEC_MATCH
<add> }
<ide>
<del> // Get the status for everything in the given directory.
<del> //
<del> // * `directoryPath` The {String} directory whose status is wanted.
<del> //
<del> // Returns a {Promise} which resolves to an {Array} of {NodeGit.StatusFile}
<del> // statuses for every file in the directory.
<del> _filterStatusesByDirectory (directoryPath) {
<del> return this.repoPromise
<del> .then(repo => repo.getStatus())
<del> .then(statuses => {
<del> return statuses.filter(status => status.path().indexOf(directoryPath) === 0)
<add> if (paths) {
<add> opts.pathspec = paths
<add> }
<add>
<add> return repo.getStatus(opts)
<ide> })
<ide> }
<ide> } | 1 |
Ruby | Ruby | remove the functionality introduce in 28d3390 | cb45ee344d5ec6974b78dc593e4eaef2101e3d42 | <ide><path>activerecord/lib/active_record/associations.rb
<ide> def association_accessor_methods(reflection, association_proxy_class)
<ide>
<ide> association = instance_variable_get(ivar) if instance_variable_defined?(ivar)
<ide>
<del> if association.nil? || !association.loaded? || force_reload
<add> if association.nil? || force_reload
<ide> association = association_proxy_class.new(self, reflection)
<ide> retval = association.reload
<ide> if retval.nil? and association_proxy_class == BelongsToAssociation
<ide> def association_accessor_methods(reflection, association_proxy_class)
<ide> end
<ide> end
<ide>
<del> if association_proxy_class == BelongsToAssociation
<del> define_method("#{reflection.primary_key_name}=") do |target_id|
<del> if instance_variable_defined?(ivar)
<del> if association = instance_variable_get(ivar)
<del> association.reset
<del> end
<del> end
<del> write_attribute(reflection.primary_key_name, target_id)
<del> end
<del> end
<del>
<ide> define_method("set_#{reflection.name}_target") do |target|
<ide> return if target.nil? and association_proxy_class == BelongsToAssociation
<ide> association = association_proxy_class.new(self, reflection)
<ide><path>activerecord/test/cases/associations/belongs_to_associations_test.rb
<ide> def test_natural_assignment
<ide> assert_equal apple.id, citibank.firm_id
<ide> end
<ide>
<del> def test_foreign_key_assignment
<del> # Test using an existing record
<del> signals37 = accounts(:signals37)
<del> assert_equal companies(:first_firm), signals37.firm
<del> signals37.firm_id = companies(:another_firm).id
<del> assert_equal companies(:another_firm), signals37.firm
<del>
<del> # Test using a new record
<del> account = Account.new
<del> account.firm_id = companies(:another_firm).id
<del> assert_equal companies(:another_firm), account.firm
<del> end
<del>
<ide> def test_no_unexpected_aliasing
<ide> first_firm = companies(:first_firm)
<ide> another_firm = companies(:another_firm)
<ide><path>activerecord/test/cases/associations_test.rb
<ide> def test_failed_reset_returns_nil
<ide> assert_nil p.author.reset
<ide> end
<ide>
<del> def test_reset_loads_association_next_time
<del> welcome = posts(:welcome)
<del> david = authors(:david)
<del> author_assoc = welcome.author
<del>
<del> assert_equal david, welcome.author # So we can be sure the test works correctly
<del> author_assoc.reset
<del> assert !author_assoc.loaded?
<del> assert_nil author_assoc.target
<del> assert_equal david, welcome.author
<del> end
<del>
<del> def test_assigning_association_id_after_reload
<del> welcome = posts(:welcome)
<del> welcome.reload
<del> assert_nothing_raised do
<del> welcome.author_id = authors(:david).id
<del> end
<del> end
<del>
<ide> def test_reload_returns_assocition
<ide> david = developers(:david)
<ide> assert_nothing_raised do | 3 |
Ruby | Ruby | handle explicit version dsl | a9a62972bdce95a2b94db1be2e732480d88e17eb | <ide><path>Library/Homebrew/dev-cmd/bump-formula-pr.rb
<ide> def bump_formula_pr
<ide> new_tag = ARGV.value("tag")
<ide> new_revision = ARGV.value("revision")
<ide> new_mirror = ARGV.value("mirror")
<add> forced_version = ARGV.value("version")
<ide> new_url_hash = if new_url && new_hash
<ide> true
<ide> elsif new_tag && new_revision
<ide> def bump_formula_pr
<ide> replacement_pairs << [/^( +)(url \"#{new_url}\"\n)/m, "\\1\\2\\1mirror \"#{new_mirror}\"\n"]
<ide> end
<ide>
<add> if forced_version && forced_version != "0"
<add> replacement_pairs << [old_formula_version, forced_version]
<add> elsif forced_version && forced_version == "0"
<add> replacement_pairs << [/^ version \"[a-z\d+\.]+\"\n/m, ""]
<add> end
<ide> new_contents = inreplace_pairs(formula.path, replacement_pairs)
<ide>
<ide> new_formula_version = formula_version(formula, requested_spec, new_contents) | 1 |
PHP | PHP | remove unneeded nullstorage | c9838a73a6590aab49104014cf290673052b7741 | <ide><path>src/Auth/Storage/NullStorage.php
<del><?php
<del>/**
<del> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org)
<del> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<del> *
<del> * Licensed under The MIT License
<del> * For full copyright and license information, please see the LICENSE.txt
<del> * Redistributions of files must retain the above copyright notice.
<del> *
<del> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<del> * @link http://cakephp.org CakePHP(tm) Project
<del> * @since 3.1.0
<del> * @license http://www.opensource.org/licenses/mit-license.php MIT License
<del> */
<del>namespace Cake\Auth\Storage;
<del>
<del>use Cake\Core\InstanceConfigTrait;
<del>use Cake\Network\Request;
<del>
<del>class NullStorage implements StorageInterface
<del>{
<del> public function get()
<del> {
<del> }
<del>
<del> public function set(array $user)
<del> {
<del> }
<del>
<del> public function remove()
<del> {
<del> }
<del>} | 1 |
Mixed | Javascript | fix some nits in perf_hooks | 0a4c69a5c5e8ba93b7c9f67d08ef6c859eb1f86a | <ide><path>doc/api/perf_hooks.md
<ide> Returns a list of `PerformanceEntry` objects in chronological order
<ide> with respect to `performanceEntry.startTime` whose `performanceEntry.entryType`
<ide> is equal to `type`.
<ide>
<del>## monitorEventLoopDelay([options])
<add>## perf_hooks.monitorEventLoopDelay([options])
<ide> <!-- YAML
<ide> added: REPLACEME
<ide> -->
<ide>
<ide> * `options` {Object}
<ide> * `resolution` {number} The sampling rate in milliseconds. Must be greater
<del> than zero. Defaults to `10`.
<add> than zero. **Default:** `10`.
<ide> * Returns: {Histogram}
<ide>
<ide> Creates a `Histogram` object that samples and reports the event loop delay
<ide> detect.
<ide> const { monitorEventLoopDelay } = require('perf_hooks');
<ide> const h = monitorEventLoopDelay({ resolution: 20 });
<ide> h.enable();
<del>// Do something
<add>// Do something.
<ide> h.disable();
<ide> console.log(h.min);
<ide> console.log(h.max);
<ide> Enables the event loop delay sample timer. Returns `true` if the timer was
<ide> started, `false` if it was already started.
<ide>
<ide> #### histogram.exceeds
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<del>* Value: {number}
<add>* {number}
<ide>
<ide> The number of times the event loop delay exceeded the maximum 1 hour event
<ide> loop delay threshold.
<ide>
<ide> #### histogram.max
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<del>* Value: {number}
<add>* {number}
<ide>
<ide> The maximum recorded event loop delay.
<ide>
<ide> #### histogram.mean
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<del>* Value: {number}
<add>* {number}
<ide>
<ide> The mean of the recorded event loop delays.
<ide>
<ide> The mean of the recorded event loop delays.
<ide> added: REPLACEME
<ide> -->
<ide>
<del>* Value: {number}
<add>* {number}
<ide>
<ide> The minimum recorded event loop delay.
<ide>
<ide> #### histogram.percentile(percentile)
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<ide> * `percentile` {number} A percentile value between 1 and 100.
<add>* Returns: {number}
<ide>
<ide> Returns the value at the given percentile.
<ide>
<ide> #### histogram.percentiles
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<del>* Value: {Map}
<add>* {Map}
<ide>
<ide> Returns a `Map` object detailing the accumulated percentile distribution.
<ide>
<ide> added: REPLACEME
<ide> Resets the collected histogram data.
<ide>
<ide> #### histogram.stddev
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<ide>
<del>* Value: {number}
<add>* {number}
<ide>
<ide> The standard deviation of the recorded event loop delays.
<ide>
<ide><path>tools/doc/type-parser.js
<ide> const customTypesMap = {
<ide> 'fs.Stats': 'fs.html#fs_class_fs_stats',
<ide> 'fs.WriteStream': 'fs.html#fs_class_fs_writestream',
<ide>
<del> 'Histogram': 'perf_hooks.html#perf_hooks_class_histogram',
<del>
<ide> 'http.Agent': 'http.html#http_class_http_agent',
<ide> 'http.ClientRequest': 'http.html#http_class_http_clientrequest',
<ide> 'http.IncomingMessage': 'http.html#http_class_http_incomingmessage',
<ide> const customTypesMap = {
<ide>
<ide> 'os.constants.dlopen': 'os.html#os_dlopen_constants',
<ide>
<add> 'Histogram': 'perf_hooks.html#perf_hooks_class_histogram',
<ide> 'PerformanceEntry': 'perf_hooks.html#perf_hooks_class_performanceentry',
<ide> 'PerformanceNodeTiming':
<ide> 'perf_hooks.html#perf_hooks_class_performancenodetiming_extends_performanceentry', // eslint-disable-line max-len | 2 |
Text | Text | add extra step for reporter pre-approval | deb9f5eff0958940af28d67228c0382b19cdb1dc | <ide><path>doc/contributing/security-release-process.md
<ide> The current security stewards are documented in the main Node.js
<ide> the date in the slug so that it will move to the top of the blog list.)
<ide> * (Consider using a [Vulnerability Score System](https://www.first.org/cvss/calculator/3.1)
<ide> to identify severity of each report)
<add> * Share the patch with the reporter when applicable.
<add> It will increase the fix accuracy.
<ide> * [ ] pre-release: _**LINK TO PR**_
<ide> * [ ] post-release: _**LINK TO PR**_
<ide> * List vulnerabilities in order of descending severity
<ide> The current security stewards are documented in the main Node.js
<ide> * [ ] Check that all vulnerabilities are ready for release integration:
<ide> * PRs against all affected release lines or cherry-pick clean
<ide> * Approved
<add> * (optional) Approved by the reporter
<add> * Build and send the binary to the reporter according to its architecture
<add> and ask for a review. This step is important to avoid insufficient fixes
<add> between Security Releases.
<ide> * Pass `make test`
<ide> * Have CVEs
<ide> * Make sure that dependent libraries have CVEs for their issues. We should | 1 |
PHP | PHP | use an early return to remove a layer of nesting | bc8ab57e0dad6f384d5fa918a233c2dae9c27fbd | <ide><path>src/View/Helper/SessionHelper.php
<ide> public function error() {
<ide> * @link http://book.cakephp.org/2.0/en/core-libraries/helpers/session.html#SessionHelper::flash
<ide> */
<ide> public function flash($key = 'flash', $attrs = array()) {
<del> $out = false;
<ide>
<del> if (Session::check('Message.' . $key)) {
<del> $flash = Session::read('Message.' . $key);
<del> $message = $flash['message'];
<del> unset($flash['message']);
<add> if (!Session::check('Message.' . $key)) {
<add> return '';
<add> }
<ide>
<del> if (!empty($attrs)) {
<del> $flash = array_merge($flash, $attrs);
<del> }
<add> $flash = Session::read('Message.' . $key);
<add> $message = $flash['message'];
<add> unset($flash['message']);
<ide>
<del> if ($flash['element'] === 'default') {
<del> $class = 'message';
<del> if (!empty($flash['params']['class'])) {
<del> $class = $flash['params']['class'];
<del> }
<del> $out = $this->formatTemplate('flash', [
<del> 'class' => $class,
<del> 'key' => $key,
<del> 'message' => $message
<del> ]);
<del> } elseif (!$flash['element']) {
<del> $out = $message;
<del> } else {
<del> $options = array();
<del> if (isset($flash['params']['plugin'])) {
<del> $options['plugin'] = $flash['params']['plugin'];
<del> }
<del> $tmpVars = $flash['params'];
<del> $tmpVars['message'] = $message;
<del> $out = $this->_View->element($flash['element'], $tmpVars, $options);
<add> if (!empty($attrs)) {
<add> $flash = array_merge($flash, $attrs);
<add> }
<add>
<add> if ($flash['element'] === 'default') {
<add> $class = 'message';
<add> if (!empty($flash['params']['class'])) {
<add> $class = $flash['params']['class'];
<add> }
<add> $out = $this->formatTemplate('flash', [
<add> 'class' => $class,
<add> 'key' => $key,
<add> 'message' => $message
<add> ]);
<add> } elseif (!$flash['element']) {
<add> $out = $message;
<add> } else {
<add> $options = array();
<add> if (isset($flash['params']['plugin'])) {
<add> $options['plugin'] = $flash['params']['plugin'];
<ide> }
<del> Session::delete('Message.' . $key);
<add> $tmpVars = $flash['params'];
<add> $tmpVars['message'] = $message;
<add> $out = $this->_View->element($flash['element'], $tmpVars, $options);
<ide> }
<add> Session::delete('Message.' . $key);
<ide> return $out;
<ide> }
<ide> | 1 |
Ruby | Ruby | restore formula unless read only run | 39f000c6ab7cffe661549aec1d3651502b339957 | <ide><path>Library/Homebrew/dev-cmd/bump-formula-pr.rb
<ide> def bump_formula_pr
<ide> ]
<ide> end
<ide>
<del> old_contents = File.read(formula.path) unless args.dry_run?
<add> read_only_run = args.dry_run? && !args.write?
<add> old_contents = File.read(formula.path) unless read_only_run
<ide>
<ide> if new_mirrors
<ide> replacement_pairs << [
<ide> def bump_formula_pr
<ide> end
<ide>
<ide> if new_formula_version < old_formula_version
<del> formula.path.atomic_write(old_contents) unless args.dry_run?
<add> formula.path.atomic_write(old_contents) unless read_only_run
<ide> odie <<~EOS
<ide> You need to bump this formula manually since changing the
<ide> version from #{old_formula_version} to #{new_formula_version} would be a downgrade.
<ide> EOS
<ide> elsif new_formula_version == old_formula_version
<del> formula.path.atomic_write(old_contents) unless args.dry_run?
<add> formula.path.atomic_write(old_contents) unless read_only_run
<ide> odie <<~EOS
<ide> You need to bump this formula manually since the new version
<ide> and old version are both #{new_formula_version}.
<ide> def bump_formula_pr
<ide> end
<ide>
<ide> ohai "brew update-python-resources #{formula.name}"
<del> if !args.dry_run? || (args.dry_run? && args.write?)
<add> unless read_only_run
<ide> PyPI.update_python_resources! formula, new_formula_version, silent: true, ignore_non_pypi_packages: true
<ide> end
<ide>
<ide> def forked_repo_info(formula, tap_full_name, old_contents)
<ide> end
<ide>
<ide> def inreplace_pairs(path, replacement_pairs, args:)
<del> if args.dry_run?
<add> read_only_run = args.dry_run? && !args.write?
<add> if read_only_run
<ide> str = path.open("r") { |f| Formulary.ensure_utf8_encoding(f).read }
<ide> contents = StringInreplaceExtension.new(str)
<ide> replacement_pairs.each do |old, new| | 1 |
Python | Python | set filename to none for memmaps of unnamed files | 5f1e731fa5a2d3bca239662c27288b0b8b3588e7 | <ide><path>numpy/core/memmap.py
<ide> def __new__(subtype, filename, dtype=uint8, mode='r+', offset=0,
<ide>
<ide> if isinstance(filename, basestring):
<ide> self.filename = os.path.abspath(filename)
<del> elif hasattr(filename, "name"):
<add> # py3 returns int for TemporaryFile().name
<add> elif (hasattr(filename, "name") and
<add> isinstance(filename.name, basestring)):
<ide> self.filename = os.path.abspath(filename.name)
<add> # same as memmap copies (e.g. memmap + 1)
<add> else:
<add> self.filename = None
<ide>
<ide> if own_file:
<ide> fid.close()
<ide><path>numpy/core/tests/test_memmap.py
<ide> from __future__ import division, absolute_import, print_function
<ide>
<ide> import sys
<del>from tempfile import NamedTemporaryFile, mktemp
<add>from tempfile import NamedTemporaryFile, TemporaryFile, mktemp
<ide> import os
<ide>
<ide> from numpy import memmap
<ide> def test_open_with_filename(self):
<ide> del fp
<ide> os.unlink(tmpname)
<ide>
<add> def test_unnamed_file(self):
<add> with TemporaryFile() as f:
<add> fp = memmap(f, dtype=self.dtype, shape=self.shape)
<add> del fp
<add>
<ide> def test_attributes(self):
<ide> offset = 1
<ide> mode = "w+" | 2 |
Javascript | Javascript | simplify logic of testing buffer equality | 7dd5e824be53e222af63cc86e89b7ab7f841901c | <ide><path>lib/assert.js
<ide> 'use strict';
<ide>
<ide> // UTILITY
<add>const compare = process.binding('buffer').compare;
<ide> const util = require('util');
<ide> const pSlice = Array.prototype.slice;
<ide>
<ide> function _deepEqual(actual, expected, strict) {
<ide> if (actual === expected) {
<ide> return true;
<ide> } else if (actual instanceof Buffer && expected instanceof Buffer) {
<del> if (actual.length != expected.length) return false;
<del>
<del> for (var i = 0; i < actual.length; i++) {
<del> if (actual[i] !== expected[i]) return false;
<del> }
<del>
<del> return true;
<add> return compare(actual, expected) === 0;
<ide>
<ide> // 7.2. If the expected value is a Date object, the actual value is
<ide> // equivalent if it is also a Date object that refers to the same time. | 1 |
Ruby | Ruby | fix typo on instance variable get call | 423f3b8dc1b659be43cf1990d8c6b405e2904c2f | <ide><path>activerecord/lib/active_record/persistence.rb
<ide> def reload(options = nil)
<ide> @attributes.update(fresh_object.instance_variable_get('@attributes'))
<ide>
<ide> @column_types = self.class.column_types
<del> @column_types_override = fresh_object.instance_variable_get('@columns_types_override')
<add> @column_types_override = fresh_object.instance_variable_get('@column_types_override')
<ide> @attributes_cache = {}
<ide> self
<ide> end | 1 |
Text | Text | improve russian translation | c87eb4026b94060f1a426adae0f9b451f7ce498e | <ide><path>curriculum/challenges/russian/05-apis-and-microservices/basic-node-and-express/serve-static-assets.russian.md
<ide> forumTopicId: 301518
<ide> localeTitle: Служить статическим активам
<ide> ---
<ide>
<del>## Description
<add>## Описание
<ide> <section id='description'>
<del>HTML-сервер обычно имеет один или несколько каталогов, которые доступны пользователю. Вы можете разместить там статические ресурсы, необходимые для вашего приложения (таблицы стилей, скрипты, изображения). В Express вы можете реализовать эту функцию, используя промежуточное программное обеспечение <code>express.static(path)</code> , где параметр - это абсолютный путь к папке, содержащей ресурсы. Если вы не знаете, что такое промежуточное программное обеспечение, не беспокойтесь. Об этом мы поговорим позже. В основном промежуточные программы - это функции, которые перехватывают обработчики маршрутов, добавляя некоторую информацию. <code>app.use(path, middlewareFunction)</code> программное обеспечение должно быть смонтировано с использованием метода <code>app.use(path, middlewareFunction)</code> . Первый аргумент пути не является обязательным. Если вы не передадите его, промежуточное программное обеспечение будет выполнено для всех запросов.
<del>Установите <code>express.static()</code> промежуточный слой для всех запросов с <code>app.use()</code> . Абсолютный путь к папке ресурсов - <code>__dirname + /public</code> .
<del>Теперь ваше приложение должно обслуживать таблицу стилей CSS. Снаружи общедоступная папка будет выглядеть подключенной к корневому каталогу. Ваша первая страница должна выглядеть немного лучше!
<add>HTML-сервер обычно имеет один или несколько каталогов, которые доступны пользователю. Вы можете разместить там статические ресурсы, необходимые для вашего приложения (таблицы стилей, скрипты, изображения). В Express вы можете реализовать эту функцию, используя middkeware <code>express.static(path)</code>, где параметр - это абсолютный путь к папке, содержащей ресурсы. Если вы не знаете, что такое middleware, не беспокойтесь. Об этом мы поговорим позже. В основном middleware - это функции, которые перехватывают обработчики маршрутов, добавляя некоторую информацию. <code>app.use(path, middlewareFunction)</code> middleware должно быть задействовано с использованием метода <code>app.use(path, middlewareFunction)</code> . Первый аргумент пути не является обязательным. Если вы не передадите его, middleware будет выполнено для всех запросов.
<add>
<ide> </section>
<ide>
<del>## Instructions
<add>## Задание
<ide> <section id='instructions'>
<del>Mount the <code>express.static()</code> middleware for all requests with <code>app.use()</code>. The absolute path to the assets folder is <code>__dirname + /public</code>.
<del>Now your app should be able to serve a CSS stylesheet. From outside, the public folder will appear mounted to the root directory. Your front-page should look a little better now!
<add>Установите <code>express.static()</code> middleware для всех запросов с <code>app.use()</code> . Абсолютный путь к папке ресурсов - <code>__dirname + /public</code> .
<add>Теперь ваше приложение должно предоставлять таблицу стилей CSS. Снаружи общедоступная папка будет выглядеть подключенной к корневому каталогу. Ваша первая страница должна выглядеть немного лучше!
<ide> </section>
<ide>
<ide> ## Tests | 1 |
Python | Python | fix the __getattr__ method in batchencoding | 9f5d5a531d769d07403f59661884e254f8420afe | <ide><path>src/transformers/tokenization_utils.py
<ide> def __getitem__(self, item: Union[int, str]) -> EncodingFast:
<ide> )
<ide>
<ide> def __getattr__(self, item: str):
<del> return self.data[item]
<add> try:
<add> return self.data[item]
<add> except KeyError:
<add> raise AttributeError
<ide>
<ide> def keys(self):
<ide> return self.data.keys() | 1 |
Ruby | Ruby | require deprecate_disable module in formulary | f387c7f0b24d83bebe0fc38cc0d3f74af1fd8e85 | <ide><path>Library/Homebrew/formulary.rb
<ide> def self.convert_to_string_or_symbol(string)
<ide> end
<ide>
<ide> def self.convert_to_deprecate_disable_reason_string_or_symbol(string)
<add> require "deprecate_disable"
<ide> return string unless DeprecateDisable::DEPRECATE_DISABLE_REASONS.keys.map(&:to_s).include?(string)
<ide>
<ide> string.to_sym | 1 |
Python | Python | add tests for cexp | b8c51d63eebec920164e54b7ea3b4a6d3cd1139b | <ide><path>numpy/core/tests/test_umath_complex.py
<ide> def assert_almost_equal_spec(x, y):
<ide> else:
<ide> assert_almost_equal(x, y)
<ide>
<add>class TestCexp(object):
<add> def test_simple(self):
<add> check = check_complex_value
<add> f = np.exp
<add>
<add> yield check, f, 1, 0, np.exp(1), 0, False
<add> yield check, f, 0, 1, np.cos(1), np.sin(1), False
<add>
<add> ref = np.exp(1) * np.complex(np.cos(1), np.sin(1))
<add> yield check, f, 1, 1, ref.real, ref.imag, False
<add>
<add> def test_special_values(self):
<add> # C99: Section G 6.3.1
<add>
<add> check = check_complex_value
<add> f = np.exp
<add>
<add> # cexp(+-0 + 0i) is 1 + 0i
<add> yield check, f, np.PZERO, 0, 1, 0, False
<add> yield check, f, np.NZERO, 0, 1, 0, False
<add>
<add> # cexp(x + infi) is nan + nani for finite x and raises 'invalid' FPU
<add> # exception
<add> yield check, f, 1, np.inf, np.nan, np.nan
<add> yield check, f, -1, np.inf, np.nan, np.nan
<add> yield check, f, 0, np.inf, np.nan, np.nan
<add>
<add> # cexp(inf + 0i) is inf + 0i
<add> yield check, f, np.inf, 0, np.inf, 0
<add>
<add> # cexp(-inf + yi) is +0 * (cos(y) + i sin(y)) for finite y
<add> ref = np.complex(np.cos(1.), np.sin(1.))
<add> yield check, f, -np.inf, 1, np.PZERO, np.PZERO
<add>
<add> ref = np.complex(np.cos(np.pi * 0.75), np.sin(np.pi * 0.75))
<add> yield check, f, -np.inf, 0.75 * np.pi, np.NZERO, np.PZERO
<add>
<add> # cexp(inf + yi) is +inf * (cos(y) + i sin(y)) for finite y
<add> ref = np.complex(np.cos(1.), np.sin(1.))
<add> yield check, f, np.inf, 1, np.inf, np.inf
<add>
<add> ref = np.complex(np.cos(np.pi * 0.75), np.sin(np.pi * 0.75))
<add> yield check, f, np.inf, 0.75 * np.pi, -np.inf, np.inf
<add>
<add> # cexp(-inf + inf i) is +-0 +- 0i (signs unspecified)
<add> def _check_ninf_inf(dummy):
<add> z = f(np.array(np.complex(-np.inf, np.inf)))
<add> if z.real != 0 or z.imag != 0:
<add> raise AssertionError(
<add> "cexp(-inf, inf) is (%f, %f), expected (+-0, +-0)" \
<add> % (z.real, z.imag))
<add> yield _check_ninf_inf, None
<add>
<add> # cexp(inf + inf i) is +-inf + NaNi and raised invalid FPU ex.
<add> def _check_inf_inf(dummy):
<add> z = f(np.array(np.complex(np.inf, np.inf)))
<add> if not np.isinf(z.real) or not np.isnan(z.imag):
<add> raise AssertionError(
<add> "cexp(inf, inf) is (%f, %f), expected (+-inf, nan)" \
<add> % (z.real, z.imag))
<add> yield _check_inf_inf, None
<add>
<add> # cexp(-inf + nan i) is +-0 +- 0i
<add> def _check_ninf_nan(dummy):
<add> z = f(np.array(np.complex(-np.inf, np.nan)))
<add> if z.real != 0 or z.imag != 0:
<add> raise AssertionError(
<add> "cexp(-inf, nan) is (%f, %f), expected (+-0, +-0)" \
<add> % (z.real, z.imag))
<add> yield _check_ninf_nan, None
<add>
<add> # cexp(inf + nan i) is +-inf + nan
<add> def _check_inf_nan(dummy):
<add> z = f(np.array(np.complex(np.inf, np.nan)))
<add> if not np.isinf(z.real) or not np.isnan(z.imag):
<add> raise AssertionError(
<add> "cexp(-inf, nan) is (%f, %f), expected (+-inf, nan)" \
<add> % (z.real, z.imag))
<add> yield _check_inf_nan, None
<add>
<add> # cexp(nan + 0i) is nan + 0i
<add> yield check, f, np.nan, 0, np.nan, 0
<add>
<add> # cexp(nan + yi) is nan + nani for y != 0 (optional: raises invalid FPU
<add> # ex)
<add> yield check, f, np.nan, 1, np.nan, np.nan
<add> yield check, f, np.nan, -1, np.nan, np.nan
<add>
<add> yield check, f, np.nan, np.inf, np.nan, np.nan
<add> yield check, f, np.nan, -np.inf, np.nan, np.nan
<add>
<add> # cexp(nan + nani) is nan + nani
<add> yield check, f, np.nan, np.nan, np.nan, np.nan
<add>
<ide> class TestClog(TestCase):
<ide> def test_simple(self):
<ide> x = np.array([1+0j, 1+2j]) | 1 |
Javascript | Javascript | add missing "else" keyword | a57ec27fc6ba95f69b9ca75ea62b8e87e1b3a0ed | <ide><path>src/renderers/webgl/WebGLRenderLists.js
<ide> function reversePainterSortStable( a, b ) {
<ide>
<ide> return a.renderOrder - b.renderOrder;
<ide>
<del> } if ( a.z !== b.z ) {
<add> } else if ( a.z !== b.z ) {
<ide>
<ide> return b.z - a.z;
<ide> | 1 |
Ruby | Ruby | add exception to example (closes ) | 9b2ea172d72563c677e40156ba53aaaef6ed2294 | <ide><path>activerecord/lib/active_record/migration.rb
<ide> class IrreversibleMigration < ActiveRecordError#:nodoc:
<ide> #
<ide> # def self.down
<ide> # # not much we can do to restore deleted data
<add> # raise IrreversibleMigration
<ide> # end
<ide> # end
<ide> # | 1 |
PHP | PHP | fix a docblock | 802f42fe8d3b6a106251ce53a3d41845d980a253 | <ide><path>src/Illuminate/Auth/RequestGuard.php
<ide> class RequestGuard implements Guard
<ide> * Create a new authentication guard.
<ide> *
<ide> * @param callable $callback
<del> * @param \Symfony\Component\HttpFoundation\Request $request
<add> * @param \Illuminate\Http\Request $request
<ide> * @return void
<ide> */
<del> public function __construct(callable $callback,
<del> Request $request)
<add> public function __construct(callable $callback, Request $request)
<ide> {
<ide> $this->request = $request;
<ide> $this->callback = $callback; | 1 |
Python | Python | fix identation problems in docs | 631070e57b4d6b9eb543a28cc3662affde8f4f48 | <ide><path>libcloud/compute/drivers/gce.py
<ide> def ex_create_multiple_nodes(
<ide> :type description: ``str`` or ``None``
<ide>
<ide> :keyword ex_can_ip_forward: Set to ``True`` to allow this node to
<del> send/receive non-matching src/dst packets.
<add> send/receive non-matching src/dst packets.
<ide> :type ex_can_ip_forward: ``bool`` or ``None``
<ide>
<ide> :keyword ex_preemptible: Defines whether the instance is preemptible.
<del> (If not supplied, the instance will
<del> not be preemptible)
<add> (If not supplied, the instance will
<add> not be preemptible)
<ide> :type ex_preemptible: ``bool`` or ``None``
<ide>
<ide> :keyword ex_disks_gce_struct: Support for passing in the GCE-specific | 1 |
Java | Java | do simple clean-up | 60b72d721dc2f02279ff360d4ea1de98ae62d986 | <ide><path>spring-beans/src/main/java/org/springframework/beans/factory/config/YamlProcessor.java
<ide> private boolean process(MatchCallback callback, Yaml yaml, Resource resource) {
<ide> if (logger.isDebugEnabled()) {
<ide> logger.debug("Loading from YAML: " + resource);
<ide> }
<del> Reader reader = new UnicodeReader(resource.getInputStream());
<del> try {
<add> try (Reader reader = new UnicodeReader(resource.getInputStream())) {
<ide> for (Object object : yaml.loadAll(reader)) {
<ide> if (object != null && process(asMap(object), callback)) {
<ide> count++;
<ide> private boolean process(MatchCallback callback, Yaml yaml, Resource resource) {
<ide> " from YAML resource: " + resource);
<ide> }
<ide> }
<del> finally {
<del> reader.close();
<del> }
<ide> }
<ide> catch (IOException ex) {
<ide> handleProcessError(resource, ex);
<ide><path>spring-beans/src/main/java/org/springframework/beans/factory/support/AbstractBeanDefinition.java
<ide> public void addQualifier(AutowireCandidateQualifier qualifier) {
<ide> * Return whether this bean has the specified qualifier.
<ide> */
<ide> public boolean hasQualifier(String typeName) {
<del> return this.qualifiers.keySet().contains(typeName);
<add> return this.qualifiers.containsKey(typeName);
<ide> }
<ide>
<ide> /**
<ide><path>spring-messaging/src/main/java/org/springframework/messaging/simp/stomp/StompDecoder.java
<ide> private String unescape(String inString) {
<ide> if (index + 1 >= inString.length()) {
<ide> throw new StompConversionException("Illegal escape sequence at index " + index + ": " + inString);
<ide> }
<del> Character c = inString.charAt(index + 1);
<add> char c = inString.charAt(index + 1);
<ide> if (c == 'r') {
<ide> sb.append('\r');
<ide> } | 3 |
Python | Python | remove obsolete testf77mismatch | 416d3b3069b6a829673a68464b4eab768c1b90d2 | <ide><path>numpy/linalg/tests/test_build.py
<del>from subprocess import PIPE, Popen
<del>import sys
<del>import re
<del>import pytest
<del>
<del>from numpy.linalg import lapack_lite
<del>from numpy.testing import assert_
<del>
<del>
<del>class FindDependenciesLdd:
<del>
<del> def __init__(self):
<del> self.cmd = ['ldd']
<del>
<del> try:
<del> p = Popen(self.cmd, stdout=PIPE, stderr=PIPE)
<del> stdout, stderr = p.communicate()
<del> except OSError as e:
<del> raise RuntimeError(f'command {self.cmd} cannot be run') from e
<del>
<del> def get_dependencies(self, lfile):
<del> p = Popen(self.cmd + [lfile], stdout=PIPE, stderr=PIPE)
<del> stdout, stderr = p.communicate()
<del> if not (p.returncode == 0):
<del> raise RuntimeError(f'failed dependencies check for {lfile}')
<del>
<del> return stdout
<del>
<del> def grep_dependencies(self, lfile, deps):
<del> stdout = self.get_dependencies(lfile)
<del>
<del> rdeps = dict([(dep, re.compile(dep)) for dep in deps])
<del> founds = []
<del> for l in stdout.splitlines():
<del> for k, v in rdeps.items():
<del> if v.search(l):
<del> founds.append(k)
<del>
<del> return founds
<del>
<del>
<del>class TestF77Mismatch:
<del>
<del> @pytest.mark.skipif(not(sys.platform[:5] == 'linux'),
<del> reason="no fortran compiler on non-Linux platform")
<del> def test_lapack(self):
<del> f = FindDependenciesLdd()
<del> deps = f.grep_dependencies(lapack_lite.__file__,
<del> [b'libg2c', b'libgfortran'])
<del> assert_(len(deps) <= 1,
<del> """Both g77 and gfortran runtimes linked in lapack_lite ! This is likely to
<del>cause random crashes and wrong results. See numpy INSTALL.txt for more
<del>information.""") | 1 |
Ruby | Ruby | remove special 1.9 version of excerpt helper | b8eec5ac33d6f421fe5a2c757794ed2e4965f81d | <ide><path>actionpack/lib/action_view/helpers/text_helper.rb
<ide> def highlight(text, phrases, *args)
<ide> end
<ide> end
<ide>
<del> # Extracts an excerpt from +text+ that matches the first instance of +phrase+.
<del> # The <tt>:radius</tt> option expands the excerpt on each side of the first occurrence of +phrase+ by the number of characters
<del> # defined in <tt>:radius</tt> (which defaults to 100). If the excerpt radius overflows the beginning or end of the +text+,
<del> # then the <tt>:omission</tt> option (which defaults to "...") will be prepended/appended accordingly. The resulting string
<del> # will be stripped in any case. If the +phrase+ isn't found, nil is returned.
<del> #
<del> # ==== Examples
<del> # excerpt('This is an example', 'an', :radius => 5)
<del> # # => ...s is an exam...
<del> #
<del> # excerpt('This is an example', 'is', :radius => 5)
<del> # # => This is a...
<del> #
<del> # excerpt('This is an example', 'is')
<del> # # => This is an example
<del> #
<del> # excerpt('This next thing is an example', 'ex', :radius => 2)
<del> # # => ...next...
<del> #
<del> # excerpt('This is also an example', 'an', :radius => 8, :omission => '<chop> ')
<del> # # => <chop> is also an example
<del> #
<del> # You can still use <tt>excerpt</tt> with the old API that accepts the
<del> # +radius+ as its optional third and the +ellipsis+ as its
<del> # optional forth parameter:
<del> # excerpt('This is an example', 'an', 5) # => ...s is an exam...
<del> # excerpt('This is also an example', 'an', 8, '<chop> ') # => <chop> is also an example
<del> def excerpt(text, phrase, *args)
<del> options = args.extract_options!
<del> unless args.empty?
<del> options[:radius] = args[0] || 100
<del> options[:omission] = args[1] || "..."
<del> end
<del> options.reverse_merge!(:radius => 100, :omission => "...")
<add> # Extracts an excerpt from +text+ that matches the first instance of +phrase+.
<add> # The <tt>:radius</tt> option expands the excerpt on each side of the first occurrence of +phrase+ by the number of characters
<add> # defined in <tt>:radius</tt> (which defaults to 100). If the excerpt radius overflows the beginning or end of the +text+,
<add> # then the <tt>:omission</tt> option (which defaults to "...") will be prepended/appended accordingly. The resulting string
<add> # will be stripped in any case. If the +phrase+ isn't found, nil is returned.
<add> #
<add> # ==== Examples
<add> # excerpt('This is an example', 'an', :radius => 5)
<add> # # => ...s is an exam...
<add> #
<add> # excerpt('This is an example', 'is', :radius => 5)
<add> # # => This is a...
<add> #
<add> # excerpt('This is an example', 'is')
<add> # # => This is an example
<add> #
<add> # excerpt('This next thing is an example', 'ex', :radius => 2)
<add> # # => ...next...
<add> #
<add> # excerpt('This is also an example', 'an', :radius => 8, :omission => '<chop> ')
<add> # # => <chop> is also an example
<add> #
<add> # You can still use <tt>excerpt</tt> with the old API that accepts the
<add> # +radius+ as its optional third and the +ellipsis+ as its
<add> # optional forth parameter:
<add> # excerpt('This is an example', 'an', 5) # => ...s is an exam...
<add> # excerpt('This is also an example', 'an', 8, '<chop> ') # => <chop> is also an example
<add> def excerpt(text, phrase, *args)
<add> options = args.extract_options!
<add> unless args.empty?
<add> options[:radius] = args[0] || 100
<add> options[:omission] = args[1] || "..."
<add> end
<add> options.reverse_merge!(:radius => 100, :omission => "...")
<ide>
<del> if text && phrase
<del> phrase = Regexp.escape(phrase)
<add> if text && phrase
<add> phrase = Regexp.escape(phrase)
<ide>
<del> if found_pos = text.mb_chars =~ /(#{phrase})/i
<del> start_pos = [ found_pos - options[:radius], 0 ].max
<del> end_pos = [ [ found_pos + phrase.mb_chars.length + options[:radius] - 1, 0].max, text.mb_chars.length ].min
<add> if found_pos = text.mb_chars =~ /(#{phrase})/i
<add> start_pos = [ found_pos - options[:radius], 0 ].max
<add> end_pos = [ [ found_pos + phrase.mb_chars.length + options[:radius] - 1, 0].max, text.mb_chars.length ].min
<ide>
<del> prefix = start_pos > 0 ? options[:omission] : ""
<del> postfix = end_pos < text.mb_chars.length - 1 ? options[:omission] : ""
<add> prefix = start_pos > 0 ? options[:omission] : ""
<add> postfix = end_pos < text.mb_chars.length - 1 ? options[:omission] : ""
<ide>
<del> prefix + text.mb_chars[start_pos..end_pos].strip + postfix
<del> else
<del> nil
<del> end
<add> prefix + text.mb_chars[start_pos..end_pos].strip + postfix
<add> else
<add> nil
<ide> end
<ide> end
<add> end
<ide>
<ide> # Attempts to pluralize the +singular+ word unless +count+ is 1. If
<ide> # +plural+ is supplied, it will use that when count is > 1, otherwise | 1 |
PHP | PHP | fix uniquestrict for keyless collections | ca3146d72fab9f06a24e197f556654780c93dead | <ide><path>src/Illuminate/Support/Collection.php
<ide> public function transform(callable $callback)
<ide> */
<ide> public function unique($key = null, $strict = false)
<ide> {
<del> if (is_null($key)) {
<del> return new static(array_unique($this->items, SORT_REGULAR));
<del> }
<del>
<ide> $callback = $this->valueRetriever($key);
<ide>
<ide> $exists = []; | 1 |
Python | Python | remove unused variables in src | 71f94a8a1c89577ec4482b3e5600fbcdfb3dd1a8 | <ide><path>src/transformers/data/metrics/__init__.py
<ide> from sklearn.metrics import matthews_corrcoef, f1_score
<ide>
<ide> _has_sklearn = True
<del>except (AttributeError, ImportError) as e:
<add>except (AttributeError, ImportError):
<ide> _has_sklearn = False
<ide>
<ide>
<ide><path>src/transformers/modeling_albert.py
<ide> def forward(self, input_ids, attention_mask=None, head_mask=None):
<ide> context_layer = torch.matmul(attention_probs, value_layer)
<ide>
<ide> context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
<del> new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
<del> reshaped_context_layer = context_layer.view(*new_context_layer_shape)
<ide>
<ide> # Should find a better way to do this
<ide> w = (
<ide> def forward(self, hidden_states, attention_mask=None, head_mask=None):
<ide> # Index of the hidden group
<ide> group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups))
<ide>
<del> # Index of the layer inside the group
<del> layer_idx = int(i - group_idx * layers_per_group)
<del>
<ide> layer_group_output = self.albert_layer_groups[group_idx](
<ide> hidden_states,
<ide> attention_mask,
<ide><path>src/transformers/modeling_t5.py
<ide> def forward(
<ide> all_attentions = all_attentions + (layer_outputs[1],) # We keep only self-attention weights for now
<ide>
<ide> hidden_states = self.final_layer_norm(hidden_states)
<del> layer_output = self.dropout(hidden_states)
<ide>
<ide> # Add last layer
<ide> if self.output_hidden_states:
<ide><path>src/transformers/modeling_tf_pytorch_utils.py
<ide> def load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=None, a
<ide> tf_inputs = tf_model.dummy_inputs
<ide>
<ide> if tf_inputs is not None:
<del> tfo = tf_model(tf_inputs, training=False) # Make sure model is built
<add> tf_model(tf_inputs, training=False) # Make sure model is built
<ide>
<ide> # Adapt state dict - TODO remove this and update the AWS weights files instead
<ide> # Convert old format to new format if needed from a PyTorch state_dict
<ide> def load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=None, a
<ide> K.batch_set_value(weight_value_tuples)
<ide>
<ide> if tf_inputs is not None:
<del> tfo = tf_model(tf_inputs, training=False) # Make sure restore ops are run
<add> tf_model(tf_inputs, training=False) # Make sure restore ops are run
<ide>
<ide> logger.info("Loaded {:,} parameters in the TF 2.0 model.".format(tf_loaded_numel))
<ide>
<ide> def load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs
<ide>
<ide> import transformers
<ide>
<del> tf_path = os.path.abspath(tf_checkpoint_path)
<ide> logger.info("Loading TensorFlow weights from {}".format(tf_checkpoint_path))
<ide>
<ide> # Instantiate and load the associated TF 2.0 model
<ide> def load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs
<ide> tf_inputs = tf_model.dummy_inputs
<ide>
<ide> if tf_inputs is not None:
<del> tfo = tf_model(tf_inputs, training=False) # Make sure model is built
<add> tf_model(tf_inputs, training=False) # Make sure model is built
<ide>
<ide> tf_model.load_weights(tf_checkpoint_path, by_name=True)
<ide>
<ide><path>src/transformers/modeling_tf_t5.py
<ide> def call(
<ide> all_attentions = all_attentions + (layer_outputs[1],)
<ide>
<ide> hidden_states = self.final_layer_norm(hidden_states)
<del> layer_output = self.dropout(hidden_states, training=training)
<ide>
<ide> # Add last layer
<ide> if self.output_hidden_states:
<ide><path>src/transformers/modeling_tf_transfo_xl_utilities.py
<ide> def call(self, inputs, return_mean=True, training=False):
<ide> hidden, target = inputs
<ide> head_logprob = 0
<ide> if self.n_clusters == 0:
<del> softmax_b = tf.get_variable("bias", [self.config.vocab_size], initializer=tf.zeros_initializer())
<ide> output = self._logit(hidden, self.out_layers[0][0], self.out_layers[0][1], self.out_projs[0])
<ide> if target is not None:
<ide> loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=target, logits=output)
<ide><path>src/transformers/modeling_tf_utils.py
<ide> def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
<ide> # Load from a PyTorch checkpoint
<ide> return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
<ide>
<del> ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs
<add> model(model.dummy_inputs, training=False) # build the network with dummy inputs
<ide>
<ide> assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
<ide> # 'by_name' allow us to do transfer learning by skipping/adding layers
<ide> def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
<ide> "If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. "
<ide> )
<ide>
<del> ret = model(model.dummy_inputs, training=False) # Make sure restore ops are run
<add> model(model.dummy_inputs, training=False) # Make sure restore ops are run
<ide>
<ide> # Check if the models are the same to output loading informations
<ide> with h5py.File(resolved_archive_file, "r") as f:
<ide> def call(self, inputs, training=False):
<ide> cls_index = inputs[1] if len(inputs) > 1 else None
<ide> assert len(inputs) <= 2, "Too many inputs."
<ide> else:
<del> input_ids = inputs.get("input_ids")
<add> hidden_states = inputs.get("hidden_states")
<ide> cls_index = inputs.get("cls_index", None)
<ide>
<ide> if self.summary_type == "last": | 7 |
Python | Python | fix numericaltype hierarchy in code doc | caab62bce506dc8aaaec1df8c85af8785bcef020 | <ide><path>numpy/core/numerictypes.py
<ide> generic
<ide> +-> bool_ (kind=b)
<ide> +-> number
<del> | integer
<del> | signedinteger (intxx) (kind=i)
<del> | byte
<del> | short
<del> | intc
<del> | intp int0
<del> | int_
<del> | longlong
<del> +-> unsignedinteger (uintxx) (kind=u)
<del> | ubyte
<del> | ushort
<del> | uintc
<del> | uintp uint0
<del> | uint_
<del> | ulonglong
<del> +-> inexact
<del> | +-> floating (floatxx) (kind=f)
<del> | | half
<del> | | single
<del> | | float_ (double)
<del> | | longfloat
<del> | \\-> complexfloating (complexxx) (kind=c)
<del> | csingle (singlecomplex)
<del> | complex_ (cfloat, cdouble)
<del> | clongfloat (longcomplex)
<add> | +-> integer
<add> | | +-> signedinteger (intxx) (kind=i)
<add> | | | byte
<add> | | | short
<add> | | | intc
<add> | | | intp int0
<add> | | | int_
<add> | | | longlong
<add> | | \\-> unsignedinteger (uintxx) (kind=u)
<add> | | ubyte
<add> | | ushort
<add> | | uintc
<add> | | uintp uint0
<add> | | uint_
<add> | | ulonglong
<add> | +-> inexact
<add> | +-> floating (floatxx) (kind=f)
<add> | | half
<add> | | single
<add> | | float_ (double)
<add> | | longfloat
<add> | \\-> complexfloating (complexxx) (kind=c)
<add> | csingle (singlecomplex)
<add> | complex_ (cfloat, cdouble)
<add> | clongfloat (longcomplex)
<ide> +-> flexible
<del> | character
<del> | void (kind=V)
<del> |
<del> | str_ (string_, bytes_) (kind=S) [Python 2]
<del> | unicode_ (kind=U) [Python 2]
<del> |
<del> | bytes_ (string_) (kind=S) [Python 3]
<del> | str_ (unicode_) (kind=U) [Python 3]
<del> |
<del> \\-> object_ (not used much) (kind=O)
<add> | +-> character
<add> | | str_ (string_, bytes_) (kind=S) [Python 2]
<add> | | unicode_ (kind=U) [Python 2]
<add> | |
<add> | | bytes_ (string_) (kind=S) [Python 3]
<add> | | str_ (unicode_) (kind=U) [Python 3]
<add> | |
<add> | \\-> void (kind=V)
<add> \\-> object_ (not used much) (kind=O)
<ide>
<ide> """
<ide> from __future__ import division, absolute_import, print_function | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.