content_type: stringclasses (8 values)
main_lang: stringclasses (7 values)
message: stringlengths (1–50)
sha: stringlengths (40–40)
patch: stringlengths (52–962k)
file_count: int64 (1–300)
PHP
PHP
pass composer instance into failedtablecommand
d545979398ea73135254ee238e5e6d689b77bfb6
<ide><path>src/Illuminate/Queue/ConsoleServiceProvider.php <ide> public function register() <ide> <ide> $this->app->singleton('command.queue.failed-table', function($app) <ide> { <del> return new FailedTableCommand($app['files']); <add> return new FailedTableCommand($app['files'], $app['composer']); <ide> }); <ide> <ide> $this->commands(
1
Text
Text
add 1.10.0 to changelog
10397828ed488475afeb0d38f0d33a68760c70fa
<ide><path>CHANGELOG.md <ide> - [#10350](https://github.com/emberjs/ember.js/pull/10350) Make meta.cache & meta.cacheMeta lazy [@ebryn](https://github.com/ebryn) <ide> - [#10353](https://github.com/emberjs/ember.js/pull/10353) Avoid creating context bindings for collection views [@mmun](https://github.com/mmun) <ide> <add>### 1.10.0 (February 6, 2015) <add> <add>* [BUGFIX] Ensure that property case is normalized. <add>* [BUGFIX] Prevent an error from being thrown if the errorThrown property is a string when catching unhandled promise rejections. <add>* [BUGFIX] `contenteditable` elements should fire focus events in `ember-testing` click helper. <add>* [BUGFIX] Remove HTMLBars from the `ember.debug.js` and `ember.prod.js` builds. Please see http://emberjs.com/blog/2015/02/05/compiling-templates-in-1-10-0.html for more details. <add>* [BUGFIX] Ensure that calling the `wait` testing helper without routing works properly. <add>* [BUGFIX] Ensure that plus signs in query params are treated as spaces. <add>* [BUGFIX] Fix broken `Ember.Test.unregisterWaiter` semantics. <add>* [BUGFIX] Allow unbound helpers to add attributes. <add>* [BUGFIX] Ensure compat helpers calling `options.fn` work. <add>* [BUGFIX] Fix memory leak in view streams. <add>* [BUGFIX] Don't render default layout for `Ember.TextField`. <add>* Update HTMLBars version to v0.8.5: <add> * Allow numbers to be parsed as HTML in IE. <add> * Add namespace detection. <add> * Include line number in error thrown for unclosed HTML element. <add> * `removeAttribute` fix for IE <11 and SVG. <add> * Disable `cloneNodes` in IE8. <add> * Improve HTML validation and error messages thrown. <add> * Fix a number of template compilation issues in IE8. <add> * Use the correct namespace in `parseHTML` (fixes various issues that occur <add> when changing to and from alternate namespaces). <add> * Ensure values are converted to `String`'s when setting attributes (fixes issues in IE10 & IE11). 
<add> * Change `setProperty` and `morph` to remove an `undefined` attr value. <add>* [BUGFIX] Fix usage of `emptyView` with `{{#each}}` helper. <add>* Assert if an attribute is set statically and via bind-attr. For example: <add> `<div class="foo" {{bind-attr class="bar"}}></div>` will now trigger an assertion (instead of <add> silently failing). <add>* [BUGFIX] Fix deprecated bindAttr helper. <add>* [BUGFIX] Do not allow both keyword and block params. <add>* Cleanup HTMLBars public API: <add> * Remove `Ember.HTMLBars.helper`. <add> * Remove internal `registerBoundHelper` function (use <add> `registerHelper('blah', makeViewHelper(SomeView))` or `registerHelper('blah', makeBoundHelper(func))`). <add>* [BUGFIX] Fix Handlebars compat mode `registerHelper` interop with `makeViewHelper`. <add>* [BUGFIX] Ensure that `mergedProperties` are properly merged when all properties are not present. <add>* Add options argument to pass url to `Ember.deprecate`. <add>* Deprecate `{{bind}}` helper. <add>* Pass array to `Ember.computed.filter` callback. <add>* [BUGFIX] Prevent mandatory-setter when setter is already present. <add>* Remove Handlebars from dependencies. <add>* Fix error when parsing templates with invalid end tags. <add>* [BUGFIX] Allow makeBoundHelper to be a sub-expression. <add>* [BUGFIX] Allow compat makeBoundHelpers to be sub-expressions. <add>* [BUGFIX] Export Ember.Handlebars compat shim for `Ember.Handlebars.SafeString` and `Ember.Handlebars.Utils.escapeExpression`. <add>* [BUGFIX] Allow `Ember.inject` injected properties to be overridden (makes testing significantly easier). <add>* [BUGFIX] Don’t assert uncaught RSVP rejections. We are already logging the error, but asserting breaks everything else on the run loop queue. <add>* [BUGFIX] Allow tagName to be a CP (with deprecation). <add>* [BUGFIX] Allow view instances in {{view}}. <add>* [BUGFIX] Ensure bound attrs flush immediately. <add>* [PERFORMANCE] Initialize views in preRender state. 
<add>* [PERFORMANCE] `View#element` should not be observable. <add>* Add ember-template-compiler package. <add>* Rename `Ember.HTMLBars.registerASTPlugin` to `Ember.HTMLBars.registerPlugin`. <add>* Export `ember-template-compiler.js`. <add>* Escape `href`, `src`, and `background` attributes for `a`, `link`, `img`, and `iframe` elements. <add>* Move debugging file output from `ember.js` to `ember.debug.js`. <add>* Remove `templateData` property from views. <add>* Restructure `Ember.libraries` to be more idiomatic. <add>* Prevent creating an extra view for each select option. <add>* Deprecate the block form of the bind helper. <add>* Cleanup `Ember.CoreObject` init argument passing. <add>* Allow all rejection types to be handled by default RSVP error handler. <add>* Deprecate setting ContainerView#childViews. <add>* [FEATURE] ember-htmlbars - Enable the HTMLBars rendering engine. <add>* [FEATURE] ember-htmlbars-block-params - Enable block params feature for HTMLBars. <ide> <ide> ### 1.9.1 (December 23, 2014) <ide>
1
Javascript
Javascript
add support for dumping current react hierarchy
f13322c8204735e8353dbd1a3b66fab5f1d65c5a
<ide><path>Libraries/AppRegistry/AppRegistry.js <ide> var AppRegistry = { <ide> console.log(msg); <ide> BugReporting.init(); <ide> BugReporting.addSource('AppRegistry.runApplication' + runCount++, () => msg); <add> BugReporting.addFileSource('react_hierarchy.txt', () => require('dumpReactTree')()); <ide> invariant( <ide> runnables[appKey] && runnables[appKey].run, <ide> 'Application ' + appKey + ' has not been registered. This ' + <ide><path>Libraries/BugReporting/dumpReactTree.js <add>/** <add> * Copyright (c) 2013-present, Facebook, Inc. <add> * All rights reserved. <add> * <add> * This source code is licensed under the BSD-style license found in the <add> * LICENSE file in the root directory of this source tree. An additional grant <add> * of patent rights can be found in the PATENTS file in the same directory. <add> * <add> * @providesModule dumpReactTree <add> * @flow <add> */ <add>'use strict'; <add> <add>const ReactNativeMount = require('ReactNativeMount'); <add>const getReactData = require('getReactData'); <add> <add>const INDENTATION_SIZE = 2; <add>const MAX_DEPTH = 2; <add>const MAX_STRING_LENGTH = 50; <add> <add>/** <add> * Dump all React Native root views and their content. This function tries <add> * its best to get the content but ultimately relies on implementation details <add> * of React and will fail in future versions. 
<add> */ <add>function dumpReactTree() { <add> try { <add> return getReactTree(); <add> } catch (e) { <add> return 'Failed to dump react tree: ' + e; <add> } <add>} <add> <add>function getReactTree() { <add> let output = ''; <add> const rootIds = Object.getOwnPropertyNames(ReactNativeMount._instancesByContainerID); <add> for (const rootId of rootIds) { <add> const instance = ReactNativeMount._instancesByContainerID[rootId]; <add> output += `============ Root ID: ${rootId} ============\n`; <add> output += dumpNode(instance, 0); <add> output += `============ End root ID: ${rootId} ============\n`; <add> } <add> return output; <add>} <add> <add>function dumpNode(node: Object, indentation: number) { <add> const data = getReactData(node); <add> if (data.nodeType == 'Text') { <add> return indent(indentation) + data.text + '\n'; <add> } else if (data.nodeType == 'Empty') { <add> return ''; <add> } <add> let output = indent(indentation) + `<${data.name}`; <add> if (data.nodeType == 'Composite') { <add> for (const propName of Object.getOwnPropertyNames(data.props || {})) { <add> if (isNormalProp(propName)) { <add> const value = convertValue(data.props[propName]); <add> if (value) { <add> output += ` ${propName}=${value}`; <add> } <add> } <add> } <add> } <add> let childOutput = ''; <add> for (const child of data.children || []) { <add> childOutput += dumpNode(child, indentation + 1); <add> } <add> <add> if (childOutput) { <add> output += '>\n' + childOutput + indent(indentation) + `</${data.name}>\n`; <add> } else { <add> output += ' />\n'; <add> } <add> <add> return output; <add>} <add> <add>function isNormalProp(name: string): boolean { <add> switch (name) { <add> case 'children': <add> case 'key': <add> case 'ref': <add> return false; <add> default: <add> return true; <add> } <add>} <add> <add>function convertObject(object: Object, depth: number) { <add> if (depth >= MAX_DEPTH) { <add> return '[...omitted]'; <add> } <add> let output = '{'; <add> let first = true; <add> for 
(const key of Object.getOwnPropertyNames(object)) { <add> if (!first) { <add> output += ', ' <add> } <add> output += `${key}: ${convertValue(object[key], depth + 1)}`; <add> first = false; <add> } <add> return output + '}'; <add>} <add> <add>function convertValue(value, depth = 0): ?string { <add> if (!value) { <add> return null; <add> } <add> <add> switch (typeof value) { <add> case 'string': <add> return JSON.stringify(possiblyEllipsis(value).replace('\n', '\\n')); <add> case 'boolean': <add> case 'number': <add> return JSON.stringify(value); <add> case 'function': <add> return '[function]'; <add> case 'object': <add> return convertObject(value, depth); <add> default: <add> return null; <add> } <add>} <add> <add>function possiblyEllipsis(value: string) { <add> if (value.length > MAX_STRING_LENGTH) { <add> return value.slice(0, MAX_STRING_LENGTH) + '...'; <add> } else { <add> return value; <add> } <add>} <add> <add>function indent(size: number) { <add> return ' '.repeat(size * INDENTATION_SIZE); <add>} <add> <add>module.exports = dumpReactTree; <ide><path>Libraries/BugReporting/getReactData.js <add>/** <add> * Copyright (c) 2015-present, Facebook, Inc. <add> * All rights reserved. <add> * <add> * This source code is licensed under the BSD-style license found in the <add> * LICENSE file in the root directory of this source tree. An additional grant <add> * of patent rights can be found in the PATENTS file in the same directory. <add> * <add> * @providesModule getReactData <add> * @flow <add> */ <add>'use strict'; <add> <add>/** <add> * Convert a react internal instance to a sanitized data object. 
<add> * <add> * This is shamelessly stolen from react-devtools: <add> * https://github.com/facebook/react-devtools/blob/master/backend/getData.js <add> */ <add>function getData(element: Object): Object { <add> var children = null; <add> var props = null; <add> var state = null; <add> var context = null; <add> var updater = null; <add> var name = null; <add> var type = null; <add> var text = null; <add> var publicInstance = null; <add> var nodeType = 'Native'; <add> // If the parent is a native node without rendered children, but with <add> // multiple string children, then the `element` that gets passed in here is <add> // a plain value -- a string or number. <add> if (typeof element !== 'object') { <add> nodeType = 'Text'; <add> text = element + ''; <add> } else if (element._currentElement === null || element._currentElement === false) { <add> nodeType = 'Empty'; <add> } else if (element._renderedComponent) { <add> nodeType = 'NativeWrapper'; <add> children = [element._renderedComponent]; <add> props = element._instance.props; <add> state = element._instance.state; <add> context = element._instance.context; <add> if (context && Object.keys(context).length === 0) { <add> context = null; <add> } <add> } else if (element._renderedChildren) { <add> children = childrenList(element._renderedChildren); <add> } else if (element._currentElement && element._currentElement.props) { <add> // This is a native node without rendered children -- meaning the children <add> // prop is just a string or (in the case of the <option>) a list of <add> // strings & numbers. 
<add> children = element._currentElement.props.children; <add> } <add> <add> if (!props && element._currentElement && element._currentElement.props) { <add> props = element._currentElement.props; <add> } <add> <add> // != used deliberately here to catch undefined and null <add> if (element._currentElement != null) { <add> type = element._currentElement.type; <add> if (typeof type === 'string') { <add> name = type; <add> } else if (element.getName) { <add> nodeType = 'Composite'; <add> name = element.getName(); <add> // 0.14 top-level wrapper <add> // TODO(jared): The backend should just act as if these don't exist. <add> if (element._renderedComponent && element._currentElement.props === element._renderedComponent._currentElement) { <add> nodeType = 'Wrapper'; <add> } <add> if (name === null) { <add> name = 'No display name'; <add> } <add> } else if (element._stringText) { <add> nodeType = 'Text'; <add> text = element._stringText; <add> } else { <add> name = type.displayName || type.name || 'Unknown'; <add> } <add> } <add> <add> if (element._instance) { <add> var inst = element._instance; <add> updater = { <add> setState: inst.setState && inst.setState.bind(inst), <add> forceUpdate: inst.forceUpdate && inst.forceUpdate.bind(inst), <add> setInProps: inst.forceUpdate && setInProps.bind(null, element), <add> setInState: inst.forceUpdate && setInState.bind(null, inst), <add> setInContext: inst.forceUpdate && setInContext.bind(null, inst), <add> }; <add> publicInstance = inst; <add> <add> // TODO: React ART currently falls in this bucket, but this doesn't <add> // actually make sense and we should clean this up after stabilizing our <add> // API for backends <add> if (inst._renderedChildren) { <add> children = childrenList(inst._renderedChildren); <add> } <add> } <add> <add> return { <add> nodeType, <add> type, <add> name, <add> props, <add> state, <add> context, <add> children, <add> text, <add> updater, <add> publicInstance, <add> }; <add>} <add> <add>function 
setInProps(internalInst, path: Array<string | number>, value: any) { <add> var element = internalInst._currentElement; <add> internalInst._currentElement = { <add> ...element, <add> props: copyWithSet(element.props, path, value), <add> }; <add> internalInst._instance.forceUpdate(); <add>} <add> <add>function setInState(inst, path: Array<string | number>, value: any) { <add> setIn(inst.state, path, value); <add> inst.forceUpdate(); <add>} <add> <add>function setInContext(inst, path: Array<string | number>, value: any) { <add> setIn(inst.context, path, value); <add> inst.forceUpdate(); <add>} <add> <add>function setIn(obj: Object, path: Array<string | number>, value: any) { <add> var last = path.pop(); <add> var parent = path.reduce((obj_, attr) => obj_ ? obj_[attr] : null, obj); <add> if (parent) { <add> parent[last] = value; <add> } <add>} <add> <add>function childrenList(children) { <add> var res = []; <add> for (var name in children) { <add> res.push(children[name]); <add> } <add> return res; <add>} <add> <add>function copyWithSetImpl(obj, path, idx, value) { <add> if (idx >= path.length) { <add> return value; <add> } <add> var key = path[idx]; <add> var updated = Array.isArray(obj) ? obj.slice() : {...obj}; <add> // $FlowFixMe number or string is fine here <add> updated[key] = copyWithSetImpl(obj[key], path, idx + 1, value); <add> return updated; <add>} <add> <add>function copyWithSet(obj: Object | Array<any>, path: Array<string | number>, value: any): Object | Array<any> { <add> return copyWithSetImpl(obj, path, 0, value); <add>} <add> <add>module.exports = getData;
3
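The `copyWithSet` helper in `getReactData.js` above performs an immutable set along a key path: every container on the path is shallow-copied, so the original tree stays untouched while the returned copy carries the new value. A standalone sketch of the same technique, stripped of the Flow annotations (names here are illustrative, not the React Native API):

```javascript
// Immutable set along a path: copies each container on the path,
// leaving the original object untouched.
function copyWithSet(obj, path, value) {
  function impl(node, idx) {
    if (idx >= path.length) {
      return value;
    }
    var key = path[idx];
    // Shallow-copy the current container before writing into it.
    var updated = Array.isArray(node) ? node.slice() : Object.assign({}, node);
    updated[key] = impl(node[key], idx + 1);
    return updated;
  }
  return impl(obj, 0);
}
```

This is the same copy-on-write pattern the devtools updater functions (`setInProps` etc.) rely on to trigger a re-render without mutating React's internal element objects.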
Text
Text
move changelog entry for to railties [ci-skip]
c267be43d36d3b443b797a5555deadd29857ac58
<ide><path>activesupport/CHANGELOG.md <ide> <ide> *Jean Boussier* <ide> <del>* Add `Rails.application.message_verifiers` as a central point to configure <del> and create message verifiers for an application. <del> <del> This allows applications to, for example, rotate old `secret_key_base` <del> values: <del> <del> ```ruby <del> config.before_initialize do |app| <del> app.message_verifiers.rotate(secret_key_base: "old secret_key_base") <del> end <del> ``` <del> <del> And for libraries to create preconfigured message verifiers: <del> <del> ```ruby <del> ActiveStorage.verifier = Rails.application.message_verifiers["ActiveStorage"] <del> ``` <del> <del> *Jonathan Hefner* <del> <ide> * Add `assert_error_reported` and `assert_no_error_reported` <ide> <ide> Allows to easily asserts an error happened but was handled <ide><path>railties/CHANGELOG.md <add>* Add `Rails.application.message_verifiers` as a central point to configure <add> and create message verifiers for an application. <add> <add> This allows applications to, for example, rotate old `secret_key_base` <add> values: <add> <add> ```ruby <add> config.before_initialize do |app| <add> app.message_verifiers.rotate(secret_key_base: "old secret_key_base") <add> end <add> ``` <add> <add> And for libraries to create preconfigured message verifiers: <add> <add> ```ruby <add> ActiveStorage.verifier = Rails.application.message_verifiers["ActiveStorage"] <add> ``` <add> <add> *Jonathan Hefner* <add> <ide> * Support MySQL's ssl-mode option for the dbconsole command. <ide> <ide> Verifying the identity of the database server requires setting the ssl-mode
2
Javascript
Javascript
fix merge with symbol font fix (part 2)
31d8d13ba294f2e16ca984aedd932a23748b33f5
<ide><path>src/fonts.js <ide> var Font = (function FontClosure() { <ide> // Moving all symbolic font glyphs into 0xF000 - 0xF0FF range. <ide> if (this.isSymbolicFont) { <ide> for (var i = 0, ii = glyphs.length; i < ii; i++) { <del> var cid = i + 1; <del> var code = glyphs[i].unicode; <del> code = kSymbolicFontGlyphOffset | (code & 0xFF); <del> glyphs[i].unicode = toFontChar[cid] = code; <add> var code = glyphs[i].unicode & 0xFF; <add> var fontCharCode = kSymbolicFontGlyphOffset | code; <add> glyphs[i].unicode = toFontChar[code] = fontCharCode; <ide> } <ide> this.useToFontChar = true; <ide> }
1
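The fonts fix above changes the symbolic-glyph remapping to key `toFontChar` by the low byte of each glyph's own code (instead of by CID) while still moving every glyph into the 0xF000–0xF0FF range. A minimal sketch of that remapping, assuming `kSymbolicFontGlyphOffset` is `0xF000` (the Unicode Private Use Area offset PDF.js uses for symbolic fonts):

```javascript
// Assumed constant: the Private Use Area offset used for symbolic fonts.
var kSymbolicFontGlyphOffset = 0xF000;

// Remap glyph unicode values into 0xF000-0xF0FF, keyed by the low byte of
// the original code (the fixed behavior), not by CID (the buggy behavior).
function remapSymbolicGlyphs(glyphs) {
  var toFontChar = [];
  for (var i = 0, ii = glyphs.length; i < ii; i++) {
    var code = glyphs[i].unicode & 0xFF;
    var fontCharCode = kSymbolicFontGlyphOffset | code;
    glyphs[i].unicode = toFontChar[code] = fontCharCode;
  }
  return toFontChar;
}
```

Keying by `code` rather than `cid` means a character code looked up later maps back to the same remapped font char, which is what the original CID-based indexing got wrong.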
Text
Text
add faq to reduce noise in issues
dbdc4b761b51c59c90e05fe2fb8977d2182ba992
<ide><path>.github/ISSUE_TEMPLATE.md <ide> Bug Reports: <ide> Provide a *minimal* test case, see https://webkit.org/test-case-reduction/ <ide> Use the latest shipping version of jQuery in your test case! <ide> We prefer test cases on http://jsbin.com or http://jsfiddle.net <add> <add>Frequently Reported Issues: <add> * Selectors with '#' break: See https://github.com/jquery/jquery/issues/2824 <ide> --> <ide> <ide> ** Description**
1
Javascript
Javascript
bring clock example up-to-date
cdaf99e66330002fabdbebe12144281997e3f6d3
<ide><path>examples/clock/clock.js <ide> // Based on http://vis.stanford.edu/protovis/ex/clock.html <ide> // Based on http://blog.pixelbreaker.com/polarclock <ide> <del>var w = 960, <del> h = 700, <del> r = Math.min(w, h) / 1.8, <del> s = .09, <add>var width = 960, <add> height = 700, <add> radius = Math.min(width, height) / 1.8, <add> sectorWidth = .09, <ide> fsec = d3.time.format("%S s"), <ide> fmin = d3.time.format("%M m"), <ide> fhou = d3.time.format("%H h"), <ide> var fill = d3.scale.linear() <ide> var arc = d3.svg.arc() <ide> .startAngle(0) <ide> .endAngle(function(d) { return d.value * 2 * Math.PI; }) <del> .innerRadius(function(d) { return d.index * r; }) <del> .outerRadius(function(d) { return (d.index + s) * r; }); <add> .innerRadius(function(d) { return d.index * radius; }) <add> .outerRadius(function(d) { return (d.index + sectorWidth) * radius; }); <ide> <ide> var vis = d3.select("#clock").append("svg") <del> .attr("width", w) <del> .attr("height", h) <add> .attr("width", width) <add> .attr("height", height) <ide> .append("g") <del> .attr("transform", "translate(" + w / 2 + "," + h / 2 + ")"); <add> .attr("transform", "translate(" + width / 2 + "," + height / 2 + ")"); <ide> <ide> var g = vis.selectAll("g") <ide> .data(fields) <ide> d3.timer(function() { <ide> .attr("dy", function(d) { return d.value < .5 ? "-.5em" : "1em"; }) <ide> .attr("transform", function(d) { <ide> return "rotate(" + 360 * d.value + ")" <del> + "translate(0," + -(d.index + s / 2) * r + ")" <add> + "translate(0," + -(d.index + sectorWidth / 2) * radius + ")" <ide> + "rotate(" + (d.value < .5 ? -90 : 90) + ")" <ide> }) <ide> .text(function(d) { return d.text; });
1
Python
Python
add early stopping
ccbe381dcdf2d43b2d88d33ba3dcf4680fe116a8
<ide><path>keras/callbacks.py <ide> def __init__(self, filepath, verbose=0, save_best_only=False): <ide> self.best_val_loss = np.Inf <ide> <ide> def on_epoch_end(self, epoch, logs={}): <del> '''currently, on_epoch_end receives epoch_logs from keras.models.Sequential.fit <del> which does only contain, if at all, the validation loss and validation accuracy''' <ide> if self.save_best_only and self.params['do_validation']: <ide> cur_val_loss = logs.get('val_loss') <ide> self.val_loss.append(cur_val_loss) <ide> def on_epoch_end(self, epoch, logs={}): <ide> if self.verbose > 0: <ide> print("Epoch %05d: validation loss did not improve" % (epoch)) <ide> elif self.save_best_only and not self.params['do_validation']: <del> import warnings <ide> warnings.warn("Can save best model only with validation data, skipping", RuntimeWarning) <ide> elif not self.save_best_only: <ide> if self.verbose > 0: <ide> print("Epoch %05d: saving model to %s" % (epoch, self.filepath)) <ide> self.model.save_weights(self.filepath, overwrite=True) <ide> <add> <add>class EarlyStopping(Callback): <add> def __init__(self, patience=1, verbose=0): <add> super(EarlyStopping, self).__init__() <add> <add> self.patience = patience <add> self.verbose = verbose <add> self.best_val_loss = np.Inf <add> self.wait = 0 <add> <add> def on_epoch_end(self, epoch, logs={}): <add> if not self.params['do_validation']: <add> warnings.warn("Early stopping requires validation data!", RuntimeWarning) <add> <add> cur_val_loss = logs.get('val_loss') <add> if cur_val_loss < self.best_val_loss: <add> self.best_val_loss = cur_val_loss <add> self.wait = 0 <add> else: <add> if self.wait >= self.patience: <add> if self.verbose > 0: <add> print("Epoch %05d: early stopping" % (epoch)) <add> self.model.stop_training = True <add> self.wait += 1 <ide><path>keras/models.py <ide> def fit(self, X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], <ide> <ide> index_array = np.arange(len(y)) <ide> <del> callbacks = 
cbks.CallbackList(callbacks) <ide> if verbose: <del> callbacks.append(cbks.BaseLogger()) <del> callbacks.append(cbks.History()) <add> callbacks = [cbks.BaseLogger()] + callbacks <add> callbacks = cbks.CallbackList([cbks.History()] + callbacks) <ide> <ide> callbacks._set_model(self) <ide> callbacks._set_params({ <ide> def fit(self, X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], <ide> }) <ide> callbacks.on_train_begin() <ide> <add> self.stop_training = False <ide> for epoch in range(nb_epoch): <ide> callbacks.on_epoch_begin(epoch) <ide> if shuffle: <ide> def fit(self, X, y, batch_size=128, nb_epoch=100, verbose=1, callbacks=[], <ide> epoch_logs['val_loss'] = val_loss <ide> <ide> callbacks.on_epoch_end(epoch, epoch_logs) <add> if self.stop_training: <add> break <ide> <ide> callbacks.on_train_end() <del> # return history <del> return callbacks.callbacks[-1] <add> return callbacks.callbacks[0] # return history <ide> <ide> def predict(self, X, batch_size=128, verbose=1): <ide> X = standardize_X(X)
2
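The `EarlyStopping` callback added above tracks the best validation loss seen so far and sets `stop_training` once the loss has failed to improve for more than `patience` consecutive epochs. The same patience logic as a standalone sketch (illustrative names, not the Keras API; note that, mirroring the patch, `wait` is incremented *after* the check, so with `patience = 1` training stops on the second consecutive non-improving epoch):

```javascript
// Patience-based early stopping: given a sequence of per-epoch validation
// losses, return the epoch index at which training would stop.
function earlyStoppingEpoch(valLosses, patience) {
  var bestValLoss = Infinity;
  var wait = 0;
  for (var epoch = 0; epoch < valLosses.length; epoch++) {
    if (valLosses[epoch] < bestValLoss) {
      bestValLoss = valLosses[epoch]; // improvement: record it and reset
      wait = 0;
    } else {
      if (wait >= patience) {
        return epoch; // this is where stop_training would be set
      }
      wait += 1;
    }
  }
  return valLosses.length - 1; // ran all epochs without triggering
}
```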
Go
Go
remove timeoutconn package
4d8ce0ef4a1ebf992906056f46b3f664b0bd30a4
<ide><path>pkg/timeoutconn/timeoutconn.go <del>// Package timeoutconn provides overridden net.Conn that supports deadline (timeout). <del>package timeoutconn <del> <del>import ( <del> "net" <del> "time" <del>) <del> <del>// New creates a net.Conn with a timeout for every Read operation. <del>func New(netConn net.Conn, timeout time.Duration) net.Conn { <del> return &conn{netConn, timeout} <del>} <del> <del>// A net.Conn that sets a deadline for every Read operation. <del>// FIXME was documented the deadline was on Write operation too but not implement <del>type conn struct { <del> net.Conn <del> timeout time.Duration <del>} <del> <del>func (c *conn) Read(b []byte) (int, error) { <del> if c.timeout > 0 { <del> if err := c.Conn.SetReadDeadline(time.Now().Add(c.timeout)); err != nil { <del> return 0, err <del> } <del> } <del> return c.Conn.Read(b) <del>} <ide><path>pkg/timeoutconn/timeoutconn_test.go <del>package timeoutconn <del> <del>import ( <del> "bufio" <del> "fmt" <del> "net" <del> "net/http" <del> "net/http/httptest" <del> "testing" <del> "time" <del>) <del> <del>func TestRead(t *testing.T) { <del> ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { <del> fmt.Fprintln(w, "hello") <del> })) <del> defer ts.Close() <del> conn, err := net.Dial("tcp", ts.URL[7:]) <del> if err != nil { <del> t.Fatalf("failed to create connection to %q: %v", ts.URL, err) <del> } <del> tconn := New(conn, 1*time.Second) <del> <del> if _, err = bufio.NewReader(tconn).ReadString('\n'); err == nil { <del> t.Fatalf("expected timeout error, got none") <del> } <del> if _, err := fmt.Fprintf(tconn, "GET / HTTP/1.0\r\n\r\n"); err != nil { <del> t.Errorf("unexpected error: %v", err) <del> } <del> if _, err = bufio.NewReader(tconn).ReadString('\n'); err != nil { <del> t.Errorf("unexpected error: %v", err) <del> } <del>}
2
Javascript
Javascript
reset updaterange.count to -1 after updatebuffer
a40b7d287bca89bef9f8cb331e041471765bde2e
<ide><path>src/renderers/webgl/WebGLAttributes.js <ide> function WebGLAttributes( gl ) { <ide> gl.bufferSubData( bufferType, updateRange.offset * array.BYTES_PER_ELEMENT, <ide> array.subarray( updateRange.offset, updateRange.offset + updateRange.count ) ); <ide> <del> updateRange.count = 0; // reset range <add> updateRange.count = -1; // reset range <ide> <ide> } <ide>
1
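Resetting `count` to `0` left the range describing a zero-length partial upload, whereas `-1` is the sentinel meaning "no partial range requested, upload the whole buffer". A sketch of that convention (illustrative, not the actual Three.js code path):

```javascript
// count === -1 means "upload the whole buffer"; a non-negative count
// describes a partial upload of [offset, offset + count).
function planBufferUpdate(array, updateRange) {
  var plan;
  if (updateRange.count === -1) {
    plan = { offset: 0, count: array.length }; // full upload
  } else {
    plan = { offset: updateRange.offset, count: updateRange.count }; // partial
  }
  updateRange.count = -1; // reset so the next update defaults to a full upload
  return plan;
}
```

Resetting to `-1` (rather than `0`) after each upload is what keeps subsequent updates from being misread as empty partial ranges.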
Text
Text
fix typos in rails 5.0 release notes [ci skip]
e477cad2754d4c6d08b61c67951fe6db5d3b2298
<ide><path>guides/source/5_0_release_notes.md <ide> Highlights in Rails 5.0: <ide> <ide> * Action Cable <ide> * Rails API <del>* Active Rcord Attributes API <add>* Active Record Attributes API <ide> * Test Runner <ide> * Exclusive use of `rails` CLI over Rake <ide> * Sprockets 3 <ide> Please refer to the [Changelog][railties] for detailed changes. <ide> <ide> ### Removals <ide> <del>* Removed debugger supprt use byebug instead. `debugger` is not supported by <add>* Removed debugger support, use byebug instead. `debugger` is not supported by <ide> Ruby <ide> 2.2. ([commit](https://github.com/rails/rails/commit/93559da4826546d07014f8cfa399b64b4a143127)) <ide>
1
Javascript
Javascript
update documentation for permissionsandroid
9c2a5cdbc309c0b00d62c52ba1bc0835451af327
<ide><path>Libraries/PermissionsAndroid/PermissionsAndroid.js <ide> type PermissionStatus = 'granted' | 'denied' | 'never_ask_again'; <ide> * <ide> * ### Example <ide> * ``` <add> * import { PermissionsAndroid } from 'react-native'; <add> * <ide> * async function requestCameraPermission() { <ide> * try { <ide> * const granted = await PermissionsAndroid.request(
1
Javascript
Javascript
convert the sidebar to es6 syntax
26ad82f5c2dcc866f85fedf0563f65998c8a6391
<ide><path>web/pdf_sidebar.js <ide> import { mozL10n } from './ui_utils'; <ide> import { RenderingStates } from './pdf_rendering_queue'; <ide> <del>var UI_NOTIFICATION_CLASS = 'pdfSidebarNotification'; <add>const UI_NOTIFICATION_CLASS = 'pdfSidebarNotification'; <ide> <del>var SidebarView = { <add>const SidebarView = { <ide> NONE: 0, <ide> THUMBS: 1, <ide> OUTLINE: 2, <ide> var SidebarView = { <ide> * for documents containing outline/attachments. The default value is `false`. <ide> */ <ide> <del>/** <del> * @class <del> */ <del>var PDFSidebar = (function PDFSidebarClosure() { <add>class PDFSidebar { <ide> /** <del> * @constructs PDFSidebar <ide> * @param {PDFSidebarOptions} options <ide> */ <del> function PDFSidebar(options) { <add> constructor(options) { <ide> this.isOpen = false; <ide> this.active = SidebarView.THUMBS; <ide> this.isInitialViewSet = false; <ide> var PDFSidebar = (function PDFSidebarClosure() { <ide> this._addEventListeners(); <ide> } <ide> <del> PDFSidebar.prototype = { <del> reset: function PDFSidebar_reset() { <del> this.isInitialViewSet = false; <del> <del> this._hideUINotification(null); <del> this.switchView(SidebarView.THUMBS); <del> <del> this.outlineButton.disabled = false; <del> this.attachmentsButton.disabled = false; <del> }, <add> reset() { <add> this.isInitialViewSet = false; <ide> <del> /** <del> * @returns {number} One of the values in {SidebarView}. <del> */ <del> get visibleView() { <del> return (this.isOpen ? this.active : SidebarView.NONE); <del> }, <add> this._hideUINotification(null); <add> this.switchView(SidebarView.THUMBS); <ide> <del> get isThumbnailViewVisible() { <del> return (this.isOpen && this.active === SidebarView.THUMBS); <del> }, <add> this.outlineButton.disabled = false; <add> this.attachmentsButton.disabled = false; <add> } <ide> <del> get isOutlineViewVisible() { <del> return (this.isOpen && this.active === SidebarView.OUTLINE); <del> }, <add> /** <add> * @returns {number} One of the values in {SidebarView}. 
<add> */ <add> get visibleView() { <add> return (this.isOpen ? this.active : SidebarView.NONE); <add> } <ide> <del> get isAttachmentsViewVisible() { <del> return (this.isOpen && this.active === SidebarView.ATTACHMENTS); <del> }, <add> get isThumbnailViewVisible() { <add> return (this.isOpen && this.active === SidebarView.THUMBS); <add> } <ide> <del> /** <del> * @param {number} view - The sidebar view that should become visible, <del> * must be one of the values in {SidebarView}. <del> */ <del> setInitialView: function PDFSidebar_setInitialView(view) { <del> if (this.isInitialViewSet) { <del> return; <del> } <del> this.isInitialViewSet = true; <add> get isOutlineViewVisible() { <add> return (this.isOpen && this.active === SidebarView.OUTLINE); <add> } <ide> <del> if (this.isOpen && view === SidebarView.NONE) { <del> this._dispatchEvent(); <del> // If the user has already manually opened the sidebar, <del> // immediately closing it would be bad UX. <del> return; <del> } <del> var isViewPreserved = (view === this.visibleView); <del> this.switchView(view, /* forceOpen */ true); <add> get isAttachmentsViewVisible() { <add> return (this.isOpen && this.active === SidebarView.ATTACHMENTS); <add> } <ide> <del> if (isViewPreserved) { <del> // Prevent dispatching two back-to-back `sidebarviewchanged` events, <del> // since `this.switchView` dispatched the event if the view changed. <del> this._dispatchEvent(); <del> } <del> }, <add> /** <add> * @param {number} view - The sidebar view that should become visible, <add> * must be one of the values in {SidebarView}. <add> */ <add> setInitialView(view) { <add> if (this.isInitialViewSet) { <add> return; <add> } <add> this.isInitialViewSet = true; <ide> <del> /** <del> * @param {number} view - The sidebar view that should be switched to, <del> * must be one of the values in {SidebarView}. <del> * @param {boolean} forceOpen - (optional) Ensure that the sidebar is open. <del> * The default value is false. 
<del> */
<del> switchView: function PDFSidebar_switchView(view, forceOpen) {
<del> if (view === SidebarView.NONE) {
<del> this.close();
<del> return;
<del> }
<del> var isViewChanged = (view !== this.active);
<del> var shouldForceRendering = false;
<add> if (this.isOpen && view === SidebarView.NONE) {
<add> this._dispatchEvent();
<add> // If the user has already manually opened the sidebar,
<add> // immediately closing it would be bad UX.
<add> return;
<add> }
<add> var isViewPreserved = (view === this.visibleView);
<add> this.switchView(view, /* forceOpen */ true);
<add>
<add> if (isViewPreserved) {
<add> // Prevent dispatching two back-to-back `sidebarviewchanged` events,
<add> // since `this.switchView` dispatched the event if the view changed.
<add> this._dispatchEvent();
<add> }
<add> }
<ide>
<del> switch (view) {
<del> case SidebarView.THUMBS:
<del> this.thumbnailButton.classList.add('toggled');
<del> this.outlineButton.classList.remove('toggled');
<del> this.attachmentsButton.classList.remove('toggled');
<del>
<del> this.thumbnailView.classList.remove('hidden');
<del> this.outlineView.classList.add('hidden');
<del> this.attachmentsView.classList.add('hidden');
<del>
<del> if (this.isOpen && isViewChanged) {
<del> this._updateThumbnailViewer();
<del> shouldForceRendering = true;
<del> }
<del> break;
<del> case SidebarView.OUTLINE:
<del> if (this.outlineButton.disabled) {
<del> return;
<del> }
<del> this.thumbnailButton.classList.remove('toggled');
<del> this.outlineButton.classList.add('toggled');
<del> this.attachmentsButton.classList.remove('toggled');
<del>
<del> this.thumbnailView.classList.add('hidden');
<del> this.outlineView.classList.remove('hidden');
<del> this.attachmentsView.classList.add('hidden');
<del> break;
<del> case SidebarView.ATTACHMENTS:
<del> if (this.attachmentsButton.disabled) {
<del> return;
<del> }
<del> this.thumbnailButton.classList.remove('toggled');
<del> this.outlineButton.classList.remove('toggled');
<del> this.attachmentsButton.classList.add('toggled');
<del>
<del> this.thumbnailView.classList.add('hidden');
<del> this.outlineView.classList.add('hidden');
<del> this.attachmentsView.classList.remove('hidden');
<del> break;
<del> default:
<del> console.error('PDFSidebar_switchView: "' + view +
<del> '" is an unsupported value.');
<add> /**
<add> * @param {number} view - The sidebar view that should be switched to,
<add> * must be one of the values in {SidebarView}.
<add> * @param {boolean} forceOpen - (optional) Ensure that the sidebar is open.
<add> * The default value is `false`.
<add> */
<add> switchView(view, forceOpen = false) {
<add> if (view === SidebarView.NONE) {
<add> this.close();
<add> return;
<add> }
<add> var isViewChanged = (view !== this.active);
<add> var shouldForceRendering = false;
<add>
<add> switch (view) {
<add> case SidebarView.THUMBS:
<add> this.thumbnailButton.classList.add('toggled');
<add> this.outlineButton.classList.remove('toggled');
<add> this.attachmentsButton.classList.remove('toggled');
<add>
<add> this.thumbnailView.classList.remove('hidden');
<add> this.outlineView.classList.add('hidden');
<add> this.attachmentsView.classList.add('hidden');
<add>
<add> if (this.isOpen && isViewChanged) {
<add> this._updateThumbnailViewer();
<add> shouldForceRendering = true;
<add> }
<add> break;
<add> case SidebarView.OUTLINE:
<add> if (this.outlineButton.disabled) {
<ide> return;
<del> }
<del> // Update the active view *after* it has been validated above,
<del> // in order to prevent setting it to an invalid state.
<del> this.active = view | 0;
<del>
<del> if (forceOpen && !this.isOpen) {
<del> this.open();
<del> return; // NOTE: Opening will trigger rendering, and dispatch the event.
<del> } <del> if (shouldForceRendering) { <del> this._forceRendering(); <del> } <del> if (isViewChanged) { <del> this._dispatchEvent(); <del> } <del> this._hideUINotification(this.active); <del> }, <del> <del> open: function PDFSidebar_open() { <del> if (this.isOpen) { <add> } <add> this.thumbnailButton.classList.remove('toggled'); <add> this.outlineButton.classList.add('toggled'); <add> this.attachmentsButton.classList.remove('toggled'); <add> <add> this.thumbnailView.classList.add('hidden'); <add> this.outlineView.classList.remove('hidden'); <add> this.attachmentsView.classList.add('hidden'); <add> break; <add> case SidebarView.ATTACHMENTS: <add> if (this.attachmentsButton.disabled) { <add> return; <add> } <add> this.thumbnailButton.classList.remove('toggled'); <add> this.outlineButton.classList.remove('toggled'); <add> this.attachmentsButton.classList.add('toggled'); <add> <add> this.thumbnailView.classList.add('hidden'); <add> this.outlineView.classList.add('hidden'); <add> this.attachmentsView.classList.remove('hidden'); <add> break; <add> default: <add> console.error('PDFSidebar_switchView: "' + view + <add> '" is an unsupported value.'); <ide> return; <del> } <del> this.isOpen = true; <del> this.toggleButton.classList.add('toggled'); <del> <del> this.outerContainer.classList.add('sidebarMoving'); <del> this.outerContainer.classList.add('sidebarOpen'); <del> <del> if (this.active === SidebarView.THUMBS) { <del> this._updateThumbnailViewer(); <del> } <add> } <add> // Update the active view *after* it has been validated above, <add> // in order to prevent setting it to an invalid state. <add> this.active = view | 0; <add> <add> if (forceOpen && !this.isOpen) { <add> this.open(); <add> return; // NOTE: Opening will trigger rendering, and dispatch the event. 
<add> } <add> if (shouldForceRendering) { <ide> this._forceRendering(); <add> } <add> if (isViewChanged) { <ide> this._dispatchEvent(); <add> } <add> this._hideUINotification(this.active); <add> } <ide> <del> this._hideUINotification(this.active); <del> }, <add> open() { <add> if (this.isOpen) { <add> return; <add> } <add> this.isOpen = true; <add> this.toggleButton.classList.add('toggled'); <ide> <del> close: function PDFSidebar_close() { <del> if (!this.isOpen) { <del> return; <del> } <del> this.isOpen = false; <del> this.toggleButton.classList.remove('toggled'); <add> this.outerContainer.classList.add('sidebarMoving'); <add> this.outerContainer.classList.add('sidebarOpen'); <ide> <del> this.outerContainer.classList.add('sidebarMoving'); <del> this.outerContainer.classList.remove('sidebarOpen'); <add> if (this.active === SidebarView.THUMBS) { <add> this._updateThumbnailViewer(); <add> } <add> this._forceRendering(); <add> this._dispatchEvent(); <ide> <del> this._forceRendering(); <del> this._dispatchEvent(); <del> }, <add> this._hideUINotification(this.active); <add> } <ide> <del> toggle: function PDFSidebar_toggle() { <del> if (this.isOpen) { <del> this.close(); <del> } else { <del> this.open(); <del> } <del> }, <add> close() { <add> if (!this.isOpen) { <add> return; <add> } <add> this.isOpen = false; <add> this.toggleButton.classList.remove('toggled'); <ide> <del> /** <del> * @private <del> */ <del> _dispatchEvent: function PDFSidebar_dispatchEvent() { <del> this.eventBus.dispatch('sidebarviewchanged', { <del> source: this, <del> view: this.visibleView, <del> }); <del> }, <add> this.outerContainer.classList.add('sidebarMoving'); <add> this.outerContainer.classList.remove('sidebarOpen'); <ide> <del> /** <del> * @private <del> */ <del> _forceRendering: function PDFSidebar_forceRendering() { <del> if (this.onToggled) { <del> this.onToggled(); <del> } else { // Fallback <del> this.pdfViewer.forceRendering(); <del> this.pdfThumbnailViewer.forceRendering(); <del> } 
<del> }, <add> this._forceRendering(); <add> this._dispatchEvent(); <add> } <ide> <del> /** <del> * @private <del> */ <del> _updateThumbnailViewer: function PDFSidebar_updateThumbnailViewer() { <del> var pdfViewer = this.pdfViewer; <del> var thumbnailViewer = this.pdfThumbnailViewer; <del> <del> // Use the rendered pages to set the corresponding thumbnail images. <del> var pagesCount = pdfViewer.pagesCount; <del> for (var pageIndex = 0; pageIndex < pagesCount; pageIndex++) { <del> var pageView = pdfViewer.getPageView(pageIndex); <del> if (pageView && pageView.renderingState === RenderingStates.FINISHED) { <del> var thumbnailView = thumbnailViewer.getThumbnail(pageIndex); <del> thumbnailView.setImage(pageView); <del> } <del> } <del> thumbnailViewer.scrollThumbnailIntoView(pdfViewer.currentPageNumber); <del> }, <add> toggle() { <add> if (this.isOpen) { <add> this.close(); <add> } else { <add> this.open(); <add> } <add> } <ide> <del> /** <del> * @private <del> */ <del> _showUINotification: function (view) { <del> if (this.disableNotification) { <del> return; <del> } <add> /** <add> * @private <add> */ <add> _dispatchEvent() { <add> this.eventBus.dispatch('sidebarviewchanged', { <add> source: this, <add> view: this.visibleView, <add> }); <add> } <ide> <del> this.toggleButton.title = mozL10n.get('toggle_sidebar_notification.title', <del> null, 'Toggle Sidebar (document contains outline/attachments)'); <add> /** <add> * @private <add> */ <add> _forceRendering() { <add> if (this.onToggled) { <add> this.onToggled(); <add> } else { // Fallback <add> this.pdfViewer.forceRendering(); <add> this.pdfThumbnailViewer.forceRendering(); <add> } <add> } <ide> <del> if (!this.isOpen) { <del> // Only show the notification on the `toggleButton` if the sidebar is <del> // currently closed, to avoid unnecessarily bothering the user. 
<del> this.toggleButton.classList.add(UI_NOTIFICATION_CLASS); <del> } else if (view === this.active) { <del> // If the sidebar is currently open *and* the `view` is visible, do not <del> // bother the user with a notification on the corresponding button. <del> return; <add> /** <add> * @private <add> */ <add> _updateThumbnailViewer() { <add> var pdfViewer = this.pdfViewer; <add> var thumbnailViewer = this.pdfThumbnailViewer; <add> <add> // Use the rendered pages to set the corresponding thumbnail images. <add> var pagesCount = pdfViewer.pagesCount; <add> for (var pageIndex = 0; pageIndex < pagesCount; pageIndex++) { <add> var pageView = pdfViewer.getPageView(pageIndex); <add> if (pageView && pageView.renderingState === RenderingStates.FINISHED) { <add> var thumbnailView = thumbnailViewer.getThumbnail(pageIndex); <add> thumbnailView.setImage(pageView); <ide> } <add> } <add> thumbnailViewer.scrollThumbnailIntoView(pdfViewer.currentPageNumber); <add> } <ide> <add> /** <add> * @private <add> */ <add> _showUINotification(view) { <add> if (this.disableNotification) { <add> return; <add> } <add> <add> this.toggleButton.title = mozL10n.get('toggle_sidebar_notification.title', <add> null, 'Toggle Sidebar (document contains outline/attachments)'); <add> <add> if (!this.isOpen) { <add> // Only show the notification on the `toggleButton` if the sidebar is <add> // currently closed, to avoid unnecessarily bothering the user. <add> this.toggleButton.classList.add(UI_NOTIFICATION_CLASS); <add> } else if (view === this.active) { <add> // If the sidebar is currently open *and* the `view` is visible, do not <add> // bother the user with a notification on the corresponding button. 
<add> return; <add> } <add> <add> switch (view) { <add> case SidebarView.OUTLINE: <add> this.outlineButton.classList.add(UI_NOTIFICATION_CLASS); <add> break; <add> case SidebarView.ATTACHMENTS: <add> this.attachmentsButton.classList.add(UI_NOTIFICATION_CLASS); <add> break; <add> } <add> } <add> <add> /** <add> * @private <add> */ <add> _hideUINotification(view) { <add> if (this.disableNotification) { <add> return; <add> } <add> <add> var removeNotification = (view) => { <ide> switch (view) { <ide> case SidebarView.OUTLINE: <del> this.outlineButton.classList.add(UI_NOTIFICATION_CLASS); <add> this.outlineButton.classList.remove(UI_NOTIFICATION_CLASS); <ide> break; <ide> case SidebarView.ATTACHMENTS: <del> this.attachmentsButton.classList.add(UI_NOTIFICATION_CLASS); <add> this.attachmentsButton.classList.remove(UI_NOTIFICATION_CLASS); <ide> break; <ide> } <del> }, <del> <del> /** <del> * @private <del> */ <del> _hideUINotification: function (view) { <del> if (this.disableNotification) { <del> return; <del> } <del> <del> var removeNotification = function (view) { <del> switch (view) { <del> case SidebarView.OUTLINE: <del> this.outlineButton.classList.remove(UI_NOTIFICATION_CLASS); <del> break; <del> case SidebarView.ATTACHMENTS: <del> this.attachmentsButton.classList.remove(UI_NOTIFICATION_CLASS); <del> break; <del> } <del> }.bind(this); <add> }; <add> <add> if (!this.isOpen && view !== null) { <add> // Only hide the notifications when the sidebar is currently open, <add> // or when it is being reset (i.e. `view === null`). <add> return; <add> } <add> this.toggleButton.classList.remove(UI_NOTIFICATION_CLASS); <add> <add> if (view !== null) { <add> removeNotification(view); <add> return; <add> } <add> for (view in SidebarView) { // Remove all sidebar notifications on reset. 
<add> removeNotification(SidebarView[view]); <add> } <add> <add> this.toggleButton.title = mozL10n.get('toggle_sidebar.title', null, <add> 'Toggle Sidebar'); <add> } <ide> <del> if (!this.isOpen && view !== null) { <del> // Only hide the notifications when the sidebar is currently open, <del> // or when it is being reset (i.e. `view === null`). <del> return; <add> /** <add> * @private <add> */ <add> _addEventListeners() { <add> this.mainContainer.addEventListener('transitionend', (evt) => { <add> if (evt.target === this.mainContainer) { <add> this.outerContainer.classList.remove('sidebarMoving'); <ide> } <del> this.toggleButton.classList.remove(UI_NOTIFICATION_CLASS); <add> }); <ide> <del> if (view !== null) { <del> removeNotification(view); <del> return; <del> } <del> for (view in SidebarView) { // Remove all sidebar notifications on reset. <del> removeNotification(SidebarView[view]); <add> // Buttons for switching views. <add> this.thumbnailButton.addEventListener('click', () => { <add> this.switchView(SidebarView.THUMBS); <add> }); <add> <add> this.outlineButton.addEventListener('click', () => { <add> this.switchView(SidebarView.OUTLINE); <add> }); <add> this.outlineButton.addEventListener('dblclick', () => { <add> this.pdfOutlineViewer.toggleOutlineTree(); <add> }); <add> <add> this.attachmentsButton.addEventListener('click', () => { <add> this.switchView(SidebarView.ATTACHMENTS); <add> }); <add> <add> // Disable/enable views. <add> this.eventBus.on('outlineloaded', (evt) => { <add> var outlineCount = evt.outlineCount; <add> <add> this.outlineButton.disabled = !outlineCount; <add> <add> if (outlineCount) { <add> this._showUINotification(SidebarView.OUTLINE); <add> } else if (this.active === SidebarView.OUTLINE) { <add> // If the outline view was opened during document load, switch away <add> // from it if it turns out that the document has no outline. 
<add> this.switchView(SidebarView.THUMBS); <ide> } <add> }); <ide> <del> this.toggleButton.title = mozL10n.get('toggle_sidebar.title', null, <del> 'Toggle Sidebar'); <del> }, <del> <del> /** <del> * @private <del> */ <del> _addEventListeners: function PDFSidebar_addEventListeners() { <del> var self = this; <del> <del> self.mainContainer.addEventListener('transitionend', function(evt) { <del> if (evt.target === /* mainContainer */ this) { <del> self.outerContainer.classList.remove('sidebarMoving'); <del> } <del> }); <del> <del> // Buttons for switching views. <del> self.thumbnailButton.addEventListener('click', function() { <del> self.switchView(SidebarView.THUMBS); <del> }); <del> <del> self.outlineButton.addEventListener('click', function() { <del> self.switchView(SidebarView.OUTLINE); <del> }); <del> self.outlineButton.addEventListener('dblclick', function() { <del> self.pdfOutlineViewer.toggleOutlineTree(); <del> }); <del> <del> self.attachmentsButton.addEventListener('click', function() { <del> self.switchView(SidebarView.ATTACHMENTS); <del> }); <del> <del> // Disable/enable views. <del> self.eventBus.on('outlineloaded', function(e) { <del> var outlineCount = e.outlineCount; <del> <del> self.outlineButton.disabled = !outlineCount; <del> <del> if (outlineCount) { <del> self._showUINotification(SidebarView.OUTLINE); <del> } else if (self.active === SidebarView.OUTLINE) { <del> // If the outline view was opened during document load, switch away <del> // from it if it turns out that the document has no outline. 
<del> self.switchView(SidebarView.THUMBS); <del> } <del> }); <del> <del> self.eventBus.on('attachmentsloaded', function(e) { <del> var attachmentsCount = e.attachmentsCount; <add> this.eventBus.on('attachmentsloaded', (evt) => { <add> var attachmentsCount = evt.attachmentsCount; <ide> <del> self.attachmentsButton.disabled = !attachmentsCount; <add> this.attachmentsButton.disabled = !attachmentsCount; <ide> <del> if (attachmentsCount) { <del> self._showUINotification(SidebarView.ATTACHMENTS); <del> } else if (self.active === SidebarView.ATTACHMENTS) { <del> // If the attachment view was opened during document load, switch away <del> // from it if it turns out that the document has no attachments. <del> self.switchView(SidebarView.THUMBS); <del> } <del> }); <del> <del> // Update the thumbnailViewer, if visible, when exiting presentation mode. <del> self.eventBus.on('presentationmodechanged', function(e) { <del> if (!e.active && !e.switchInProgress && self.isThumbnailViewVisible) { <del> self._updateThumbnailViewer(); <del> } <del> }); <del> }, <del> }; <add> if (attachmentsCount) { <add> this._showUINotification(SidebarView.ATTACHMENTS); <add> } else if (this.active === SidebarView.ATTACHMENTS) { <add> // If the attachment view was opened during document load, switch away <add> // from it if it turns out that the document has no attachments. <add> this.switchView(SidebarView.THUMBS); <add> } <add> }); <ide> <del> return PDFSidebar; <del>})(); <add> // Update the thumbnailViewer, if visible, when exiting presentation mode. <add> this.eventBus.on('presentationmodechanged', (evt) => { <add> if (!evt.active && !evt.switchInProgress && this.isThumbnailViewVisible) { <add> this._updateThumbnailViewer(); <add> } <add> }); <add> } <add>} <ide> <ide> export { <ide> SidebarView,
1
Javascript
Javascript
ensure app doesn't crash when module is absent
22475ed38d181754d64a69f3d2143fa7d4b22a35
<ide><path>Libraries/ReactNative/I18nManager.js <ide> <ide> import NativeI18nManager from './NativeI18nManager'; <ide> <add>const i18nConstants = NativeI18nManager <add> ? NativeI18nManager.getConstants() <add> : { <add> isRTL: false, <add> doLeftAndRightSwapInRTL: true, <add> }; <add> <ide> module.exports = { <ide> getConstants: () => { <del> return NativeI18nManager.getConstants(); <add> return i18nConstants; <ide> }, <ide> <ide> allowRTL: (shouldAllow: boolean) => { <add> if (!NativeI18nManager) { <add> return; <add> } <add> <ide> NativeI18nManager.allowRTL(shouldAllow); <ide> }, <ide> <ide> forceRTL: (shouldForce: boolean) => { <add> if (!NativeI18nManager) { <add> return; <add> } <add> <ide> NativeI18nManager.forceRTL(shouldForce); <ide> }, <ide> <ide> swapLeftAndRightInRTL: (flipStyles: boolean) => { <add> if (!NativeI18nManager) { <add> return; <add> } <add> <ide> NativeI18nManager.swapLeftAndRightInRTL(flipStyles); <ide> }, <ide> <del> isRTL: NativeI18nManager.getConstants().isRTL, <del> doLeftAndRightSwapInRTL: NativeI18nManager.getConstants() <del> .doLeftAndRightSwapInRTL, <add> isRTL: i18nConstants.isRTL, <add> doLeftAndRightSwapInRTL: i18nConstants.doLeftAndRightSwapInRTL, <ide> }; <ide><path>Libraries/ReactNative/NativeI18nManager.js <ide> export interface Spec extends TurboModule { <ide> swapLeftAndRightInRTL: (flipStyles: boolean) => void; <ide> } <ide> <del>export default TurboModuleRegistry.getEnforcing<Spec>('I18nManager'); <add>export default TurboModuleRegistry.get<Spec>('I18nManager');
2
Ruby
Ruby
extract common logic into a method
7293cac8548fb8083f718922d8d3ca9b3a18767f
<ide><path>activerecord/lib/active_record/attribute_methods.rb <ide> def instance_method_already_implemented?(method_name) <ide> raise DangerousAttributeError, "#{method_name} is defined by ActiveRecord" <ide> end <ide> <del> if superclass == Base <add> if active_record_super == Base <ide> super <ide> else <ide> # If B < A and A defines its own attribute method, then we don't want to overwrite that. <ide><path>activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb <ide> def retrieve_connection_pool(klass) <ide> pool = @class_to_pool[klass.name] <ide> return pool if pool <ide> return nil if ActiveRecord::Base == klass <del> <del> if klass.superclass && klass.superclass < Model <del> retrieve_connection_pool klass.superclass <del> else <del> retrieve_connection_pool ActiveRecord::Base <del> end <add> retrieve_connection_pool klass.active_record_super <ide> end <ide> end <ide> <ide><path>activerecord/lib/active_record/core.rb <ide> def arel_engine <ide> if self == ActiveRecord::Base <ide> ActiveRecord::Base <ide> else <del> if connection_handler.connection_pools[name] <del> self <del> elsif superclass < ActiveRecord::Model <del> superclass.arel_engine <del> else <del> ActiveRecord::Base <del> end <add> connection_handler.connection_pools[name] ? self : active_record_super.arel_engine <ide> end <ide> end <ide> end <ide><path>activerecord/lib/active_record/inheritance.rb <ide> module Inheritance <ide> module ClassMethods <ide> # True if this isn't a concrete subclass needing a STI type condition. <ide> def descends_from_active_record? <del> if !(superclass < Model) <del> true <del> elsif superclass.abstract_class? <del> superclass.descends_from_active_record? <add> sup = active_record_super <add> <add> if sup.abstract_class? <add> sup.descends_from_active_record? 
<ide> else
<del> superclass == Base || !columns_hash.include?(inheritance_column)
<add> sup == Base || !columns_hash.include?(inheritance_column)
<ide> end
<ide> end
<ide>
<ide> def instantiate(record)
<ide> instance
<ide> end
<ide>
<add> # If this class includes ActiveRecord::Model then it won't have a
<add> # superclass. So this provides a way to get to the 'root' (ActiveRecord::Base),
<add> # through inheritance hierarchy, ending in Base, whether or not that is
<add> # actually an ancestor of the class.
<add> #
<add> # Mainly for internal use.
<add> def active_record_super #:nodoc:
<add> if self == Base || superclass && superclass < Model::Tag
<add> superclass
<add> else
<add> Base
<add> end
<add> end
<add>
<ide> protected
<ide>
<ide> # Returns the class descending directly from ActiveRecord::Base or an
<ide> def class_of_active_record_descendant(klass)
<ide> raise ActiveRecordError, "#{name} doesn't belong in a hierarchy descending from ActiveRecord"
<ide> end
<ide>
<del> if klass == Base || klass.superclass == Base ||
<del> klass.superclass < Model::Tag && klass.superclass.abstract_class? ||
<del> !(klass.superclass < Model::Tag)
<add> sup = klass.active_record_super
<add> if klass == Base || sup == Base || sup.abstract_class?
<ide> klass
<ide> else
<del> class_of_active_record_descendant(klass.superclass)
<add> class_of_active_record_descendant(sup)
<ide> end
<ide> end
<ide>
<ide><path>activerecord/lib/active_record/model_schema.rb
<ide> def quoted_table_name
<ide>
<ide> # Computes the table name, (re)sets it internally, and returns it.
<ide> def reset_table_name #:nodoc:
<del> if (superclass < ActiveRecord::Model) && superclass.abstract_class?
<del> self.table_name = superclass.table_name || compute_table_name
<add> if active_record_super.abstract_class?
<add> self.table_name = active_record_super.table_name || compute_table_name
<ide> elsif abstract_class?
<del> self.table_name = superclass == Base ? nil : superclass.table_name
<add> self.table_name = active_record_super == Base ? nil : active_record_super.table_name
<ide> else
<ide> self.table_name = compute_table_name
<ide> end
<ide> def full_table_name_prefix #:nodoc:
<ide>
<ide> # The name of the column containing the object's class when Single Table Inheritance is used
<ide> def inheritance_column
<del> if self == Base || !(superclass < Model)
<add> if self == Base
<ide> 'type'
<ide> else
<del> (@inheritance_column ||= nil) || superclass.inheritance_column
<add> (@inheritance_column ||= nil) || active_record_super.inheritance_column
<ide> end
<ide> end
<ide>
<ide><path>activerecord/test/cases/attribute_methods/read_test.rb
<ide> def type; :integer; end
<ide> def setup
<ide> @klass = Class.new do
<ide> def self.superclass; Base; end
<add> def self.active_record_super; Base; end
<ide> def self.base_class; self; end
<ide>
<ide> include ActiveRecord::AttributeMethods
<ide><path>activerecord/test/cases/connection_adapters/connection_handler_test.rb
<ide> def setup
<ide> @handler.establish_connection 'america', Base.connection_pool.spec
<ide> @klass = Class.new do
<ide> def self.name; 'america'; end
<add> class << self
<add> alias active_record_super superclass
<add> end
<ide> end
<ide> @subklass = Class.new(@klass) do
<ide> def self.name; 'north america'; end
7
PHP
PHP
adjust code doc
451f08565d7d491328fe3fa7be7e142f5c81ebd3
<ide><path>src/Illuminate/Testing/Fluent/Concerns/Matching.php <ide> public function whereAllType(array $bindings): self <ide> } <ide> <ide> /** <del> * Asserts that all values exist and match their expected values. <add> * Asserts that the property matches their expected values. <ide> * <ide> * @param string $key <ide> * @param array|string $expected
1
Ruby
Ruby
add ast helper functions for editing formulae
aaf7bc2bc5533f2ab177374193ad260f97b79985
<ide><path>Library/Homebrew/test/utils/ast_spec.rb
<add># typed: false
<add># frozen_string_literal: true
<add>
<add>require "utils/ast"
<add>
<add>describe Utils::AST do
<add> let(:initial_formula) do
<add> <<~RUBY
<add> class Foo < Formula
<add> url "https://brew.sh/foo-1.0.tar.gz"
<add> license all_of: [
<add> :public_domain,
<add> "MIT",
<add> "GPL-3.0-or-later" => { with: "Autoconf-exception-3.0" },
<add> ]
<add> end
<add> RUBY
<add> end
<add>
<add> describe ".replace_formula_stanza!" do
<add> it "replaces the specified stanza in a formula" do
<add> contents = initial_formula.dup
<add> described_class.replace_formula_stanza! contents, name: :license, replacement: "license :public_domain"
<add> expect(contents).to eq <<~RUBY
<add> class Foo < Formula
<add> url "https://brew.sh/foo-1.0.tar.gz"
<add> license :public_domain
<add> end
<add> RUBY
<add> end
<add> end
<add>
<add> describe ".add_formula_stanza!" do
<add> it "adds the specified stanza to a formula" do
<add> contents = initial_formula.dup
<add> described_class.add_formula_stanza! contents, name: :revision, text: "revision 1"
<add> expect(contents).to eq <<~RUBY
<add> class Foo < Formula
<add> url "https://brew.sh/foo-1.0.tar.gz"
<add> license all_of: [
<add> :public_domain,
<add> "MIT",
<add> "GPL-3.0-or-later" => { with: "Autoconf-exception-3.0" },
<add> ]
<add> revision 1
<add> end
<add> RUBY
<add> end
<add> end
<add>end
<ide><path>Library/Homebrew/utils/ast.rb
<add># typed: true
<add># frozen_string_literal: true
<add>
<add>module Utils
<add> # Helper functions for editing Ruby files.
<add> #
<add> # @api private
<add> module AST
<add> class << self
<add> extend T::Sig
<add>
<add> def replace_formula_stanza!(formula_contents, name:, replacement:, type: nil)
<add> processed_source, body_node = process_formula(formula_contents)
<add> children = body_node.begin_type? ? body_node.children.compact : [body_node]
<add> stanza_node = children.find { |child| call_node_match?(child, name: name, type: type) }
<add> raise "Could not find #{name} stanza!" if stanza_node.nil?
<add>
<add> tree_rewriter = Parser::Source::TreeRewriter.new(processed_source.buffer)
<add> tree_rewriter.replace(stanza_node.source_range, replacement.strip)
<add> formula_contents.replace(tree_rewriter.process)
<add> end
<add>
<add> def add_formula_stanza!(formula_contents, name:, text:, type: nil)
<add> processed_source, body_node = process_formula(formula_contents)
<add>
<add> preceding_component = if body_node.begin_type?
<add> body_node.children.compact.reduce do |previous_child, current_child|
<add> if formula_component_before_target?(current_child,
<add> target_name: name,
<add> target_type: type)
<add> next current_child
<add> else
<add> break previous_child
<add> end
<add> end
<add> else
<add> body_node
<add> end
<add> preceding_component = preceding_component.last_argument if preceding_component.send_type?
<add>
<add> preceding_expr = preceding_component.location.expression
<add> processed_source.comments.each do |comment|
<add> comment_expr = comment.location.expression
<add> distance = comment_expr.first_line - preceding_expr.first_line
<add> case distance
<add> when 0
<add> if comment_expr.last_line > preceding_expr.last_line ||
<add> comment_expr.end_pos > preceding_expr.end_pos
<add> preceding_expr = comment_expr
<add> end
<add> when 1
<add> preceding_expr = comment_expr
<add> end
<add> end
<add>
<add> tree_rewriter = Parser::Source::TreeRewriter.new(processed_source.buffer)
<add> tree_rewriter.insert_after(preceding_expr, "\n#{text.match?(/\A\s+/) ? text : text.indent(2)}")
<add> formula_contents.replace(tree_rewriter.process)
<add> end
<add>
<add> private
<add>
<add> def process_formula(formula_contents)
<add> Homebrew.install_bundler_gems!
<add> require "rubocop-ast" <add> <add> ruby_version = Version.new(HOMEBREW_REQUIRED_RUBY_VERSION).major_minor.to_f <add> processed_source = RuboCop::AST::ProcessedSource.new(formula_contents, ruby_version) <add> root_node = processed_source.ast <add> <add> class_node = if root_node.class_type? <add> root_node <add> elsif root_node.begin_type? <add> root_node.children.find { |n| n.class_type? && n.parent_class&.const_name == "Formula" } <add> end <add> <add> raise "Could not find formula class!" if class_node.nil? <add> <add> body_node = class_node.body <add> raise "Formula class is empty!" if body_node.nil? <add> <add> [processed_source, body_node] <add> end <add> <add> def formula_component_before_target?(node, target_name:, target_type: nil) <add> require "rubocops/components_order" <add> <add> RuboCop::Cop::FormulaAudit::ComponentsOrder::COMPONENT_PRECEDENCE_LIST.each do |components| <add> return false if components.any? do |component| <add> component_match?(component_name: component[:name], <add> component_type: component[:type], <add> target_name: target_name, <add> target_type: target_type) <add> end <add> return true if components.any? do |component| <add> call_node_match?(node, name: component[:name], type: component[:type]) <add> end <add> end <add> <add> false <add> end <add> <add> def component_match?(component_name:, component_type:, target_name:, target_type: nil) <add> component_name == target_name && (target_type.nil? || component_type == target_type) <add> end <add> <add> def call_node_match?(node, name:, type: nil) <add> node_type = if node.send_type? <add> :method_call <add> elsif node.block_type? <add> :block_call <add> end <add> return false if node_type.nil? 
<add> <add> component_match?(component_name: T.unsafe(node).method_name, <add> component_type: node_type, <add> target_name: name, <add> target_type: type) <add> end <add> end <add> end <add>end <ide><path>Library/Homebrew/utils/bottles.rb <ide> # frozen_string_literal: true <ide> <ide> require "tab" <add>require "utils/ast" <ide> <ide> module Utils <ide> # Helper functions for bottles. <ide> def formula_contents(bottle_file, <ide> end <ide> <ide> def add_bottle_stanza!(formula_contents, bottle_output) <del> Homebrew.install_bundler_gems! <del> require "rubocop-ast" <del> <del> ruby_version = Version.new(HOMEBREW_REQUIRED_RUBY_VERSION).major_minor.to_f <del> processed_source = RuboCop::AST::ProcessedSource.new(formula_contents, ruby_version) <del> root_node = processed_source.ast <del> <del> class_node = if root_node.class_type? <del> root_node <del> elsif root_node.begin_type? <del> root_node.children.find do |n| <del> n.class_type? && n.parent_class&.const_name == "Formula" <del> end <del> end <del> <del> odie "Could not find formula class!" if class_node.nil? <del> <del> body_node = class_node.body <del> odie "Formula class is empty!" if body_node.nil? <del> <del> node_before_bottle = if body_node.begin_type? <del> body_node.children.compact.reduce do |previous_child, current_child| <del> break previous_child unless component_before_bottle_block? current_child <del> <del> current_child <del> end <del> else <del> body_node <del> end <del> node_before_bottle = node_before_bottle.last_argument if node_before_bottle.send_type? 
<del> <del> expr_before_bottle = node_before_bottle.location.expression <del> processed_source.comments.each do |comment| <del> comment_expr = comment.location.expression <del> distance = comment_expr.first_line - expr_before_bottle.first_line <del> case distance <del> when 0 <del> if comment_expr.last_line > expr_before_bottle.last_line || <del> comment_expr.end_pos > expr_before_bottle.end_pos <del> expr_before_bottle = comment_expr <del> end <del> when 1 <del> expr_before_bottle = comment_expr <del> end <del> end <del> <del> tree_rewriter = Parser::Source::TreeRewriter.new(processed_source.buffer) <del> tree_rewriter.insert_after(expr_before_bottle, "\n\n#{bottle_output.chomp}") <del> formula_contents.replace(tree_rewriter.process) <del> end <del> <del> private <del> <del> def component_before_bottle_block?(node) <del> require "rubocops/components_order" <del> <del> RuboCop::Cop::FormulaAudit::ComponentsOrder::COMPONENT_PRECEDENCE_LIST.each do |components| <del> components.each do |component| <del> return false if component[:name] == :bottle && component[:type] == :block_call <del> <del> case component[:type] <del> when :method_call <del> return true if node.send_type? && node.method_name == component[:name] <del> when :block_call <del> return true if node.block_type? && node.method_name == component[:name] <del> end <del> end <del> end <del> false <add> Utils::AST.add_formula_stanza!(formula_contents, <add> name: :bottle, <add> type: :block_call, <add> text: "\n#{bottle_output.chomp}") <ide> end <ide> end <ide>
3
Go
Go
add nil ipam driver
ac1ec348ffcb30048d0d0141158493c29ca45afe
<ide><path>libnetwork/drivers.go <ide> import ( <ide> "github.com/docker/libnetwork/netlabel" <ide> <ide> builtinIpam "github.com/docker/libnetwork/ipams/builtin" <add> nullIpam "github.com/docker/libnetwork/ipams/null" <ide> remoteIpam "github.com/docker/libnetwork/ipams/remote" <ide> ) <ide> <ide> func initIpams(ic ipamapi.Callback, lDs, gDs interface{}) error { <ide> for _, fn := range [](func(ipamapi.Callback, interface{}, interface{}) error){ <ide> builtinIpam.Init, <ide> remoteIpam.Init, <add> nullIpam.Init, <ide> } { <ide> if err := fn(ic, lDs, gDs); err != nil { <ide> return err <ide><path>libnetwork/ipamapi/contract.go <ide> import ( <ide> const ( <ide> // DefaultIPAM is the name of the built-in default ipam driver <ide> DefaultIPAM = "default" <add> // NullIPAM is the name of the built-in null ipam driver <add> NullIPAM = "null" <ide> // PluginEndpointType represents the Endpoint Type used by Plugin system <ide> PluginEndpointType = "IpamDriver" <ide> // RequestAddressType represents the Address Type used when requesting an address <ide><path>libnetwork/ipams/null/null.go <add>// Package null implements the null ipam driver. 
Null ipam driver satisfies ipamapi contract, <add>// but does not effectively reserve/allocate any address pool or address <add>package null <add> <add>import ( <add> "fmt" <add> "net" <add> <add> "github.com/docker/libnetwork/discoverapi" <add> "github.com/docker/libnetwork/ipamapi" <add> "github.com/docker/libnetwork/types" <add>) <add> <add>var ( <add> defaultAS = "null" <add> defaultPool, _ = types.ParseCIDR("0.0.0.0/0") <add> defaultPoolID = fmt.Sprintf("%s/%s", defaultAS, defaultPool.String()) <add>) <add> <add>type allocator struct{} <add> <add>func (a *allocator) GetDefaultAddressSpaces() (string, string, error) { <add> return defaultAS, defaultAS, nil <add>} <add> <add>func (a *allocator) RequestPool(addressSpace, pool, subPool string, options map[string]string, v6 bool) (string, *net.IPNet, map[string]string, error) { <add> if addressSpace != defaultAS { <add> return "", nil, nil, types.BadRequestErrorf("unknown address space: %s", addressSpace) <add> } <add> if pool != "" { <add> return "", nil, nil, types.BadRequestErrorf("null ipam driver does not handle specific address pool requests") <add> } <add> if subPool != "" { <add> return "", nil, nil, types.BadRequestErrorf("null ipam driver does not handle specific address subpool requests") <add> } <add> if v6 { <add> return "", nil, nil, types.BadRequestErrorf("null ipam driver does not handle IPv6 address pool pool requests") <add> } <add> return defaultPoolID, defaultPool, nil, nil <add>} <add> <add>func (a *allocator) ReleasePool(poolID string) error { <add> return nil <add>} <add> <add>func (a *allocator) RequestAddress(poolID string, ip net.IP, opts map[string]string) (*net.IPNet, map[string]string, error) { <add> if poolID != defaultPoolID { <add> return nil, nil, types.BadRequestErrorf("unknown pool id: %s", poolID) <add> } <add> return nil, nil, nil <add>} <add> <add>func (a *allocator) ReleaseAddress(poolID string, ip net.IP) error { <add> if poolID != defaultPoolID { <add> return 
types.BadRequestErrorf("unknown pool id: %s", poolID) <add> } <add> return nil <add>} <add> <add>func (a *allocator) DiscoverNew(dType discoverapi.DiscoveryType, data interface{}) error { <add> return nil <add>} <add> <add>func (a *allocator) DiscoverDelete(dType discoverapi.DiscoveryType, data interface{}) error { <add> return nil <add>} <add> <add>// Init registers a remote ipam when its plugin is activated <add>func Init(ic ipamapi.Callback, l, g interface{}) error { <add> return ic.RegisterIpamDriver(ipamapi.NullIPAM, &allocator{}) <add>} <ide><path>libnetwork/ipams/null/null_test.go <add>package null <add> <add>import ( <add> "testing" <add> <add> _ "github.com/docker/libnetwork/testutils" <add> "github.com/docker/libnetwork/types" <add>) <add> <add>func TestPoolRequest(t *testing.T) { <add> a := allocator{} <add> <add> pid, pool, _, err := a.RequestPool(defaultAS, "", "", nil, false) <add> if err != nil { <add> t.Fatal(err) <add> } <add> if !types.CompareIPNet(defaultPool, pool) { <add> t.Fatalf("Unexpected pool returned. Expected %v. Got: %v", defaultPool, pool) <add> } <add> if pid != defaultPoolID { <add> t.Fatalf("Unexpected pool id returned. Expected: %s. 
Got: %s", defaultPoolID, pid) <add> } <add> <add> _, _, _, err = a.RequestPool("default", "", "", nil, false) <add> if err == nil { <add> t.Fatalf("Unexpected success") <add> } <add> <add> _, _, _, err = a.RequestPool(defaultAS, "192.168.0.0/16", "", nil, false) <add> if err == nil { <add> t.Fatalf("Unexpected success") <add> } <add> <add> _, _, _, err = a.RequestPool(defaultAS, "", "192.168.0.0/24", nil, false) <add> if err == nil { <add> t.Fatalf("Unexpected success") <add> } <add> <add> _, _, _, err = a.RequestPool(defaultAS, "", "", nil, true) <add> if err == nil { <add> t.Fatalf("Unexpected success") <add> } <add>} <add> <add>func TestOtherRequests(t *testing.T) { <add> a := allocator{} <add> <add> ip, _, err := a.RequestAddress(defaultPoolID, nil, nil) <add> if err != nil { <add> t.Fatal(err) <add> } <add> if ip != nil { <add> t.Fatalf("Unexpected address returned: %v", ip) <add> } <add> <add> _, _, err = a.RequestAddress("anypid", nil, nil) <add> if err == nil { <add> t.Fatalf("Unexpected success") <add> } <add> <add>}
4
Text
Text
fix some typos for plugin
6353f993d9c5e05ae5cc1c235de7d23cae7c5bd7
<ide><path>docs/reference/commandline/index.md <ide> read the [`dockerd`](dockerd.md) reference page. <ide> |:--------|:-------------------------------------------------------------------| <ide> | [plugin create](plugin_create.md) | Create a plugin from a rootfs and configuration | <ide> | [plugin disable](plugin_disable.md) | Disable a plugin | <del>| [plugin enbale](plugin_enable.md) | Enable a plugin | <add>| [plugin enable](plugin_enable.md) | Enable a plugin | <ide> | [plugin inspect](plugin_inspect.md) | Display detailed information on a plugin | <ide> | [plugin install](plugin_install.md) | Install a plugin | <ide> | [plugin ls](plugin_ls.md) | List plugins | <ide><path>docs/reference/commandline/plugin_create.md <ide> The following example shows how to create a sample `plugin`. <ide> ```bash <ide> $ ls -ls /home/pluginDir <ide> <add>total 4 <ide> 4 -rw-r--r-- 1 root root 431 Nov 7 01:40 config.json <ide> 0 drwxr-xr-x 19 root root 420 Nov 7 01:40 rootfs <ide> <ide><path>docs/reference/commandline/plugin_disable.md <ide> Options: <ide> <ide> Disables a plugin. The plugin must be installed before it can be disabled, <ide> see [`docker plugin install`](plugin_install.md). Without the `-f` option, <del>a plugin that has references (eg, volumes, networks) cannot be disabled. <add>a plugin that has references (e.g., volumes, networks) cannot be disabled. 
<ide> <ide> ## Examples <ide> <ide><path>docs/reference/commandline/plugin_inspect.md <ide> $ docker plugin inspect -f '{{.Id}}' tiborvass/sample-volume-plugin:latest <ide> 8c74c978c434745c3ade82f1bc0acf38d04990eaf494fa507c16d9f1daa99c21 <ide> ``` <ide> <del> <ide> ## Related commands <ide> <ide> * [plugin create](plugin_create.md) <ide><path>docs/reference/commandline/plugin_ls.md <ide> $ docker plugin ls --filter enabled=true <ide> NAME TAG DESCRIPTION ENABLED <ide> ``` <ide> <del> <ide> ### Formatting <ide> <ide> The formatting options (`--format`) pretty-prints plugins output <ide> $ docker plugin ls --format "{{.ID}}: {{.Name}}" <ide> 4be01827a72e: tiborvass/no-remove <ide> ``` <ide> <del> <ide> ## Related commands <ide> <ide> * [plugin create](plugin_create.md) <ide><path>docs/reference/commandline/plugin_push.md <ide> $ docker plugin ls <ide> <ide> ID NAME TAG DESCRIPTION ENABLED <ide> 69553ca1d456 user/plugin latest A sample plugin for Docker false <add> <ide> $ docker plugin push user/plugin <ide> ``` <ide> <ide><path>docs/reference/commandline/plugin_rm.md <ide> plugin: <ide> <ide> ```bash <ide> $ docker plugin disable tiborvass/sample-volume-plugin <add> <ide> tiborvass/sample-volume-plugin <ide> <ide> $ docker plugin rm tiborvass/sample-volume-plugin:latest <add> <ide> tiborvass/sample-volume-plugin <ide> ``` <ide> <ide><path>docs/reference/commandline/plugin_set.md <ide> the `myplugin` plugin. <ide> <ide> ```bash <ide> $ docker plugin inspect -f '{{with $device := index .Settings.Devices 0}}{{$device.Path}}{{end}}' myplugin <add> <ide> /dev/foo <ide> <ide> $ docker plugins set myplugin mydevice.path=/dev/bar <ide> <ide> $ docker plugin inspect -f '{{with $device := index .Settings.Devices 0}}{{$device.Path}}{{end}}' myplugin <add> <ide> /dev/bar <ide> ``` <ide> <ide> The following example change the value of the args on the `myplugin` plugin. 
<ide> <ide> ```bash <ide> $ docker plugin inspect -f '{{.Settings.Args}}' myplugin <add> <ide> ["foo", "bar"] <ide> <ide> $ docker plugins set myplugin myargs="foo bar baz" <ide> <ide> $ docker plugin inspect -f '{{.Settings.Args}}' myplugin <add> <ide> ["foo", "bar", "baz"] <ide> ``` <ide> <ide> $ docker plugin inspect -f '{{.Settings.Args}}' myplugin <ide> * [plugin ls](plugin_ls.md) <ide> * [plugin push](plugin_push.md) <ide> * [plugin rm](plugin_rm.md) <add>* [plugin upgrade](plugin_upgrade.md) <ide><path>docs/reference/commandline/plugin_upgrade.md <ide> Do you grant the above permissions? [y/N] y <ide> vieux/sshfs:next <ide> <ide> $ docker volume create -d vieux/sshfs:next -o sshcmd=root@1.2.3.4:/tmp/shared -o password=XXX sshvolume <add> <ide> sshvolume <add> <ide> $ docker run -it -v sshvolume:/data alpine sh -c "touch /data/hello" <add> <ide> $ docker plugin disable -f vieux/sshfs:next <add> <ide> viex/sshfs:next <ide> <ide> # Here docker volume ls doesn't show 'sshfsvolume', since the plugin is disabled <ide> $ docker volume ls <add> <ide> DRIVER VOLUME NAME <ide> <ide> $ docker plugin upgrade vieux/sshfs:next vieux/sshfs:next <add> <ide> Plugin "vieux/sshfs:next" is requesting the following privileges: <ide> - network: [host] <ide> - device: [/dev/fuse] <ide> - capabilities: [CAP_SYS_ADMIN] <ide> Do you grant the above permissions? [y/N] y <ide> Upgrade plugin vieux/sshfs:next to vieux/sshfs:next <add> <ide> $ docker plugin enable vieux/sshfs:next <add> <ide> viex/sshfs:next <add> <ide> $ docker volume ls <add> <ide> DRIVER VOLUME NAME <ide> viuex/sshfs:next sshvolume <add> <ide> $ docker run -it -v sshvolume:/data alpine sh -c "ls /data" <add> <ide> hello <ide> ``` <ide>
9
PHP
PHP
add tests to cover
f67d786311c12331a0fccf8cb5695ff0a51ecef3
<ide><path>tests/Fixture/TranslatesFixture.php <ide> class TranslatesFixture extends TestFixture <ide> ['locale' => 'eng', 'model' => 'Authors', 'foreign_key' => 1, 'field' => 'name', 'content' => 'May-rianoh'], <ide> ['locale' => 'dan', 'model' => 'NumberTrees', 'foreign_key' => 1, 'field' => 'name', 'content' => 'Elektroniker'], <ide> ['locale' => 'dan', 'model' => 'NumberTrees', 'foreign_key' => 11, 'field' => 'name', 'content' => 'Alien Tingerne'], <add> ['locale' => 'eng', 'model' => 'SpecialTags', 'foreign_key' => 2, 'field' => 'highlighted_time', 'content' => 'Translated Time'], <ide> ]; <ide> } <ide><path>tests/TestCase/ORM/Behavior/TranslateBehaviorTest.php <ide> class TranslateBehaviorTest extends TestCase <ide> public $fixtures = [ <ide> 'core.articles', <ide> 'core.authors', <add> 'core.special_tags', <add> 'core.tags', <ide> 'core.comments', <ide> 'core.translates' <ide> ]; <ide> public function testFindSingleLocaleBelongsto() <ide> $this->assertEquals($expected, $results); <ide> } <ide> <add> /** <add> * Tests that it is possible to translate belongsToMany associations <add> * <add> * @return void <add> */ <add> public function testFindSingleLocaleBelongsToMany() <add> { <add> $table = TableRegistry::get('Articles'); <add> $specialTags = TableRegistry::get('SpecialTags'); <add> $specialTags->addBehavior('Translate', ['fields' => ['highlighted_time']]); <add> <add> $table->belongsToMany('Tags', [ <add> 'through' => $specialTags <add> ]); <add> $specialTags->locale('eng'); <add> <add> $result = $table->get(2, ['contain' => 'Tags']); <add> $this->assertNotEmpty($result); <add> $this->assertNotEmpty($result->tags); <add> $this->assertEquals('Translated Time', $result->tags[0]->_joinData->highlighted_time); <add> } <add> <ide> /** <ide> * Tests that updating an existing record translations work <ide> * <ide><path>tests/TestCase/ORM/QueryRegressionTest.php <ide> public function testComplexNestedTypesInJoinedWhere() <ide> 
$this->assertInstanceOf('Cake\I18n\Time', $result->comment->article->author->updated); <ide> } <ide> <add> /** <add> * Tests that it is possible to contain to fetch <add> * associations off of a junction table. <add> * <add> * @return void <add> */ <add> public function testBelongsToManyJoinDataAssociation() <add> { <add> $articles = TableRegistry::get('Articles'); <add> <add> $tags = TableRegistry::get('Tags'); <add> $tags->hasMany('SpecialTags'); <add> <add> $specialTags = TableRegistry::get('SpecialTags'); <add> $specialTags->belongsTo('Authors'); <add> $specialTags->belongsTo('Articles'); <add> $specialTags->belongsTo('Tags'); <add> <add> $articles->belongsToMany('Tags', [ <add> 'through' => $specialTags <add> ]); <add> $query = $articles->find() <add> ->contain(['Tags', 'Tags.SpecialTags.Authors']) <add> ->where(['Articles.id' => 1]); <add> $result = $query->first(); <add> $this->assertNotEmpty($result->tags, 'Missing tags'); <add> $this->assertNotEmpty($result->tags[0], 'Missing first tag'); <add> $this->assertNotEmpty($result->tags[0]->_joinData, 'Missing _joinData'); <add> $this->assertNotEmpty($result->tags[0]->_joinData->author, 'Missing author on _joinData'); <add> } <add> <ide> /** <ide> * Tests that it is possible to use matching with dot notation <ide> * even when part of the part of the path in the dot notation is
3
Ruby
Ruby
remove extra whitespaces
24724288812c8c8892536d0519c019d8a3aea8d1
<ide><path>activeresource/lib/active_resource/http_mock.rb <ide> class InvalidRequestError < StandardError; end #:nodoc: <ide> # requests. <ide> # <ide> # To test your Active Resource model, you simply call the ActiveResource::HttpMock.respond_to <del> # method with an attached block. The block declares a set of URIs with expected input, and the output <del> # each request should return. The passed in block has any number of entries in the following generalized <add> # method with an attached block. The block declares a set of URIs with expected input, and the output <add> # each request should return. The passed in block has any number of entries in the following generalized <ide> # format: <ide> # <ide> # mock.http_method(path, request_headers = {}, body = nil, status = 200, response_headers = {}) <ide> class InvalidRequestError < StandardError; end #:nodoc: <ide> # <tt>request_headers</tt> listed above. <ide> # <ide> # In order for a mock to deliver its content, the incoming request must match by the <tt>http_method</tt>, <del> # +path+ and <tt>request_headers</tt>. If no match is found an InvalidRequestError exception <add> # +path+ and <tt>request_headers</tt>. If no match is found an +InvalidRequestError+ exception <ide> # will be raised showing you what request it could not find a response for and also what requests and response <ide> # pairs have been recorded so you can create a new mock for that request. <ide> # <ide> def delete_duplicate_responses(request) <ide> <ide> class << self <ide> <del> # Returns an array of all request objects that have been sent to the mock. You can use this to check <add> # Returns an array of all request objects that have been sent to the mock. You can use this to check <ide> # if your model actually sent an HTTP request. <ide> # <ide> # ==== Example <ide> def requests <ide> end <ide> <ide> # Returns the list of requests and their mocked responses. Look up a <del> # response for a request using responses.assoc(request). 
<add> # response for a request using <tt>responses.assoc(request)</tt>. <ide> def responses <ide> @@responses ||= [] <ide> end
1
Java
Java
provide dedicated @componentscan processing
856da7edb984cd8ad5643a376e536f40e06d8faa
<ide><path>org.springframework.context/src/main/java/org/springframework/context/annotation/ComponentScanAnnotationParser.java <add>/* <add> * Copyright 2002-2011 the original author or authors. <add> * <add> * Licensed under the Apache License, Version 2.0 (the "License"); <add> * you may not use this file except in compliance with the License. <add> * You may obtain a copy of the License at <add> * <add> * http://www.apache.org/licenses/LICENSE-2.0 <add> * <add> * Unless required by applicable law or agreed to in writing, software <add> * distributed under the License is distributed on an "AS IS" BASIS, <add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <add> * See the License for the specific language governing permissions and <add> * limitations under the License. <add> */ <add> <add>package org.springframework.context.annotation; <add> <add>import java.lang.annotation.Annotation; <add>import java.util.ArrayList; <add>import java.util.List; <add>import java.util.Map; <add> <add>import org.springframework.beans.BeanUtils; <add>import org.springframework.beans.factory.support.BeanDefinitionRegistry; <add>import org.springframework.beans.factory.support.BeanNameGenerator; <add>import org.springframework.context.annotation.ComponentScan.Filter; <add>import org.springframework.core.env.Environment; <add>import org.springframework.core.io.ResourceLoader; <add>import org.springframework.core.type.AnnotationMetadata; <add>import org.springframework.core.type.filter.AnnotationTypeFilter; <add>import org.springframework.core.type.filter.AssignableTypeFilter; <add>import org.springframework.core.type.filter.TypeFilter; <add>import org.springframework.util.Assert; <add>import org.springframework.util.StringUtils; <add> <add>/** <add> * Parser for the @{@link ComponentScan} annotation. <add> * <add> * @author Chris Beams <add> * @since 3.1 <add> * @see ClassPathBeanDefinitionScanner#scan(String...) 
<add> * @see ComponentScanBeanDefinitionParser <add> */ <add>class ComponentScanAnnotationParser { <add> <add> private final ResourceLoader resourceLoader; <add> private final Environment environment; <add> private final BeanDefinitionRegistry registry; <add> <add> public ComponentScanAnnotationParser(ResourceLoader resourceLoader, Environment environment, BeanDefinitionRegistry registry) { <add> this.resourceLoader = resourceLoader; <add> this.environment = environment; <add> this.registry = registry; <add> } <add> <add> public void parse(AnnotationMetadata annotationMetadata) { <add> Map<String, Object> attribs = annotationMetadata.getAnnotationAttributes(ComponentScan.class.getName()); <add> if (attribs == null) { <add> // @ComponentScan annotation is not present -> do nothing <add> return; <add> } <add> <add> ClassPathBeanDefinitionScanner scanner = <add> new ClassPathBeanDefinitionScanner(registry, (Boolean)attribs.get("useDefaultFilters")); <add> <add> Assert.notNull(this.environment, "Environment must not be null"); <add> scanner.setEnvironment(this.environment); <add> <add> Assert.notNull(this.resourceLoader, "ResourceLoader must not be null"); <add> scanner.setResourceLoader(this.resourceLoader); <add> <add> scanner.setBeanNameGenerator(BeanUtils.instantiateClass( <add> (Class<?>)attribs.get("nameGenerator"), BeanNameGenerator.class)); <add> <add> ScopedProxyMode scopedProxyMode = (ScopedProxyMode) attribs.get("scopedProxy"); <add> if (scopedProxyMode != ScopedProxyMode.DEFAULT) { <add> scanner.setScopedProxyMode(scopedProxyMode); <add> } else { <add> scanner.setScopeMetadataResolver(BeanUtils.instantiateClass( <add> (Class<?>)attribs.get("scopeResolver"), ScopeMetadataResolver.class)); <add> } <add> <add> scanner.setResourcePattern((String)attribs.get("resourcePattern")); <add> <add> for (Filter filter : (Filter[])attribs.get("includeFilters")) { <add> scanner.addIncludeFilter(createTypeFilter(filter)); <add> } <add> for (Filter filter : 
(Filter[])attribs.get("excludeFilters")) { <add> scanner.addExcludeFilter(createTypeFilter(filter)); <add> } <add> <add> List<String> basePackages = new ArrayList<String>(); <add> for (String pkg : (String[])attribs.get("value")) { <add> if (StringUtils.hasText(pkg)) { <add> basePackages.add(pkg); <add> } <add> } <add> for (String pkg : (String[])attribs.get("basePackages")) { <add> if (StringUtils.hasText(pkg)) { <add> basePackages.add(pkg); <add> } <add> } <add> for (Class<?> clazz : (Class<?>[])attribs.get("basePackageClasses")) { <add> // TODO: loading user types directly here. implications on load-time <add> // weaving may mean we need to revert to stringified class names in <add> // annotation metadata <add> basePackages.add(clazz.getPackage().getName()); <add> } <add> <add> if (basePackages.isEmpty()) { <add> throw new IllegalStateException("At least one base package must be specified"); <add> } <add> <add> scanner.scan(basePackages.toArray(new String[]{})); <add> } <add> <add> private TypeFilter createTypeFilter(Filter filter) { <add> switch (filter.type()) { <add> case ANNOTATION: <add> @SuppressWarnings("unchecked") <add> Class<Annotation> filterClass = (Class<Annotation>)filter.value(); <add> return new AnnotationTypeFilter(filterClass); <add> case ASSIGNABLE_TYPE: <add> return new AssignableTypeFilter(filter.value()); <add> case CUSTOM: <add> return BeanUtils.instantiateClass(filter.value(), TypeFilter.class); <add> default: <add> throw new IllegalArgumentException("unknown filter type " + filter.type()); <add> } <add> } <add>} <ide><path>org.springframework.context/src/main/java/org/springframework/context/annotation/ConfigurationClassBeanDefinitionReader.java <ide> public class ConfigurationClassBeanDefinitionReader { <ide> <ide> private Environment environment; <ide> <add> private final ComponentScanAnnotationParser componentScanParser; <add> <ide> /** <ide> * Create a new {@link ConfigurationClassBeanDefinitionReader} instance that will be used 
<ide> * to populate the given {@link BeanDefinitionRegistry}. <ide> public ConfigurationClassBeanDefinitionReader(final BeanDefinitionRegistry regis <ide> this.metadataReaderFactory = metadataReaderFactory; <ide> this.resourceLoader = resourceLoader; <ide> this.environment = environment; <add> <add> this.componentScanParser = new ComponentScanAnnotationParser(resourceLoader, environment, registry); <ide> } <ide> <ide> <ide> public void loadBeanDefinitions(Set<ConfigurationClass> configurationModel) { <ide> */ <ide> private void loadBeanDefinitionsForConfigurationClass(ConfigurationClass configClass) { <ide> AnnotationMetadata metadata = configClass.getMetadata(); <add> componentScanParser.parse(metadata); <ide> doLoadBeanDefinitionForConfigurationClassIfNecessary(configClass); <ide> for (BeanMethod beanMethod : configClass.getBeanMethods()) { <ide> loadBeanDefinitionsForBeanMethod(beanMethod); <ide><path>org.springframework.context/src/test/java/org/springframework/context/annotation/ComponentScanAnnotationIntegrationTests.java <ide> import org.springframework.beans.factory.annotation.CustomAutowireConfigurer; <ide> import org.springframework.beans.factory.config.BeanDefinition; <ide> import org.springframework.beans.factory.config.SimpleMapScope; <del>import org.springframework.beans.factory.parsing.BeanDefinitionParsingException; <ide> import org.springframework.beans.factory.support.BeanDefinitionRegistry; <ide> import org.springframework.beans.factory.support.DefaultListableBeanFactory; <ide> import org.springframework.context.annotation.ComponentScan.Filter; <ide> public void invalidComponentScanDeclaration_noPackagesSpecified() { <ide> try { <ide> ctx.refresh(); <ide> fail("Expected exception when parsing @ComponentScan definition that declares no packages"); <del> } catch (BeanDefinitionParsingException ex) { <add> } catch (IllegalStateException ex) { <ide> assertThat(ex.getMessage(), containsString("At least one base package must be specified")); <ide> } 
<ide> }
3
Python
Python
fix cli for python 2
1a53fcc685f9cd1584339105e50514a7520bf08c
<ide><path>spacy/__main__.py <ide> # coding: utf8 <del>from __future__ import unicode_literals, print_function <add># <add>from __future__ import print_function <add># NB! This breaks in plac on Python 2!! <add>#from __future__ import unicode_literals, <ide> <ide> import plac <ide> from spacy.cli import download as cli_download
1
Mixed
Javascript
assign deprecation codes
82a73470506111ecc6361b9e0b0bb01f6377a531
<ide><path>doc/api/deprecations.md <ide> Assigning properties to the top-level `this` as an alternative <ide> to `module.exports` is deprecated. Developers should use `exports` <ide> or `module.exports` instead. <ide> <del>### DEP00XX: crypto.fips is deprecated and replaced. <add><a id="DEP0093"></a> <add>### DEP0093: crypto.fips is deprecated and replaced. <ide> <ide> Type: Documentation-only <ide> <ide> The [`crypto.fips`][] property is deprecated. Please use `crypto.setFips()` <ide> and `crypto.getFips()` instead. <ide> <del><a id="DEP0XX"></a> <del>### DEP0XXX: Using `assert.fail()` with more than one argument. <add><a id="DEP0094"></a> <add>### DEP0094: Using `assert.fail()` with more than one argument. <ide> <ide> Type: Runtime <ide> <ide> Using `assert.fail()` with more than one argument has no benefit over writing an <ide> individual error message. Either use `assert.fail()` with one argument or switch <ide> to one of the other assert methods. <ide> <del><a id="DEP00XX"></a> <del>### DEP00XX: timers.enroll() <add><a id="DEP0095"></a> <add>### DEP0095: timers.enroll() <ide> <ide> Type: Runtime <ide> <ide> `timers.enroll()` is deprecated. Please use the publicly documented [`setTimeout()`][] or [`setInterval()`][] instead. <ide> <del><a id="DEP00XX"></a> <del>### DEP00XX: timers.unenroll() <add><a id="DEP0096"></a> <add>### DEP0096: timers.unenroll() <ide> <ide> Type: Runtime <ide> <ide><path>lib/assert.js <ide> function fail(actual, expected, message, operator, stackStartFn) { <ide> 'assert.fail() with more than one argument is deprecated. ' + <ide> 'Please use assert.strictEqual() instead or only pass a message.', <ide> 'DeprecationWarning', <del> 'DEP00XXX' <add> 'DEP0094' <ide> ); <ide> } <ide> if (argsLen === 2) <ide><path>lib/timers.js <ide> function unenroll(item) { <ide> exports.unenroll = util.deprecate(unenroll, <ide> 'timers.unenroll() is deprecated. 
' + <ide> 'Please use clearTimeout instead.', <del> 'DEP00XX'); <add> 'DEP0096'); <ide> <ide> <ide> // Make a regular object able to act as a timer by setting some properties. <ide> function enroll(item, msecs) { <ide> } <ide> <ide> exports.enroll = util.deprecate(enroll, <del> 'timers.unenroll() is deprecated. ' + <add> 'timers.enroll() is deprecated. ' + <ide> 'Please use clearTimeout instead.', <del> 'DEP00XX'); <add> 'DEP0095'); <ide> <ide> <ide> /* <ide><path>test/parallel/test-timers-max-duration-warning.js <ide> process.on('warning', common.mustCall((warning) => { <ide> assert.strictEqual(lines[0], `${OVERFLOW} does not fit into a 32-bit signed` + <ide> ' integer.'); <ide> assert.strictEqual(lines.length, 2); <del>}, 4)); <add>}, 5)); <ide> <ide> <ide> {
4
Ruby
Ruby
give a better error message for misspelled helpers
8d7cf75684d5e76ef635f92125a51cb4c1c0fd3b
<ide><path>actionpack/lib/abstract_controller/helpers.rb <ide> def modules_for_helpers(args) <ide> rescue LoadError => e <ide> raise AbstractController::Helpers::MissingHelperError.new(e, file_name) <ide> end <del> file_name.camelize.constantize <add> <add> mod_name = file_name.camelize <add> begin <add> mod_name.constantize <add> rescue LoadError <add> # dependencies.rb gives a similar error message but its wording is <add> # not as clear because it mentions autoloading. To the user all it <add> # matters is that a helper module couldn't be loaded, autoloading <add> # is an internal mechanism that should not leak. <add> raise NameError, "Couldn't find #{mod_name}, expected it to be defined in helpers/#{file_name}.rb" <add> end <ide> when Module <ide> arg <ide> else <ide><path>actionpack/test/controller/helper_test.rb <ide> def index <ide> end <ide> end <ide> <add>class HelpersTypoController < ActionController::Base <add> path = File.expand_path('../../fixtures/helpers_typo', __FILE__) <add> $:.unshift(path) <add> self.helpers_path = path <add>end <add> <ide> module LocalAbcHelper <ide> def a() end <ide> def b() end <ide> def test_helpers_paths_priority <ide> end <ide> end <ide> <add>class HelpersTypoControllerTest < ActiveSupport::TestCase <add> def setup <add> @autoload_paths = ActiveSupport::Dependencies.autoload_paths <add> ActiveSupport::Dependencies.autoload_paths = Array(HelpersTypoController.helpers_path) <add> end <add> <add> def test_helper_typo_error_message <add> e = assert_raise(NameError) { HelpersTypoController.helper 'admin/users' } <add> assert_equal "Couldn't find Admin::UsersHelper, expected it to be defined in helpers/admin/users_helper.rb", e.message <add> end <add> <add> def teardown <add> ActiveSupport::Dependencies.autoload_paths = @autoload_paths <add> end <add>end <add> <ide> class HelperTest < ActiveSupport::TestCase <ide> class TestController < ActionController::Base <ide> attr_accessor :delegate_attr 
<ide><path>actionpack/test/fixtures/helpers_typo/admin/users_helper.rb <add>module Admin <add> module UsersHelpeR <add> end <add>end <add>
3
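The Rails patch above wraps `constantize` so that an autoload failure surfaces as a plain "Couldn't find …Helper" message instead of leaking autoloading internals. A framework-free sketch of the same rescue-and-rewrap pattern — the inline `camelize` here is a simplified stand-in, not the ActiveSupport implementation:

```ruby
# Rescue-and-rewrap sketch: resolve a helper module from a file name and
# raise a clearer NameError when the constant is missing.
def resolve_helper(file_name)
  # Simplified camelize: "admin/users_helper" -> "Admin::UsersHelper"
  mod_name = file_name.split('/').map { |part|
    part.split('_').map(&:capitalize).join
  }.join('::')
  begin
    Object.const_get(mod_name)
  rescue NameError
    # Re-raise with a message that names the expected file location.
    raise NameError, "Couldn't find #{mod_name}, expected it to be defined in helpers/#{file_name}.rb"
  end
end
```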
PHP
PHP
remove elements constant
08316c529273fc8a8ce3e4d38261fc10c9b03673
<ide><path>lib/Cake/bootstrap.php <ide> */ <ide> define('APPLIBS', APP.'Lib'.DS); <ide> <del>/** <del> * Path to the application's view's elements directory. <del> * It's supposed to hold pieces of PHP/HTML that are used on multiple pages <del> * and are not linked to a particular layout (like polls, footers and so on). <del> */ <del> define('ELEMENTS', VIEWS.'Elements'.DS); <del> <ide> /** <ide> * Path to the configuration files directory. <ide> */
1
Python
Python
add inline keyword check
2626ceabbc90c7e30aa4b86d91aa8d8a3bb0863e
<ide><path>numpy/core/scons_support.py <del>#! Last Change: Wed Jul 30 02:00 PM 2008 J <add>#! Last Change: Fri Mar 13 01:00 PM 2009 J <ide> <ide> """Code to support special facilities to scons which are only useful for <ide> numpy.core, hence not put into numpy.distutils.scons""" <ide> def define_no_smp(): <ide> nosmp = 0 <ide> return nosmp == 1 <ide> <add># Inline check <add>def CheckInline(context): <add> context.Message("Checking for inline keyword... ") <add> body = """ <add>#ifndef __cplusplus <add>static %(inline)s int static_func (void) <add>{ <add> return 0; <add>} <add>%(inline)s int nostatic_func (void) <add>{ <add> return 0; <add>} <add>#endif""" <add> inline = None <add> for kw in ['inline', '__inline__', '__inline']: <add> st = context.TryCompile(body % {'inline': kw}, '.c') <add> if st: <add> inline = kw <add> break <add> <add> if inline: <add> context.Result(inline) <add> else: <add> context.Result(0) <add> return inline <add> <ide> array_api_gen_bld = Builder(action = Action(do_generate_numpy_api, '$ARRAPIGENCOMSTR'), <ide> emitter = generate_api_emitter) <ide>
1
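The scons check above detects the compiler's `inline` spelling by test-compiling the same C snippet with each candidate keyword and keeping the first that succeeds. The search loop itself is independent of scons; a minimal sketch with the compile step injected as a callable so it can be exercised without a real compiler:

```python
def find_inline_keyword(try_compile):
    """Return the first 'inline' spelling accepted by try_compile, else None.

    try_compile(source) should attempt a C compilation and return truthiness;
    it is a parameter here so the search loop can be tested with a fake.
    """
    body = """
#ifndef __cplusplus
static %(inline)s int static_func (void)
{
    return 0;
}
%(inline)s int nostatic_func (void)
{
    return 0;
}
#endif"""
    for kw in ('inline', '__inline__', '__inline'):
        if try_compile(body % {'inline': kw}):
            return kw
    return None
```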
Python
Python
fix pydocstyle warning
2e6c84327f23747bad5345179d82237827e2fd8b
<ide><path>celery/app/task.py <ide> def retry(self, args=None, kwargs=None, exc=None, throw=True, <ide> **options (Any): Extra options to pass on to :meth:`apply_async`. <ide> <ide> Raises: <add> <ide> celery.exceptions.Retry: <ide> To tell the worker that the task has been re-sent for retry. <ide> This always happens, unless the `throw` keyword argument
1
Ruby
Ruby
convert string test to spec
81d105d9a4cbdba1072c3828d173dcaac4bcc2a6
<ide><path>Library/Homebrew/test/string_spec.rb <add>require "extend/string" <add> <add>describe String do <add> describe "#undent" do <add> it "removes leading whitespace, taking the first line as reference" do <add> string = <<-EOS.undent <add> hi <add>........my friend over <add> there <add> EOS <add> <add> expect(string).to eq("hi\n........my friend over\n there\n") <add> end <add> <add> it "removes nothing if the text is not indented" do <add> string = <<-EOS.undent <add>hi <add>I'm not indented <add> EOS <add> <add> expect(string).to eq("hi\nI'm not indented\n") <add> end <add> <add> it "can be nested" do <add> nested_string = <<-EOS.undent <add> goodbye <add> EOS <add> <add> string = <<-EOS.undent <add> hello <add> #{nested_string} <add> EOS <add> <add> expect(string).to eq("hello\ngoodbye\n\n") <add> end <add> end <add>end <add> <add>describe StringInreplaceExtension do <add> subject { string.extend(described_class) } <add> let(:string) { "foobar" } <add> <add> describe "#sub!" do <add> it "adds an error to #errors when no replacement was made" do <add> subject.sub! 
"not here", "test" <add> expect(subject.errors).to eq(['expected replacement of "not here" with "test"']) <add> end <add> end <add>end <ide><path>Library/Homebrew/test/string_test.rb <del>require "testing_env" <del>require "extend/string" <del> <del>class StringTest < Homebrew::TestCase <del> def test_undent <del> undented = <<-EOS.undent <del> hi <del>....my friend over <del> there <del> EOS <del> assert_equal "hi\n....my friend over\nthere\n", undented <del> end <del> <del> def test_undent_not_indented <del> undented = <<-EOS.undent <del>hi <del>I'm not indented <del> EOS <del> assert_equal "hi\nI'm not indented\n", undented <del> end <del> <del> def test_undent_nested <del> nest = <<-EOS.undent <del> goodbye <del> EOS <del> <del> undented = <<-EOS.undent <del> hello <del> #{nest} <del> EOS <del> <del> assert_equal "hello\ngoodbye\n\n", undented <del> end <del> <del> def test_inreplace_sub_failure <del> s = "foobar".extend StringInreplaceExtension <del> s.sub! "not here", "test" <del> assert_equal ['expected replacement of "not here" with "test"'], s.errors <del> end <del>end
2
Javascript
Javascript
emit error on domain unhandled rejections
cd3134029ebda16d37dcf6a10093bba9dff196e9
<ide><path>lib/internal/process/promises.js <ide> function unhandledRejection(promise, reason) { <ide> maybeUnhandledPromises.set(promise, { <ide> reason, <ide> uid: ++lastPromiseId, <del> warned: false <add> warned: false, <add> domain: process.domain <ide> }); <ide> // This causes the promise to be referenced at least for one tick. <ide> pendingUnhandledRejections.push(promise); <ide> function processPromiseRejections() { <ide> } <ide> promiseInfo.warned = true; <ide> const { reason, uid } = promiseInfo; <add> function emit(reason, promise, promiseInfo) { <add> if (promiseInfo.domain) { <add> return promiseInfo.domain.emit('error', reason); <add> } <add> return process.emit('unhandledRejection', reason, promise); <add> } <ide> switch (unhandledRejectionsMode) { <ide> case kStrictUnhandledRejections: { <ide> const err = reason instanceof Error ? <ide> reason : generateUnhandledRejectionError(reason); <ide> triggerUncaughtException(err, true /* fromPromise */); <del> const handled = process.emit('unhandledRejection', reason, promise); <add> const handled = emit(reason, promise, promiseInfo); <ide> if (!handled) emitUnhandledRejectionWarning(uid, reason); <ide> break; <ide> } <ide> case kIgnoreUnhandledRejections: { <del> process.emit('unhandledRejection', reason, promise); <add> emit(reason, promise, promiseInfo); <ide> break; <ide> } <ide> case kAlwaysWarnUnhandledRejections: { <del> process.emit('unhandledRejection', reason, promise); <add> emit(reason, promise, promiseInfo); <ide> emitUnhandledRejectionWarning(uid, reason); <ide> break; <ide> } <ide> case kThrowUnhandledRejections: { <del> const handled = process.emit('unhandledRejection', reason, promise); <add> const handled = emit(reason, promise, promiseInfo); <ide> if (!handled) { <ide> const err = reason instanceof Error ? 
<ide> reason : generateUnhandledRejectionError(reason); <ide> function processPromiseRejections() { <ide> break; <ide> } <ide> case kWarnWithErrorCodeUnhandledRejections: { <del> const handled = process.emit('unhandledRejection', reason, promise); <add> const handled = emit(reason, promise, promiseInfo); <ide> if (!handled) { <ide> emitUnhandledRejectionWarning(uid, reason); <ide> process.exitCode = 1; <ide> function generateUnhandledRejectionError(reason) { <ide> function listenForRejections() { <ide> setPromiseRejectCallback(promiseRejectHandler); <ide> } <del> <ide> module.exports = { <ide> hasRejectionToWarn, <ide> setHasRejectionToWarn, <ide> listenForRejections, <del> processPromiseRejections <add> processPromiseRejections, <ide> }; <ide><path>test/parallel/test-domain-promise.js <ide> process.on('warning', common.mustNotCall()); <ide> })); <ide> })); <ide> } <add>{ <add> // Unhandled rejections become errors on the domain <add> const d = domain.create(); <add> d.on('error', common.mustCall((e) => { <add> assert.strictEqual(e.message, 'foo'); <add> })); <add> d.run(common.mustCall(() => { <add> Promise.reject(new Error('foo')); <add> })); <add>} <ide><path>test/parallel/test-promises-unhandled-rejections.js <ide> 'use strict'; <ide> const common = require('../common'); <ide> const assert = require('assert'); <del>const domain = require('domain'); <ide> const { inspect } = require('util'); <ide> <ide> common.disableCrashOnUnhandledRejection(); <ide> asyncTest('setImmediate + promise microtasks is too late to attach a catch' + <ide> }); <ide> }); <ide> <del>asyncTest( <del> 'Promise unhandledRejection handler does not interfere with domain' + <del> ' error handlers being given exceptions thrown from nextTick.', <del> function(done) { <del> const d = domain.create(); <del> let domainReceivedError; <del> d.on('error', function(e) { <del> domainReceivedError = e; <del> }); <del> d.run(function() { <del> const e = new Error('error'); <del> const domainError = new 
Error('domain error'); <del> onUnhandledSucceed(done, function(reason, promise) { <del> assert.strictEqual(reason, e); <del> assert.strictEqual(domainReceivedError, domainError); <del> }); <del> Promise.reject(e); <del> process.nextTick(function() { <del> throw domainError; <del> }); <del> }); <del> } <del>); <del> <ide> asyncTest('nextTick is immediately scheduled when called inside an event' + <ide> ' handler', function(done) { <ide> clean();
3
Text
Text
update changelog for io.js v3.3.0
a06e11d9d334d1bf67d39ee961fcb5a46c28ec39
<ide><path>CHANGELOG.md <ide> # Node.js ChangeLog <ide> <add>## 2015-09-02, Version 3.3.0, @rvagg <add> <add>### Notable changes <add> <add>* **build**: Add a `--link-module` option to `configure` that can be used to bundle additional JavaScript modules into a built binary (Bradley Meck) [#2497](https://github.com/nodejs/node/pull/2497) <add>* **docs**: Merge outstanding doc updates from joyent/node (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* **http_parser**: Significant performance improvement by having `http.Server` consume all initial data from its `net.Socket` and parsing directly without having to enter JavaScript. Any `'data'` listeners on the `net.Socket` will result in the data being "unconsumed" into JavaScript, thereby undoing any performance gains. (Fedor Indutny) [#2355](https://github.com/nodejs/node/pull/2355) <add>* **libuv**: Upgrade to 1.7.3 (from 1.6.1), see [ChangeLog](https://github.com/libuv/libuv/blob/v1.x/ChangeLog) for details (Saúl Ibarra Corretgé) [#2310](https://github.com/nodejs/node/pull/2310) <add>* **V8**: Upgrade to 4.4.63.30 (from 4.4.63.26) (Michaël Zasso) [#2482](https://github.com/nodejs/node/pull/2482) <add> <add>### Known issues <add> <add>See https://github.com/nodejs/io.js/labels/confirmed-bug for complete and current list of known issues. <add> <add>* Some uses of computed object shorthand properties are not handled correctly by the current version of V8. e.g. `[{ [prop]: val }]` evaluates to `[{}]`. [#2507](https://github.com/nodejs/node/issues/2507) <add>* Some problems with unreferenced timers running during `beforeExit` are still to be resolved. See [#1264](https://github.com/nodejs/io.js/issues/1264). <add>* Surrogate pair in REPL can freeze terminal. [#690](https://github.com/nodejs/io.js/issues/690) <add>* `process.send()` is not synchronous as the docs suggest, a regression introduced in 1.0.2, see [#760](https://github.com/nodejs/io.js/issues/760). 
<add>* Calling `dns.setServers()` while a DNS query is in progress can cause the process to crash on a failed assertion. [#894](https://github.com/nodejs/io.js/issues/894) <add>* `url.resolve` may transfer the auth portion of the url when resolving between two full hosts, see [#1435](https://github.com/nodejs/io.js/issues/1435). <add> <add>### Commits <add> <add>* [[`1a531b4e44`](https://github.com/nodejs/node/commit/1a531b4e44)] - **(SEMVER-MINOR)** Introduce --link-module to ./configure (Bradley Meck) [#2497](https://github.com/nodejs/node/pull/2497) <add>* [[`d2f314c190`](https://github.com/nodejs/node/commit/d2f314c190)] - **build**: fix borked chmod call for release uploads (Rod Vagg) [#2645](https://github.com/nodejs/node/pull/2645) <add>* [[`3172e9c541`](https://github.com/nodejs/node/commit/3172e9c541)] - **build**: set file permissions before uploading (Rod Vagg) [#2623](https://github.com/nodejs/node/pull/2623) <add>* [[`a860d7fae1`](https://github.com/nodejs/node/commit/a860d7fae1)] - **build**: change staging directory on new server (Rod Vagg) [#2623](https://github.com/nodejs/node/pull/2623) <add>* [[`50c0baa8d7`](https://github.com/nodejs/node/commit/50c0baa8d7)] - **build**: rename 'doc' directory to 'docs' for upload (Rod Vagg) [#2623](https://github.com/nodejs/node/pull/2623) <add>* [[`0a0577cf5f`](https://github.com/nodejs/node/commit/0a0577cf5f)] - **build**: fix bad cherry-pick for vcbuild.bat build-release (Rod Vagg) [#2625](https://github.com/nodejs/node/pull/2625) <add>* [[`34de90194b`](https://github.com/nodejs/node/commit/34de90194b)] - **build**: only define NODE_V8_OPTIONS if not empty (Evan Lucas) [#2532](https://github.com/nodejs/node/pull/2532) <add>* [[`944174b189`](https://github.com/nodejs/node/commit/944174b189)] - **build**: make ci test addons in test/addons (Ben Noordhuis) [#2428](https://github.com/nodejs/node/pull/2428) <add>* [[`e955f9a1b0`](https://github.com/nodejs/node/commit/e955f9a1b0)] - **crypto**: Use OPENSSL_cleanse 
to shred the data. (Сковорода Никита Андреевич) [#2575](https://github.com/nodejs/node/pull/2575) <add>* [[`395d736b9d`](https://github.com/nodejs/node/commit/395d736b9d)] - **debugger**: use strict equality comparison (Minwoo Jung) [#2558](https://github.com/nodejs/node/pull/2558) <add>* [[`1d0e5210a8`](https://github.com/nodejs/node/commit/1d0e5210a8)] - **deps**: upgrade libuv to 1.7.3 (Saúl Ibarra Corretgé) [#2310](https://github.com/nodejs/node/pull/2310) <add>* [[`34ef53364f`](https://github.com/nodejs/node/commit/34ef53364f)] - **deps**: update V8 to 4.4.63.30 (Michaël Zasso) [#2482](https://github.com/nodejs/node/pull/2482) <add>* [[`23579a5f4a`](https://github.com/nodejs/node/commit/23579a5f4a)] - **doc**: add TSC meeting minutes 2015-08-12 (Rod Vagg) [#2438](https://github.com/nodejs/node/pull/2438) <add>* [[`0cc59299a4`](https://github.com/nodejs/node/commit/0cc59299a4)] - **doc**: add TSC meeting minutes 2015-08-26 (Rod Vagg) [#2591](https://github.com/nodejs/node/pull/2591) <add>* [[`6efa96e33a`](https://github.com/nodejs/node/commit/6efa96e33a)] - **doc**: merge CHANGELOG.md with joyent/node ChangeLog (P.S.V.R) [#2536](https://github.com/nodejs/node/pull/2536) <add>* [[`f75d54607b`](https://github.com/nodejs/node/commit/f75d54607b)] - **doc**: clarify cluster behaviour with no workers (Jeremiah Senkpiel) [#2606](https://github.com/nodejs/node/pull/2606) <add>* [[`8936302121`](https://github.com/nodejs/node/commit/8936302121)] - **doc**: minor clarification in buffer.markdown (Сковорода Никита Андреевич) [#2574](https://github.com/nodejs/node/pull/2574) <add>* [[`0db0e53753`](https://github.com/nodejs/node/commit/0db0e53753)] - **doc**: add @jasnell and @sam-github to release team (Rod Vagg) [#2455](https://github.com/nodejs/node/pull/2455) <add>* [[`c16e100593`](https://github.com/nodejs/node/commit/c16e100593)] - **doc**: reorg release team to separate section (Rod Vagg) [#2455](https://github.com/nodejs/node/pull/2455) <add>* 
[[`e3e00143fd`](https://github.com/nodejs/node/commit/e3e00143fd)] - **doc**: fix bad merge on modules.markdown (James M Snell) <add>* [[`2f62455880`](https://github.com/nodejs/node/commit/2f62455880)] - **doc**: minor additional corrections and improvements (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`3bd08aac4b`](https://github.com/nodejs/node/commit/3bd08aac4b)] - **doc**: minor grammatical update in crypto.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`f707189370`](https://github.com/nodejs/node/commit/f707189370)] - **doc**: minor grammatical update (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`6c98cf0266`](https://github.com/nodejs/node/commit/6c98cf0266)] - **doc**: remove repeated statement in globals.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`48e6ccf8c2`](https://github.com/nodejs/node/commit/48e6ccf8c2)] - **doc**: remove 'dudes' from documentation (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`b5d68f8076`](https://github.com/nodejs/node/commit/b5d68f8076)] - **doc**: update tense in child_process.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`242e3fe3ba`](https://github.com/nodejs/node/commit/242e3fe3ba)] - **doc**: fixed worker.id type (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`ea9ee15c21`](https://github.com/nodejs/node/commit/ea9ee15c21)] - **doc**: port is optional for socket.bind() (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`0ff6657a50`](https://github.com/nodejs/node/commit/0ff6657a50)] - **doc**: fix minor types and grammar in fs docs (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`94d83c04f2`](https://github.com/nodejs/node/commit/94d83c04f2)] - **doc**: update parameter name in net.markdown (James M Snell) 
[#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`04111ce40f`](https://github.com/nodejs/node/commit/04111ce40f)] - **doc**: small typo in domain.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`c9fdd1bbbf`](https://github.com/nodejs/node/commit/c9fdd1bbbf)] - **doc**: fixed typo in net.markdown (missing comma) (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`27c07b3f8e`](https://github.com/nodejs/node/commit/27c07b3f8e)] - **doc**: update description of fs.exists in fs.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`52018e73d9`](https://github.com/nodejs/node/commit/52018e73d9)] - **doc**: clarification on the 'close' event (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`f6d3b87a25`](https://github.com/nodejs/node/commit/f6d3b87a25)] - **doc**: improve working in stream.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`b5da89431a`](https://github.com/nodejs/node/commit/b5da89431a)] - **doc**: update path.extname documentation (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`1d4ea609db`](https://github.com/nodejs/node/commit/1d4ea609db)] - **doc**: small clarifications to modules.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`c888985591`](https://github.com/nodejs/node/commit/c888985591)] - **doc**: code style cleanups in repl.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`105b493595`](https://github.com/nodejs/node/commit/105b493595)] - **doc**: correct grammar in cluster.markdown (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* [[`51b86ccac7`](https://github.com/nodejs/node/commit/51b86ccac7)] - **doc**: Clarify the module.parent is set once (James M Snell) [#2378](https://github.com/nodejs/node/pull/2378) <add>* 
[[`d2ffecba2d`](https://github.com/nodejs/node/commit/d2ffecba2d)] - **doc**: add internal modules notice (Jeremiah Senkpiel) [#2523](https://github.com/nodejs/node/pull/2523) <add>* [[`b36debd5cb`](https://github.com/nodejs/node/commit/b36debd5cb)] - **env**: introduce `KickNextTick` (Fedor Indutny) [#2355](https://github.com/nodejs/node/pull/2355) <add>* [[`1bc446863f`](https://github.com/nodejs/node/commit/1bc446863f)] - **http_parser**: consume StreamBase instance (Fedor Indutny) [#2355](https://github.com/nodejs/node/pull/2355) <add>* [[`ce04b735cc`](https://github.com/nodejs/node/commit/ce04b735cc)] - **src**: only memcmp if length > 0 in Buffer::Compare (Karl Skomski) [#2544](https://github.com/nodejs/node/pull/2544) <add>* [[`31823e37c7`](https://github.com/nodejs/node/commit/31823e37c7)] - **src**: DRY getsockname/getpeername code (Ben Noordhuis) [#956](https://github.com/nodejs/node/pull/956) <add>* [[`13fd96dda3`](https://github.com/nodejs/node/commit/13fd96dda3)] - **src**: missing Exception::Error in node_http_parser (Jeremiah Senkpiel) [#2550](https://github.com/nodejs/node/pull/2550) <add>* [[`42e075ae02`](https://github.com/nodejs/node/commit/42e075ae02)] - **test**: improve performance of stringbytes test (Trevor Norris) [#2544](https://github.com/nodejs/node/pull/2544) <add>* [[`fc726399fd`](https://github.com/nodejs/node/commit/fc726399fd)] - **test**: unmark test-process-argv-0.js as flaky (Rich Trott) [#2613](https://github.com/nodejs/node/pull/2613) <add>* [[`7727ba1394`](https://github.com/nodejs/node/commit/7727ba1394)] - **test**: lint and refactor to avoid autocrlf issue (Roman Reiss) [#2494](https://github.com/nodejs/node/pull/2494) <add>* [[`c56aa829f0`](https://github.com/nodejs/node/commit/c56aa829f0)] - **test**: use tmpDir instead of fixturesDir (Sakthipriyan Vairamani) [#2583](https://github.com/nodejs/node/pull/2583) <add>* [[`5e65181ea4`](https://github.com/nodejs/node/commit/5e65181ea4)] - **test**: handling failure cases 
properly (Sakthipriyan Vairamani) [#2206](https://github.com/nodejs/node/pull/2206) <add>* [[`c48b95e847`](https://github.com/nodejs/node/commit/c48b95e847)] - **test**: initial list of flaky tests (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`94e88498ba`](https://github.com/nodejs/node/commit/94e88498ba)] - **test**: pass args to test-ci via env variable (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`09987c7a1c`](https://github.com/nodejs/node/commit/09987c7a1c)] - **test**: support flaky tests in test-ci (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`08b83c8b45`](https://github.com/nodejs/node/commit/08b83c8b45)] - **test**: add test configuration templates (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`8f8ab6fa57`](https://github.com/nodejs/node/commit/8f8ab6fa57)] - **test**: runner should return 0 on flaky tests (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`0cfd3be9c6`](https://github.com/nodejs/node/commit/0cfd3be9c6)] - **test**: runner support for flaky tests (Alexis Campailla) [#2424](https://github.com/nodejs/node/pull/2424) <add>* [[`3492d2d4c6`](https://github.com/nodejs/node/commit/3492d2d4c6)] - **test**: make test-process-argv-0 robust (Rich Trott) [#2541](https://github.com/nodejs/node/pull/2541) <add>* [[`a96cc31710`](https://github.com/nodejs/node/commit/a96cc31710)] - **test**: speed up test-child-process-spawnsync.js (Rich Trott) [#2542](https://github.com/nodejs/node/pull/2542) <add>* [[`856baf4c67`](https://github.com/nodejs/node/commit/856baf4c67)] - **test**: make spawnSync() test robust (Rich Trott) [#2535](https://github.com/nodejs/node/pull/2535) <add>* [[`3aa6bbb648`](https://github.com/nodejs/node/commit/3aa6bbb648)] - **tools**: update release.sh to work with new website (Rod Vagg) [#2623](https://github.com/nodejs/node/pull/2623) <add>* 
[[`f2f0fe45ff`](https://github.com/nodejs/node/commit/f2f0fe45ff)] - **tools**: make add-on scraper print filenames (Ben Noordhuis) [#2428](https://github.com/nodejs/node/pull/2428) <add>* [[`bb24c4a418`](https://github.com/nodejs/node/commit/bb24c4a418)] - **win,msi**: correct installation path registry keys (João Reis) [#2565](https://github.com/nodejs/node/pull/2565) <add>* [[`752977b888`](https://github.com/nodejs/node/commit/752977b888)] - **win,msi**: change InstallScope to perMachine (João Reis) [#2565](https://github.com/nodejs/node/pull/2565) <add> <ide> ## 2015-08-25, Version 3.2.0, @rvagg <ide> <ide> ### Notable changes
1
Python
Python
return node on create node
54e2243d5cd8a5ffcbc6ef62cef1fb35e2478ddb
<ide><path>libcloud/compute/drivers/solusvm.py <ide> def create_node(self, vttype='openvz', user_id=31, nodegroup_id=1, <ide> <ide> data = json.dumps({"virtual_machine": server_params}) <ide> <add> # upon successfull machine creation, <add> # response is 201 with empty body <add> # attempting to return the real node <add> existing_nodes = self.list_nodes() <ide> try: <ide> response = self.connection.request( <ide> "/api/virtual_machines", <ide> data=data, <ide> headers={ <ide> "Content-type": "application/json"}, <ide> method="POST") <del> # response is 201 with empty body <del> return True <del> <ide> except Exception as exc: <ide> raise Exception("Failed to create node: %s" % exc) <ide> <add> new_node = None <add> for i in range(0, 10): <add> nodes = self.list_nodes() <add> for node in nodes: <add> if node.id not in [n.id for n in existing_nodes] and \ <add> node.name == hostname: <add> new_node = node <add> return new_node <add> time.sleep(10) <add> <add> <ide> def ex_start_node(self, node): <ide> """ <ide> Start a node
1
PHP
PHP
move aliases to compatbility shims
d553414eb7b09d87f0841d1b7b950f1b319976de
<ide><path>config/bootstrap.php <ide> <ide> define('TIME_START', microtime(true)); <ide> <del>// @deprecated Backward compatibility with 2.x series <del>if (PHP_VERSION_ID < 70000) { <del> class_alias('Cake\Utility\Text', 'Cake\Utility\String'); <del>} <del> <del>// @deprecated Backward compatibility with 2.x, 3.0.x <del>class_alias('Cake\Mailer\AbstractTransport', 'Cake\Network\Email\AbstractTransport'); <del>class_alias('Cake\Mailer\Transport\DebugTransport', 'Cake\Network\Email\DebugTransport'); <del>class_alias('Cake\Mailer\Email', 'Cake\Network\Email\Email'); <del>class_alias('Cake\Mailer\Transport\MailTransport', 'Cake\Network\Email\MailTransport'); <del>class_alias('Cake\Mailer\Transport\SmtpTransport', 'Cake\Network\Email\SmtpTransport'); <del> <del> <ide> require CAKE . 'basics.php'; <ide> <ide> // Sets the initial router state so future reloads work. <ide><path>src/Network/Email/AbstractTransport.php <add><?php <add>// @deprecated Backward compatibility with 2.x, 3.0.x <add>class_alias('Cake\Mailer\AbstractTransport', 'Cake\Network\Email\AbstractTransport'); <ide><path>src/Network/Email/DebugTransport.php <add><?php <add>// @deprecated Backward compatibility with 2.x, 3.0.x <add>class_alias('Cake\Mailer\Transport\DebugTransport', 'Cake\Network\Email\DebugTransport'); <ide><path>src/Network/Email/Email.php <add><?php <add>// @deprecated Backward compatibility with 2.x, 3.0.x <add>class_alias('Cake\Mailer\Email', 'Cake\Network\Email\Email'); <ide><path>src/Network/Email/MailTransport.php <add><?php <add>// @deprecated Backward compatibility with 2.x, 3.0.x <add>class_alias('Cake\Mailer\Transport\MailTransport', 'Cake\Network\Email\MailTransport'); <ide><path>src/Network/Email/SmtpTransport.php <add><?php <add>// @deprecated Backward compatibility with 2.x, 3.0.x <add>class_alias('Cake\Mailer\Transport\SmtpTransport', 'Cake\Network\Email\SmtpTransport'); <ide><path>src/Utility/String.php <add><?php <add>// @deprecated Backward compatibility with 2.x series 
<add>if (PHP_VERSION_ID < 70000) { <add> class_alias('Cake\Utility\Text', 'Cake\Utility\String'); <add>}
7
PHP
PHP
fix null return
7848377efcc2e09cfab3239c7a93f85beeece11f
<ide><path>src/Illuminate/Http/Client/PendingRequest.php <ide> public function buildStubHandler() <ide> ->first(); <ide> <ide> if (is_null($response)) { <del> return Factory::response(); <add> return $handler($request, $options); <ide> } elseif (is_array($response)) { <ide> return Factory::response($response); <ide> }
1
Ruby
Ruby
use present rather than any
ea7b5ff99e44aac0fc643e02dc4d046ab99ecdc7
<ide><path>activerecord/lib/active_record/relation.rb <ide> def merge(r) <ide> select(r.send(:select_clauses).join(', ')). <ide> eager_load(r.eager_load_associations). <ide> preload(r.associations_to_preload). <del> from(r.send(:sources).any? ? r.send(:from_clauses) : nil) <add> from(r.send(:sources).present? ? r.send(:from_clauses) : nil) <ide> end <ide> <ide> alias :& :merge <ide> def to_a <ide> :conditions => where_clause, <ide> :limit => @relation.taken, <ide> :offset => @relation.skipped, <del> :from => (@relation.send(:from_clauses) if @relation.send(:sources).any?) <add> :from => (@relation.send(:from_clauses) if @relation.send(:sources).present?) <ide> }, <ide> ActiveRecord::Associations::ClassMethods::JoinDependency.new(@klass, @eager_load_associations, nil)) <ide> end
1
PHP
PHP
apply fixes from styleci
92aff02c1f023f333a4ad5e91955e67c8efbe32a
<ide><path>src/Illuminate/Http/Resources/MergeValue.php <ide> public function __construct($data) <ide> { <ide> $this->data = $data instanceof Collection ? $data->all() : $data; <ide> } <del> <ide> }
1
Text
Text
fix the description of 'close' event
20f8a222598b3e9659bd61e2cbb732d1e1afa43b
<ide><path>doc/api/fs.md <ide> added: v0.1.93 <ide> added: v0.1.93 <ide> --> <ide> <del>Emitted when the `ReadStream`'s underlying file descriptor has been closed <del>using the `fs.close()` method. <add>Emitted when the `ReadStream`'s underlying file descriptor has been closed. <ide> <ide> ### Event: 'open' <ide> <!-- YAML <ide> added: v0.1.93 <ide> added: v0.1.93 <ide> --> <ide> <del>Emitted when the `WriteStream`'s underlying file descriptor has been closed <del>using the `fs.close()` method. <add>Emitted when the `WriteStream`'s underlying file descriptor has been closed. <ide> <ide> ### Event: 'open' <ide> <!-- YAML
1
Python
Python
remove outdated sqlcheckoperator docstring
86242247155a582f771a380b4c2d94bce80d6929
<ide><path>airflow/operators/sql.py <ide> class SQLCheckOperator(BaseOperator): <ide> publishing dubious data, or on the side and receive email alerts <ide> without stopping the progress of the DAG. <ide> <del> Note that this is an abstract class and get_db_hook <del> needs to be defined. Whereas a get_db_hook is hook that gets a <del> single record from an external source. <del> <ide> :param sql: the sql to be executed. (templated) <ide> :type sql: str <ide> """
1
Text
Text
add info about co-maintainers in readme
a43829c2fd2887ef1fff6b4a021992659d8a3dbe
<ide><path>README.md <ide> There are a number of small backwards incompatible changes with version 2.0.0. [ <ide> <ide> ## [Contributing](https://github.com/moment/moment/blob/develop/CONTRIBUTING.md) <ide> <add>We're looking for co-maintainers! If you want to become a master of time please <add>write to [ichernev](https://github.com/ichernev). <add> <ide> ## License <ide> <ide> Moment.js is freely distributable under the terms of the [MIT license](https://github.com/moment/moment/blob/develop/LICENSE).
1
Javascript
Javascript
fix minor grammatical errors
e6a2527cdfa6a7ef22ecf6a456e7aa48ddec23ce
<ide><path>src/ng/rootElement.js <ide> * @description <ide> * The root element of Angular application. This is either the element where {@link <ide> * ng.directive:ngApp ngApp} was declared or the element passed into <del> * {@link angular.bootstrap}. The element represent the root element of application. It is also the <del> * location where the applications {@link auto.$injector $injector} service gets <del> * published, it can be retrieved using `$rootElement.injector()`. <add> * {@link angular.bootstrap}. The element represents the root element of application. It is also the <add> * location where the application's {@link auto.$injector $injector} service gets <add> * published, and can be retrieved using `$rootElement.injector()`. <ide> */ <ide> <ide>
1
Javascript
Javascript
add example on components
f7f347329efbff66a66477282f21fe6fef651498
<ide><path>Libraries/Components/ActivityIndicator/ActivityIndicator.js <ide> type DefaultProps = { <ide> <ide> /** <ide> * Displays a circular loading indicator. <add> * <add> * ### Example <add> * <add> * ```ReactNativeWebPlayer <add> * import React, { Component } from 'react' <add> * import { <add> * ActivityIndicator, <add> * AppRegistry, <add> * StyleSheet, <add> * Text, <add> * View, <add> * } from 'react-native' <add> * <add> * class App extends Component { <add> * render() { <add> * return ( <add> * <View style={[styles.container, styles.horizontal]}> <add> * <ActivityIndicator size="large" color="#0000ff" /> <add> * <ActivityIndicator size="small" color="#00ff00" /> <add> * <ActivityIndicator size="large" color="#0000ff" /> <add> * <ActivityIndicator size="small" color="#00ff00" /> <add> * </View> <add> * ) <add> * } <add> * } <add> * <add> * const styles = StyleSheet.create({ <add> * container: { <add> * flex: 1, <add> * justifyContent: 'center' <add> * }, <add> * horizontal: { <add> * flexDirection: 'row', <add> * justifyContent: 'space-around', <add> * padding: 10 <add> * } <add> * }) <add> * <add> * AppRegistry.registerComponent('App', () => App) <add> * ``` <ide> */ <ide> /* $FlowFixMe(>=0.53.0 site=react_native_fb,react_native_oss) This comment <ide> * suppresses an error when upgrading Flow's support for React. 
To see the <ide><path>Libraries/Components/Touchable/TouchableHighlight.js <ide> const PRESS_RETENTION_OFFSET = {top: 20, left: 20, right: 20, bottom: 30}; <ide> * ); <ide> * }, <ide> * ``` <add> * <add> * <add> * ### Example <add> * <add> * ```ReactNativeWebPlayer <add> * import React, { Component } from 'react' <add> * import { <add> * AppRegistry, <add> * StyleSheet, <add> * TouchableHighlight, <add> * Text, <add> * View, <add> * } from 'react-native' <add> * <add> * class App extends Component { <add> * constructor(props) { <add> * super(props) <add> * this.state = { count: 0 } <add> * } <add> * <add> * onPress = () => { <add> * this.setState({ <add> * count: this.state.count+1 <add> * }) <add> * } <add> * <add> * render() { <add> * return ( <add> * <View style={styles.container}> <add> * <TouchableHighlight <add> * style={styles.button} <add> * onPress={this.onPress} <add> * > <add> * <Text> Touch Here </Text> <add> * </TouchableHighlight> <add> * <View style={[styles.countContainer]}> <add> * <Text style={[styles.countText]}> <add> * { this.state.count !== 0 ? 
this.state.count: null} <add> * </Text> <add> * </View> <add> * </View> <add> * ) <add> * } <add> * } <add> * <add> * const styles = StyleSheet.create({ <add> * container: { <add> * flex: 1, <add> * justifyContent: 'center', <add> * paddingHorizontal: 10 <add> * }, <add> * button: { <add> * alignItems: 'center', <add> * backgroundColor: '#DDDDDD', <add> * padding: 10 <add> * }, <add> * countContainer: { <add> * alignItems: 'center', <add> * padding: 10 <add> * }, <add> * countText: { <add> * color: '#FF00FF' <add> * } <add> * }) <add> * <add> * AppRegistry.registerComponent('App', () => App) <add> * ``` <add> * <ide> */ <ide> <ide> var TouchableHighlight = createReactClass({ <ide><path>Libraries/Components/Touchable/TouchableOpacity.js <ide> var PRESS_RETENTION_OFFSET = {top: 20, left: 20, right: 20, bottom: 30}; <ide> * } from 'react-native' <ide> * <ide> * class App extends Component { <del> * constructor(props) { <del> * super(props) <del> * this.state = { count: 0 } <del> * } <add> * constructor(props) { <add> * super(props) <add> * this.state = { count: 0 } <add> * } <ide> * <del> * onPress = () => { <del> * this.setState({ <del> * count: this.state.count+1 <del> * }) <del> * } <add> * onPress = () => { <add> * this.setState({ <add> * count: this.state.count+1 <add> * }) <add> * } <ide> * <ide> * render() { <del> * return ( <del> * <View style={styles.container}> <del> * <TouchableOpacity <add> * return ( <add> * <View style={styles.container}> <add> * <TouchableOpacity <ide> * style={styles.button} <ide> * onPress={this.onPress} <ide> * >
3
Text
Text
fix small typo
0bc1b2fece72930529976d03aa1218eeaab2e3e3
<ide><path>guide/english/certifications/javascript-algorithms-and-data-structures/basic-data-structures/check-if-an-object-has-a-property/index.md <ide> title: Check if an Object has a Property <ide> ## Check if an Object has a Property <ide> <ide> Method: <del>- The simplest way to complete this challenge is to create an `ìf-statement` to check wether or not the object contains all useres, then to return a true or false statement. The first solution does just this. <add>- The simplest way to complete this challenge is to create an `ìf-statement` to check wether or not the object contains all users, then to return a true or false statement. The first solution does just this. <ide> - The second solution works in exactly the same way, only it uses 1 line of code - `Conditional(ternary)-Operator` - within the function. <ide> <ide> [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator) provides a more in depth analysis of the ternary operator.
1
PHP
PHP
remove exit() from consoleerrorhandler
88637b5df5e58ba1a94c6d484de1f8b31beba4a7
<ide><path>App/Console/cake.php <ide> */ <ide> include dirname(__DIR__) . '/Config/bootstrap.php'; <ide> <del>return Cake\Console\ShellDispatcher::run($argv); <add>exit(Cake\Console\ShellDispatcher::run($argv)); <ide><path>lib/Cake/Console/ConsoleErrorHandler.php <ide> <?php <ide> /** <del> * ErrorHandler for Console Shells <del> * <del> * PHP 5 <del> * <ide> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org) <ide> * Copyright 2005-2012, Cake Software Foundation, Inc. (http://cakefoundation.org) <ide> * <ide> * @license MIT License (http://www.opensource.org/licenses/mit-license.php) <ide> */ <ide> namespace Cake\Console; <add> <ide> use Cake\Core\Configure; <ide> use Cake\Error\ErrorHandler; <ide> <ide> public static function getStderr() { <ide> * Handle a exception in the console environment. Prints a message to stderr. <ide> * <ide> * @param Exception $exception The exception to handle <del> * @return void <add> * @return integer Exit code from exception caught. <ide> */ <ide> public static function handleException(\Exception $exception) { <ide> $stderr = static::getStderr(); <ide> $stderr->write(__d('cake_console', "<error>Error:</error> %s\n%s", <ide> $exception->getMessage(), <ide> $exception->getTraceAsString() <ide> )); <del> // TODO this makes this method impossible to test. <del> exit($exception->getCode() ? $exception->getCode() : 1); <add> return $exception->getCode() ?: 1; <ide> } <ide> <ide> /** <ide><path>lib/Cake/Console/ShellDispatcher.php <ide> public function __construct($args = array(), $bootstrap = true) { <ide> * Run the dispatcher <ide> * <ide> * @param array $argv The argv from PHP <del> * @return void <add> * @return integer The exit code of the shell process. <ide> */ <ide> public static function run($argv) { <ide> $dispatcher = new ShellDispatcher($argv); <del> $dispatcher->_stop($dispatcher->dispatch() === false ? 
1 : 0); <add> return $dispatcher->dispatch(); <ide> } <ide> <ide> /** <ide> protected function _initConstants() { <ide> ini_set('implicit_flush', true); <ide> ini_set('max_execution_time', 0); <ide> } <del> define('CAKEPHP_SHELL', true); <add> if (!defined('CAKEPHP_SHELL')) { <add> define('CAKEPHP_SHELL', true); <add> } <ide> } <ide> <ide> /** <ide> public function setErrorHandlers() { <ide> $errorHandler = new ConsoleErrorHandler(); <ide> if (empty($error['consoleHandler'])) { <ide> $error['consoleHandler'] = array($errorHandler, 'handleError'); <del> Configure::write('error', $error); <add> Configure::write('Error', $error); <ide> } <ide> if (empty($exception['consoleHandler'])) { <ide> $exception['consoleHandler'] = array($errorHandler, 'handleException'); <del> Configure::write('exception', $exception); <add> Configure::write('Exception', $exception); <ide> } <del> set_exception_handler($exception['consoleHandler']); <ide> set_error_handler($error['consoleHandler'], Configure::read('Error.level')); <ide> } <ide> <ide> /** <ide> * Dispatches a CLI request <ide> * <add> * @return integer The cli command exit code. 0 is success. <add> */ <add> public function dispatch() { <add> try { <add> $exit = 0; <add> $this->_dispatch(); <add> } catch (\Exception $e) { <add> $handler = Configure::read('Exception.consoleHandler'); <add> $exit = $handler($e); <add> } <add> return $exit; <add> } <add> <add>/** <add> * Dispatch a request. <add> * <ide> * @return boolean <ide> * @throws MissingShellMethodException <ide> */ <del> public function dispatch() { <add> protected function _dispatch() { <ide> $shell = $this->shiftArgs(); <ide> <ide> if (!$shell) { <ide><path>lib/Cake/Console/cake.php <ide> require $root . 
'/App/Config/bootstrap.php'; <ide> } <ide> unset($root, $loaded, $appIndex, $dir); <del>return Cake\Console\ShellDispatcher::run($argv); <add>exit(Cake\Console\ShellDispatcher::run($argv)); <ide><path>lib/Cake/Test/TestCase/Console/ConsoleErrorHandlerTest.php <ide> <?php <ide> /** <del> * ConsoleErrorHandler Test case <del> * <del> * PHP versions 5 <del> * <ide> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org) <ide> * Copyright 2005-2012, Cake Software Foundation, Inc. (http://cakefoundation.org) <ide> * <ide> * <ide> * @copyright Copyright 2005-2012, Cake Software Foundation, Inc. (http://cakefoundation.org) <ide> * @link http://cakephp.org CakePHP(tm) Project <del> * @package Cake.Test.Case.Console <ide> * @since CakePHP(tm) v 2.0 <ide> * @license MIT License (http://www.opensource.org/licenses/mit-license.php) <ide> */ <ide> namespace Cake\Test\TestCase\Console; <add> <ide> use Cake\Console\ConsoleErrorHandler; <ide> use Cake\Error; <ide> use Cake\TestSuite\TestCase; <ide> public function testCakeErrors() { <ide> ConsoleErrorHandler::$stderr->expects($this->once())->method('write') <ide> ->with($this->stringContains('Missing action')); <ide> <del> $this->Error->expects($this->once()) <del> ->method('_stop') <del> ->with(404); <del> <del> $this->Error->handleException($exception); <add> $result = $this->Error->handleException($exception); <add> $this->assertEquals(404, $result); <ide> } <ide> <ide> /** <ide> public function testNonCakeExceptions() { <ide> ConsoleErrorHandler::$stderr->expects($this->once())->method('write') <ide> ->with($this->stringContains('Too many parameters.')); <ide> <del> $this->Error->expects($this->once()) <del> ->method('_stop') <del> ->with(1); <del> <del> $this->Error->handleException($exception); <add> $result = $this->Error->handleException($exception); <add> $this->assertEquals(1, $result); <ide> } <ide> <ide> /** <ide> public function testError404Exception() { <ide> 
ConsoleErrorHandler::$stderr->expects($this->once())->method('write') <ide> ->with($this->stringContains('dont use me in cli.')); <ide> <del> $this->Error->expects($this->once()) <del> ->method('_stop') <del> ->with(404); <del> <del> $this->Error->handleException($exception); <add> $result = $this->Error->handleException($exception); <add> $this->assertEquals(404, $result); <ide> } <ide> <ide> /** <ide> public function testError500Exception() { <ide> ConsoleErrorHandler::$stderr->expects($this->once())->method('write') <ide> ->with($this->stringContains('dont use me in cli.')); <ide> <del> $this->Error->expects($this->once()) <del> ->method('_stop') <del> ->with(500); <del> <del> $this->Error->handleException($exception); <add> $result = $this->Error->handleException($exception); <add> $this->assertEquals(500, $result); <ide> } <ide> <ide> } <ide><path>lib/Cake/Test/TestCase/Console/ShellDispatcherTest.php <ide> <?php <ide> /** <del> * ShellDispatcherTest file <del> * <del> * PHP 5 <del> * <ide> * CakePHP(tm) Tests <http://book.cakephp.org/2.0/en/development/testing.html> <ide> * Copyright 2005-2012, Cake Software Foundation, Inc. <ide> * <ide> * <ide> * @copyright Copyright 2005-2012, Cake Software Foundation, Inc. 
<ide> * @link http://book.cakephp.org/2.0/en/development/testing.html CakePHP(tm) Tests <del> * @package Cake.Test.Case.Console <ide> * @since CakePHP(tm) v 1.2.0.5432 <ide> * @license MIT License (http://www.opensource.org/licenses/mit-license.php) <ide> */ <ide> namespace Cake\Test\TestCase\Console; <add> <ide> use Cake\Console\ShellDispatcher; <ide> use Cake\Core\App; <ide> use Cake\Core\Configure; <ide> public function testDispatchShellWithMain() { <ide> <ide> $Dispatcher->args = array('mock_with_main'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> $this->assertEquals(array(), $Dispatcher->args); <ide> } <ide> <ide> public function testDispatchShellWithoutMain() { <ide> <ide> $Dispatcher->args = array('mock_without_main', 'initdb'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> } <ide> <ide> /** <ide> public function testDispatchNotAShellWithMain() { <ide> <ide> $Dispatcher->args = array('mock_with_main_not_a'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> $this->assertEquals(array(), $Dispatcher->args); <ide> <ide> $Shell = new \MockWithMainNotAShell($Dispatcher); <ide> public function testDispatchNotAShellWithMain() { <ide> <ide> $Dispatcher->args = array('mock_with_main_not_a', 'initdb'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> } <ide> <ide> /** <ide> public function testDispatchNotAShellWithoutMain() { <ide> <ide> $Dispatcher->args = array('mock_without_main_not_a'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> $this->assertEquals(array(), $Dispatcher->args); <ide> <ide> $Shell = new \MockWithoutMainNotAShell($Dispatcher); <ide> public function testDispatchNotAShellWithoutMain() { 
<ide> <ide> $Dispatcher->args = array('mock_without_main_not_a', 'initdb'); <ide> $result = $Dispatcher->dispatch(); <del> $this->assertTrue($result); <add> $this->assertEquals(0, $result); <ide> } <ide> <ide> /**
6
Text
Text
add v3.13.1 to changelog.md
af661414bf9ee5e8ac5565e15a280e330ce69510
<ide><path>CHANGELOG.md <ide> - [#18381](https://github.com/emberjs/ember.js/pull/18381) Drop Node 6 and 11 support. <ide> - [#18410](https://github.com/emberjs/ember.js/pull/18410) Use ember-cli-htmlbars for inline precompilation if possible. <ide> <add>### v3.13.1 (September 23, 2019) <add> <add>- [#18273](https://github.com/emberjs/ember.js/pull/18273) [BUGFIX] Fix issues with SSR rehydration of <title>. (#18273) <add>- [#18418](https://github.com/emberjs/ember.js/pull/18418) / [#18418](https://github.com/emberjs/ember.js/pull/18418) [BUGFIX] Require Octane features when using Octane preview (#18418) <add> <ide> ### v3.13.0 (September 19, 2019) <ide> <ide> - [#16366](https://github.com/emberjs/ember.js/pull/16366) / [#16903](https://github.com/emberjs/ember.js/pull/16903) / [#17572](https://github.com/emberjs/ember.js/pull/17572) / [#17682](https://github.com/emberjs/ember.js/pull/17682) / [#17765](https://github.com/emberjs/ember.js/pull/17765) / [#17751](https://github.com/emberjs/ember.js/pull/17751) / [#17835](https://github.com/emberjs/ember.js/pull/17835) / [#18059](https://github.com/emberjs/ember.js/pull/18059) / [#17951](https://github.com/emberjs/ember.js/pull/17951) / [#18069](https://github.com/emberjs/ember.js/pull/18069) / [#18074](https://github.com/emberjs/ember.js/pull/18074) / [#18073](https://github.com/emberjs/ember.js/pull/18073) / [#18091](https://github.com/emberjs/ember.js/pull/18091) / [#18186](https://github.com/emberjs/ember.js/pull/18186) / [#18223](https://github.com/emberjs/ember.js/pull/18223) / [#18358](https://github.com/emberjs/ember.js/pull/18358) / [#18266](https://github.com/emberjs/ember.js/pull/18266) [FEATURE] Implement the [Tracked Properties](https://github.com/emberjs/rfcs/blob/master/text/0410-tracked-properties.md) and [Tracked Property Updates](https://github.com/emberjs/rfcs/blob/master/text/0478-tracked-properties-updates.md) RFCs.
1
Ruby
Ruby
fix typo in ar callbacks
55df1df937f096bbdae29f8af965f204f08943b8
<ide><path>activerecord/lib/active_record/callbacks.rb <ide> module ActiveRecord <ide> # <ide> # Callbacks are hooks into the life cycle of an Active Record object that allow you to trigger logic <ide> # before or after an alteration of the object state. This can be used to make sure that associated and <del> # dependent objects are deleted when +destroy+ is called (by overwriting +before_destroy+) or to massage attributes <add> # dependent objects are deleted when +destroy+ is called (by overwriting +before_destroy+) or to message attributes <ide> # before they're validated (by overwriting +before_validation+). As an example of the callbacks initiated, consider <ide> # the <tt>Base#save</tt> call for a new record: <ide> #
1
Python
Python
clean the trainer state
29baa8fabe15393ec4451beceee6d025881ec992
<ide><path>src/transformers/__init__.py <ide> from .tokenization_xlnet import SPIECE_UNDERLINE, XLNetTokenizer <ide> <ide> # Trainer <del>from .trainer_utils import EvalPrediction, set_seed <add>from .trainer_utils import EvalPrediction, TrainerState, set_seed <ide> from .training_args import TrainingArguments <ide> from .training_args_tf import TFTrainingArguments <ide> from .utils import logging <ide><path>src/transformers/trainer.py <ide> import inspect <del>import json <ide> import math <ide> import os <ide> import re <ide> def __init__( <ide> "You should subclass `Trainer` and override the `create_optimizer_and_scheduler` method." <ide> ) <ide> self.tb_writer = tb_writer <del> self.log_history = [] <ide> if "prediction_loss_only" in kwargs: <ide> warnings.warn( <del> "Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead.", <add> "Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a " <add> + "future version. Use `args.prediction_loss_only` instead. Setting " <add> + f"`args.prediction_loss_only={kwargs['prediction_loss_only']}", <ide> FutureWarning, <ide> ) <ide> self.args.prediction_loss_only = kwargs.pop("prediction_loss_only") <ide> def __init__( <ide> if isinstance(eval_dataset, datasets.Dataset): <ide> self._remove_unused_columns(self.eval_dataset, description="evaluation") <ide> <del> self.global_step = None <del> self.epoch = None <del> self.total_flos = None <add> self.state = TrainerState() <add> # Internal variable for total_flos used to count as tensors (for distributed + TPU), will be sent in the <add> # state at each call to self.log. 
<add> self._total_flos = None <ide> if self.args.fp16 and _use_native_amp: <ide> self.scaler = torch.cuda.amp.GradScaler() <ide> self.hp_search_backend = None <ide> self.use_tune_checkpoints = False <del> if self.args.label_names is None: <del> self.args.label_names = ( <del> ["start_positions, end_positions"] <del> if type(self.model) in MODEL_FOR_QUESTION_ANSWERING_MAPPING.values() <del> else ["labels"] <del> ) <add> default_label_names = ( <add> ["start_positions, end_positions"] <add> if type(self.model) in MODEL_FOR_QUESTION_ANSWERING_MAPPING.values() <add> else ["labels"] <add> ) <add> self.label_names = default_label_names if self.args.label_names is None else self.args.label_names <ide> <ide> def _remove_unused_columns(self, dataset: "datasets.Dataset", description: Optional[str] = None): <ide> if not self.args.remove_unused_columns: <ide> def _report_to_hp_search( <ide> if trial.should_prune(): <ide> raise optuna.TrialPruned() <ide> elif self.hp_search_backend == HPSearchBackend.RAY: <del> if self.global_step % self.args.save_steps == 0: <add> if self.state.global_step % self.args.save_steps == 0: <ide> self._tune_save_checkpoint() <ide> tune.report(objective=self.objective, **metrics) <ide> <ide> def _tune_save_checkpoint(self): <ide> if not self.use_tune_checkpoints: <ide> return <del> with tune.checkpoint_dir(step=self.global_step) as checkpoint_dir: <add> with tune.checkpoint_dir(step=self.state.global_step) as checkpoint_dir: <ide> self.args.output_dir = checkpoint_dir <del> output_dir = os.path.join(self.args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{self.global_step}") <add> output_dir = os.path.join(self.args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}") <ide> self.save_model(output_dir) <ide> if self.is_world_master(): <ide> torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt")) <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> num_update_steps_per_epoch = 
len(train_dataloader) // self.args.gradient_accumulation_steps <ide> num_update_steps_per_epoch = max(num_update_steps_per_epoch, 1) <ide> if self.args.max_steps > 0: <del> t_total = self.args.max_steps <add> max_steps = self.args.max_steps <ide> num_train_epochs = self.args.max_steps // num_update_steps_per_epoch + int( <ide> self.args.max_steps % num_update_steps_per_epoch > 0 <ide> ) <ide> else: <del> t_total = int(num_update_steps_per_epoch * self.args.num_train_epochs) <add> max_steps = int(num_update_steps_per_epoch * self.args.num_train_epochs) <ide> num_train_epochs = self.args.num_train_epochs <del> self.args.max_steps = t_total <add> num_train_epochs = int(np.ceil(num_train_epochs)) <ide> <del> self.create_optimizer_and_scheduler(num_training_steps=t_total) <add> self.create_optimizer_and_scheduler(num_training_steps=max_steps) <ide> self.state = TrainerState() <ide> <ide> # Check if saved optimizer or scheduler states exist <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> self.lr_scheduler.load_state_dict(torch.load(os.path.join(model_path, "scheduler.pt"))) <ide> reissue_pt_warnings(caught_warnings) <ide> <del> # Check if a saved Trainer state exist <del> if model_path is not None and os.path.isfile(os.path.join(model_path, "trainer_state.json")): <del> self.state = TrainerState.load_from_json(os.path.join(model_path, "trainer_state.json")) <del> <add> # Mixed precision training with apex (torch < 1.6) <ide> model = self.model <ide> if self.args.fp16 and _use_apex: <ide> if not is_apex_available(): <ide> raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.") <ide> model, self.optimizer = amp.initialize(model, self.optimizer, opt_level=self.args.fp16_opt_level) <ide> <del> # multi-gpu training (should be after apex fp16 initialization) <add> # Multi-gpu training (should be after apex fp16 initialization) <ide> if self.args.n_gpu > 1: <ide> model = 
torch.nn.DataParallel(model) <ide> <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> logger.info(" Instantaneous batch size per device = %d", self.args.per_device_train_batch_size) <ide> logger.info(" Total train batch size (w. parallel, distributed & accumulation) = %d", total_train_batch_size) <ide> logger.info(" Gradient Accumulation steps = %d", self.args.gradient_accumulation_steps) <del> logger.info(" Total optimization steps = %d", t_total) <add> logger.info(" Total optimization steps = %d", max_steps) <ide> <del> self.global_step = 0 <del> self.epoch = 0 <add> self.state.epoch = 0 <ide> epochs_trained = 0 <ide> steps_trained_in_current_epoch = 0 <ide> <ide> # Check if continuing training from a checkpoint <del> if model_path is not None: <del> # set global_step to global_step of last saved checkpoint from model path <del> try: <del> self.global_step = int(model_path.split("-")[-1].split(os.path.sep)[0]) <del> <del> epochs_trained = self.global_step // num_update_steps_per_epoch <del> steps_trained_in_current_epoch = self.global_step % (num_update_steps_per_epoch) <del> <del> logger.info(" Continuing training from checkpoint, will skip to saved global_step") <del> logger.info(" Continuing training from epoch %d", epochs_trained) <del> logger.info(" Continuing training from global step %d", self.global_step) <del> logger.info(" Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch) <del> except ValueError: <del> self.global_step = 0 <del> logger.info(" Starting fine-tuning.") <add> if model_path and os.path.isfile(os.path.join(model_path, "trainer_state.json")): <add> self.state = TrainerState.load_from_json(os.path.join(model_path, "trainer_state.json")) <add> epochs_trained = self.state.global_step // num_update_steps_per_epoch <add> steps_trained_in_current_epoch = self.state.global_step % (num_update_steps_per_epoch) <add> <add> logger.info(" Continuing training from checkpoint, will 
skip to saved global_step") <add> logger.info(" Continuing training from epoch %d", epochs_trained) <add> logger.info(" Continuing training from global step %d", self.state.global_step) <add> logger.info(" Will skip the first %d steps in the first epoch", steps_trained_in_current_epoch) <add> <add> # This should be the same if the state has been saved but in case the training arguments changed, it's safer <add> # to set this after the load. <add> self.state.max_steps = max_steps <add> self.state.num_train_epochs = num_train_epochs <ide> <ide> tr_loss = torch.tensor(0.0).to(self.args.device) <del> self.total_flos = self.state.total_flos <add> self._total_flos = self.state.total_flos <ide> logging_loss_scalar = 0.0 <ide> model.zero_grad() <ide> disable_tqdm = self.args.disable_tqdm or not self.is_local_process_zero() <del> train_pbar = trange(epochs_trained, int(np.ceil(num_train_epochs)), desc="Epoch", disable=disable_tqdm) <del> for epoch in range(epochs_trained, int(np.ceil(num_train_epochs))): <add> train_pbar = trange(epochs_trained, num_train_epochs, desc="Epoch", disable=disable_tqdm) <add> for epoch in range(epochs_trained, num_train_epochs): <ide> if isinstance(train_dataloader, DataLoader) and isinstance(train_dataloader.sampler, DistributedSampler): <ide> train_dataloader.sampler.set_epoch(epoch) <ide> <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> continue <ide> <ide> tr_loss += self.training_step(model, inputs) <del> self.total_flos += self.floating_point_ops(inputs) <add> self._total_flos += self.floating_point_ops(inputs) <ide> <ide> if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( <ide> # last step in epoch but step is always smaller than gradient_accumulation_steps <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> <ide> self.lr_scheduler.step() <ide> model.zero_grad() <del> self.global_step += 1 <del> self.epoch = epoch + (step + 1) / 
len(epoch_iterator) <add> self.state.global_step += 1 <add> self.state.epoch = epoch + (step + 1) / len(epoch_iterator) <ide> <del> if (self.args.logging_steps > 0 and self.global_step % self.args.logging_steps == 0) or ( <del> self.global_step == 1 and self.args.logging_first_step <add> if (self.args.logging_steps > 0 and self.state.global_step % self.args.logging_steps == 0) or ( <add> self.state.global_step == 1 and self.args.logging_first_step <ide> ): <ide> logs: Dict[str, float] = {} <ide> tr_loss_scalar = tr_loss.item() <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> <ide> if ( <ide> self.args.evaluation_strategy == EvaluationStrategy.STEPS <del> and self.global_step % self.args.eval_steps == 0 <add> and self.state.global_step % self.args.eval_steps == 0 <ide> ): <ide> metrics = self.evaluate() <ide> self._report_to_hp_search(trial, epoch, metrics) <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> if ( <ide> not self.args.load_best_model_at_end <ide> and self.args.save_steps > 0 <del> and self.global_step % self.args.save_steps == 0 <add> and self.state.global_step % self.args.save_steps == 0 <ide> ): <ide> self._save_training(model, trial) <ide> <ide> epoch_pbar.update(1) <del> if self.args.max_steps > 0 and self.global_step >= self.args.max_steps: <add> if self.state.global_step >= max_steps: <ide> break <ide> epoch_pbar.close() <ide> train_pbar.update(1) <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> "You enabled PyTorch/XLA debug metrics but you don't have a TPU " <ide> "configured. Check your training configuration if this is unexpected." 
<ide> ) <del> if self.args.max_steps > 0 and self.global_step >= self.args.max_steps: <add> if self.state.global_step >= max_steps: <ide> break <ide> <ide> train_pbar.close() <ide> def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D <ide> state_dict = torch.load(os.path.join(self.state.best_model_checkpoint, WEIGHTS_NAME)) <ide> self.model.load_state_dict(state_dict) <ide> <del> return TrainOutput(self.global_step, tr_loss.item() / self.global_step) <add> return TrainOutput(self.state.global_step, tr_loss.item() / self.state.global_step) <ide> <ide> def _save_training(self, model, trial, metrics=None): <ide> # In all cases (even distributed/parallel), self.model is always a reference <ide> def _save_training(self, model, trial, metrics=None): <ide> else: <ide> assert model is self.model, f"Model {model} should be a reference to self.model" <ide> # Save model checkpoint <del> checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{self.global_step}" <add> checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}" <ide> if self.hp_search_backend is not None and trial is not None: <ide> run_id = trial.number if self.hp_search_backend == HPSearchBackend.OPTUNA else tune.get_trial_id() <ide> checkpoint_folder += f"-run-{run_id}" <ide> def log(self, logs: Dict[str, float], iterator: Optional[tqdm] = None) -> None: <ide> ) <ide> return self._log(logs, iterator=iterator) <ide> <del> if self.epoch is not None: <del> logs["epoch"] = self.epoch <del> if self.total_flos is not None: <del> if self.args.local_rank != -1: <del> total_flos = distributed_broadcast_scalars([self.total_flos]).sum().item() <del> else: <del> total_flos = self.total_flos <del> if total_flos > 0: <del> logs["total_flos"] = total_flos <del> if self.global_step is None: <del> # when logging evaluation metrics without training <del> self.global_step = 0 <add> if self.state.epoch is not None: <add> logs["epoch"] = self.state.epoch <add> if self._total_flos is not None: <add> 
self.store_flos() <add> logs["total_flos"] = self.state.total_flos <ide> if self.tb_writer: <ide> for k, v in logs.items(): <ide> if isinstance(v, (int, float)): <del> self.tb_writer.add_scalar(k, v, self.global_step) <add> self.tb_writer.add_scalar(k, v, self.state.global_step) <ide> else: <ide> logger.warning( <ide> "Trainer is attempting to log a value of " <ide> def log(self, logs: Dict[str, float], iterator: Optional[tqdm] = None) -> None: <ide> self.tb_writer.flush() <ide> if is_wandb_available(): <ide> if self.is_world_process_zero(): <del> wandb.log(logs, step=self.global_step) <add> wandb.log(logs, step=self.state.global_step) <ide> if is_comet_available(): <ide> if self.is_world_process_zero(): <ide> experiment = comet_ml.config.get_global_experiment() <ide> if experiment is not None: <del> experiment._log_metrics(logs, step=self.global_step, epoch=self.epoch, framework="transformers") <del> output = {**logs, **{"step": self.global_step}} <del> if self.is_world_process_zero(): <del> self.log_history.append(output) <add> experiment._log_metrics( <add> logs, step=self.state.global_step, epoch=self.state.epoch, framework="transformers" <add> ) <add> output = {**logs, **{"step": self.state.global_step}} <add> self.state.log_history.append(output) <ide> if iterator is not None: <ide> iterator.write(output) <ide> else: <ide> def _save_tpu(self, output_dir: Optional[str] = None): <ide> if xm.is_master_ordinal(): <ide> os.makedirs(output_dir, exist_ok=True) <ide> torch.save(self.args, os.path.join(output_dir, "training_args.bin")) <del> json.dump( <del> self.log_history, open(os.path.join(output_dir, "log_history.json"), "w"), indent=2, ensure_ascii=False <del> ) <ide> <ide> # Save a trained model and configuration using `save_pretrained()`. 
<ide> # They can then be reloaded using `from_pretrained()` <ide> def _save(self, output_dir: Optional[str] = None): <ide> <ide> # Good practice: save your training arguments together with the trained model <ide> torch.save(self.args, os.path.join(output_dir, "training_args.bin")) <del> json.dump( <del> self.log_history, open(os.path.join(output_dir, "log_history.json"), "w"), indent=2, ensure_ascii=False <del> ) <ide> <ide> def store_flos(self): <ide> # Storing the number of floating-point operations that went into the model <del> if self.total_flos is not None: <add> if self._total_flos is not None: <ide> if self.args.local_rank != -1: <del> self.state.total_flos = distributed_broadcast_scalars([self.total_flos]).sum().item() <add> self.state.total_flos = distributed_broadcast_scalars([self._total_flos]).sum().item() <ide> else: <del> self.state.total_flos = self.total_flos <add> self.state.total_flos = self._total_flos <ide> <ide> def _sorted_checkpoints(self, checkpoint_prefix=PREFIX_CHECKPOINT_DIR, use_mtime=False) -> List[str]: <ide> ordering_and_checkpoint_path = [] <ide> def prediction_step( <ide> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: <ide> A tuple with the loss, logits and labels (each being optional). 
<ide> """ <del> has_labels = all(inputs.get(k) is not None for k in self.args.label_names) <add> has_labels = all(inputs.get(k) is not None for k in self.label_names) <ide> inputs = self._prepare_inputs(inputs) <ide> <ide> with torch.no_grad(): <ide> def prediction_step( <ide> logits = logits[0] <ide> <ide> if has_labels: <del> labels = tuple(inputs.get(name).detach() for name in self.args.label_names) <add> labels = tuple(inputs.get(name).detach() for name in self.label_names) <ide> if len(labels) == 1: <ide> labels = labels[0] <ide> else: <ide><path>src/transformers/trainer_utils.py <ide> def distributed_broadcast_scalars( <ide> @dataclass <ide> class TrainerState: <ide> """ <del> A class containing the `Trainer` fields that will be saved along the model and optimizer. <add> A class containing the `Trainer` inner state that will be saved along the model and optimizer. <add> <add> .. note:: <add> <add> In all this class, one step is to be understood as one update step. When using gradient accumulation, one <add> update step may require several forward and backward passes: if you use :obj:`gradient_accumulation_steps=n`, <add> then one update step requires going through `n` batches. <add> <add> Args: <add> epoch (:obj:`float`, `optional`): <add> Only set during training, will represent the epoch the training is at (the decimal part being the <add> percentage of the current epoch completed). <add> global_step (:obj:`int`, `optional`, defaults to 0): <add> During training, represents the number of update steps completed. <add> max_steps (:obj:`int`, `optional`, defaults to 0): <add> The number of update steps to do during the current training. <add> total_flos (:obj:`int`, `optional`, defaults to 0): <add> The total number of floating operations done by the model since the beginning of training. <add> log_history (:obj:`List[Dict[str, float]]`, `optional`): <add> The list of logs done since the beginning of training. 
<add> best_metric (:obj:`float`, `optional`): <add> When tracking the best model, the value of the best metric encountered so far. <add> best_model_checkpoint (:obj:`str`, `optional`): <add> When tracking the best model, the value of the name of the checkpoint for the best model encountered so <add> far. <ide> """ <ide> <add> epoch: Optional[float] = None <add> global_step: int = 0 <add> max_steps: int = 0 <add> num_train_epochs: int = 0 <ide> total_flos: int = 0 <add> log_history: List[Dict[str, float]] = None <ide> best_metric: Optional[float] = None <ide> best_model_checkpoint: Optional[str] = None <ide> <add> def __post_init__(self): <add> if self.log_history is None: <add> self.log_history = [] <add> <ide> def save_to_json(self, json_path: str): <ide> """ Save the content of this instance in JSON format inside :obj:`json_path`.""" <ide> json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n" <ide><path>tests/test_trainer.py <del>import json <add>import dataclasses <ide> import os <ide> import tempfile <ide> import unittest <ide> LineByLineTextDataset, <ide> PreTrainedModel, <ide> Trainer, <add> TrainerState, <ide> ) <ide> <ide> <ide> def check_trained_model(self, model, alternate_seed=False): <ide> self.assertTrue(torch.allclose(model.b, b)) <ide> <ide> def check_saved_checkpoints(self, output_dir, freq, total, is_pretrained=True): <del> file_list = [WEIGHTS_NAME, "training_args.bin", "log_history.json", "optimizer.pt", "scheduler.pt"] <add> file_list = [WEIGHTS_NAME, "training_args.bin", "optimizer.pt", "scheduler.pt", "trainer_state.json"] <ide> if is_pretrained: <ide> file_list.append("config.json") <ide> for step in range(freq, total, freq): <ide> def check_best_model_has_been_loaded( <ide> self, output_dir, freq, total, trainer, metric, greater_is_better=False, is_pretrained=True <ide> ): <ide> checkpoint = os.path.join(output_dir, f"checkpoint-{(total // freq) * freq}") <del> log_history = 
json.load(open(os.path.join(checkpoint, "log_history.json"))) <add> log_history = TrainerState.load_from_json(os.path.join(checkpoint, "trainer_state.json")).log_history <ide> <ide> values = [d[metric] for d in log_history] <ide> best_value = max(values) if greater_is_better else min(values) <ide> def check_best_model_has_been_loaded( <ide> metrics = trainer.evaluate() <ide> self.assertEqual(metrics[metric], best_value) <ide> <add> def test_training_arguments_are_left_untouched(self): <add> trainer = get_regression_trainer() <add> trainer.train() <add> args = TrainingArguments("./regression") <add> self.assertEqual(args.to_dict(), trainer.args.to_dict()) <add> <ide> def test_reproducible_training(self): <ide> # Checks that training worked, model trained and seed made a reproducible training. <ide> trainer = get_regression_trainer(learning_rate=0.1) <ide> def test_save_checkpoints(self): <ide> trainer.train() <ide> self.check_saved_checkpoints(tmpdir, 5, int(self.n_epochs * 64 / self.batch_size), False) <ide> <add> def test_can_resume_training(self): <add> if torch.cuda.device_count() > 2: <add> # This test will fail for more than 2 GPUs since the batch size will get bigger and with the number of <add> # save_steps, the checkpoint will resume training at epoch 2 or more (so the data seen by the model <add> # won't be the same since the training dataloader is shuffled). 
<add> return <add> with tempfile.TemporaryDirectory() as tmpdir: <add> trainer = get_regression_trainer(output_dir=tmpdir, train_len=128, save_steps=5, learning_rate=0.1) <add> trainer.train() <add> (a, b) = trainer.model.a.item(), trainer.model.b.item() <add> state = dataclasses.asdict(trainer.state) <add> <add> checkpoint = os.path.join(tmpdir, "checkpoint-5") <add> <add> # Reinitialize trainer and load model <add> model = RegressionPreTrainedModel.from_pretrained(checkpoint) <add> trainer = Trainer(model, trainer.args, train_dataset=trainer.train_dataset) <add> <add> trainer.train(model_path=checkpoint) <add> (a1, b1) = trainer.model.a.item(), trainer.model.b.item() <add> state1 = dataclasses.asdict(trainer.state) <add> self.assertEqual(a, a1) <add> self.assertEqual(b, b1) <add> self.assertEqual(state, state1) <add> <add> # With a regular model that is not a PreTrainedModel <add> with tempfile.TemporaryDirectory() as tmpdir: <add> trainer = get_regression_trainer( <add> output_dir=tmpdir, train_len=128, save_steps=5, learning_rate=0.1, pretrained=False <add> ) <add> trainer.train() <add> (a, b) = trainer.model.a.item(), trainer.model.b.item() <add> state = dataclasses.asdict(trainer.state) <add> <add> checkpoint = os.path.join(tmpdir, "checkpoint-5") <add> <add> # Reinitialize trainer and load model <add> model = RegressionModel() <add> state_dict = torch.load(os.path.join(checkpoint, WEIGHTS_NAME)) <add> model.load_state_dict(state_dict) <add> trainer = Trainer(model, trainer.args, train_dataset=trainer.train_dataset) <add> <add> trainer.train(model_path=checkpoint) <add> (a1, b1) = trainer.model.a.item(), trainer.model.b.item() <add> state1 = dataclasses.asdict(trainer.state) <add> self.assertEqual(a, a1) <add> self.assertEqual(b, b1) <add> self.assertEqual(state, state1) <add> <ide> def test_load_best_model_at_end(self): <ide> total = int(self.n_epochs * 64 / self.batch_size) <ide> with tempfile.TemporaryDirectory() as tmpdir:
4
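The patch above serializes the trainer's inner state through `dataclasses.asdict` and JSON, replacing the old `log_history.json` file. A minimal Python sketch of that round-trip, using a cut-down stand-in class (not the real `transformers.TrainerState`):

```python
import dataclasses
import json
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class TrainerState:
    """Cut-down stand-in for the TrainerState in the patch above."""

    epoch: Optional[float] = None
    global_step: int = 0
    max_steps: int = 0
    total_flos: int = 0
    log_history: List[Dict[str, float]] = None
    best_metric: Optional[float] = None

    def __post_init__(self):
        # Mirror the patch: a mutable default is filled in here.
        if self.log_history is None:
            self.log_history = []

    def save_to_json(self, json_path: str):
        # Sorted, indented JSON makes the checkpoint file diff-friendly.
        payload = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True)
        with open(json_path, "w") as f:
            f.write(payload + "\n")

    @classmethod
    def load_from_json(cls, json_path: str):
        with open(json_path) as f:
            return cls(**json.loads(f.read()))
```

Dataclass-generated equality makes the round-trip easy to verify, which is what the resume test above leans on when it compares `dataclasses.asdict(trainer.state)` before and after restarting from a checkpoint.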
Text
Text
add code of conduct
2418c198035bf49c21d917e1db04c562851bc17b
<ide><path>CODE_OF_CONDUCT.md <add># Contributor Code of Conduct <add> <add>As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. <add> <add>We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, age, or religion. <add> <add>Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct. <add> <add>Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team. <add> <add>Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers. <add> <add>This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.0.0, available at [http://contributor-covenant.org/version/1/0/0/](http://contributor-covenant.org/version/1/0/0/) <add>
1
PHP
PHP
add mixin support to eloquent builder
7cb1ff67b7cba591394760e6743710abe8959377
<ide><path>src/Illuminate/Database/Eloquent/Builder.php <ide> use Illuminate\Support\Arr; <ide> use Illuminate\Support\Str; <ide> use Illuminate\Support\Traits\ForwardsCalls; <add>use ReflectionClass; <add>use ReflectionMethod; <ide> <ide> /** <ide> * @property-read HigherOrderBuilderProxy $orWhere <ide> public static function __callStatic($method, $parameters) <ide> return; <ide> } <ide> <add> if ($method === 'mixin') { <add> $mixin = $parameters[0]; <add> $replace = $parameters[1] ?? true; <add> <add> $methods = (new ReflectionClass($mixin))->getMethods( <add> ReflectionMethod::IS_PUBLIC | ReflectionMethod::IS_PROTECTED <add> ); <add> <add> foreach ($methods as $method) { <add> if ($replace || ! static::hasMacro($method->name)) { <add> $method->setAccessible(true); <add> static::macro($method->name, $method->invoke($mixin)); <add> } <add> } <add> <add> return; <add> } <add> <ide> if (! static::hasGlobalMacro($method)) { <ide> static::throwBadMethodCallException($method); <ide> }
1
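The Laravel diff above registers every public and protected method of a mixin object as a macro, optionally skipping names that are already registered. The same reflection idea can be sketched in Python with a hypothetical `mixin` helper and a plain dict as the macro registry (illustrative, not Laravel's API):

```python
import inspect


def mixin(target_macros, mixin_obj, replace=True):
    """Copy an object's public methods into a macro registry,
    mirroring the ReflectionClass loop in the Laravel patch."""
    for name, method in inspect.getmembers(mixin_obj, predicate=inspect.ismethod):
        if name.startswith("_"):
            continue  # skip private/dunder names
        if replace or name not in target_macros:
            # As in Laravel, each mixin method *returns* the closure to register.
            target_macros[name] = method()


class WhereMixin:
    """Example mixin: each method returns the macro closure."""

    def where_active(self):
        return lambda query: query + " WHERE active = 1"
```

The `replace` flag corresponds to the optional second argument in the patch (`$parameters[1] ?? true`): with `replace=False`, existing macros win over the mixin's.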
Text
Text
fix spacing issue
eb06bb7288b6a5d9e79806c2e920099cf4a94638
<ide><path>guides/source/upgrading_ruby_on_rails.md <ide> this gem such as `whitelist_attributes` or `mass_assignment_sanitizer` options. <ide> ``` <ide> <ide> * Rails 4.0 has deprecated `ActiveRecord::Fixtures` in favor of `ActiveRecord::FixtureSet`. <add> <ide> * Rails 4.0 has deprecated `ActiveRecord::TestCase` in favor of `ActiveSupport::TestCase`. <ide> <ide> * Rails 4.0 has deprecated the old-style hash based finder API. This means that
1
Javascript
Javascript
flip tuple order of usetransition
a632f7de3bd35eaf6d5082054af4da92dd37cf20
<ide><path>packages/eslint-plugin-react-hooks/__tests__/ESLintRuleExhaustiveDeps-test.js <ide> const tests = { <ide> const [state4, dispatch2] = React.useReducer(); <ide> const [state5, maybeSetState] = useFunnyState(); <ide> const [state6, maybeDispatch] = useFunnyReducer(); <del> const [startTransition1] = useTransition(); <del> const [startTransition2, isPending2] = useTransition(); <del> const [startTransition3] = React.useTransition(); <del> const [startTransition4, isPending4] = React.useTransition(); <add> const [isPending1] = useTransition(); <add> const [isPending2, startTransition2] = useTransition(); <add> const [isPending3] = React.useTransition(); <add> const [isPending4, startTransition4] = React.useTransition(); <ide> const mySetState = useCallback(() => {}, []); <ide> let myDispatch = useCallback(() => {}, []); <ide> <ide><path>packages/eslint-plugin-react-hooks/src/ExhaustiveDeps.js <ide> export default { <ide> } <ide> } <ide> } else if (name === 'useTransition') { <del> if (id.type === 'ArrayPattern' && isArray(resolved.identifiers)) { <del> // Is first tuple value the same reference we're checking? <del> if (id.elements[0] === resolved.identifiers[0]) { <add> // Only consider second value in initializing tuple stable. <add> if ( <add> id.type === 'ArrayPattern' && <add> id.elements.length === 2 && <add> Array.isArray(resolved.identifiers) <add> ) { <add> // Is second tuple value the same reference we're checking? <add> if (id.elements[1] === resolved.identifiers[0]) { <ide> // Setter is stable. <ide> return true; <ide> } <ide><path>packages/react-debug-tools/src/ReactDebugHooks.js <ide> function useMutableSource<Source, Snapshot>( <ide> return value; <ide> } <ide> <del>function useTransition(): [(() => void) => void, boolean] { <add>function useTransition(): [boolean, (() => void) => void] { <ide> // useTransition() composes multiple hooks internally. 
<ide> // Advance the current hook index the same number of times <ide> // so that subsequent hooks have the right memoized state. <ide> function useTransition(): [(() => void) => void, boolean] { <ide> stackError: new Error(), <ide> value: undefined, <ide> }); <del> return [callback => {}, false]; <add> return [false, callback => {}]; <ide> } <ide> <ide> function useDeferredValue<T>(value: T): T { <ide><path>packages/react-devtools-shared/src/devtools/views/Components/InspectedElementErrorsAndWarningsTree.js <ide> export default function InspectedElementErrorsAndWarningsTree({ <ide> const refresh = useCacheRefresh(); <ide> <ide> const [ <del> startClearErrorsTransition, <ide> isErrorsTransitionPending, <add> startClearErrorsTransition, <ide> ] = useTransition(); <ide> const clearErrorsForInspectedElement = () => { <ide> const {id} = inspectedElement; <ide> export default function InspectedElementErrorsAndWarningsTree({ <ide> }; <ide> <ide> const [ <del> startClearWarningsTransition, <ide> isWarningsTransitionPending, <add> startClearWarningsTransition, <ide> ] = useTransition(); <ide> const clearWarningsForInspectedElement = () => { <ide> const {id} = inspectedElement; <ide><path>packages/react-devtools-shared/src/devtools/views/Components/KeyValue.js <ide> export default function KeyValue({ <ide> isReadOnly = value[meta.readonly]; <ide> } <ide> <del> const [startInspectPathsTransition, isInspectPathsPending] = useTransition(); <add> const [isInspectPathsPending, startInspectPathsTransition] = useTransition(); <ide> const toggleIsOpen = () => { <ide> if (isOpen) { <ide> setIsOpen(false); <ide><path>packages/react-dom/src/server/ReactPartialRendererHooks.js <ide> function useDeferredValue<T>(value: T): T { <ide> return value; <ide> } <ide> <del>function useTransition(): [(callback: () => void) => void, boolean] { <add>function useTransition(): [boolean, (callback: () => void) => void] { <ide> resolveCurrentlyRenderingComponent(); <ide> const startTransition = 
callback => { <ide> callback(); <ide> }; <del> return [startTransition, false]; <add> return [false, startTransition]; <ide> } <ide> <ide> function useOpaqueIdentifier(): OpaqueIDType { <ide><path>packages/react-reconciler/src/ReactFiberHooks.new.js <ide> function startTransition(setPending, callback) { <ide> } <ide> } <ide> <del>function mountTransition(): [(() => void) => void, boolean] { <add>function mountTransition(): [boolean, (() => void) => void] { <ide> const [isPending, setPending] = mountState(false); <ide> // The `start` method never changes. <ide> const start = startTransition.bind(null, setPending); <ide> const hook = mountWorkInProgressHook(); <ide> hook.memoizedState = start; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <del>function updateTransition(): [(() => void) => void, boolean] { <add>function updateTransition(): [boolean, (() => void) => void] { <ide> const [isPending] = updateState(false); <ide> const hook = updateWorkInProgressHook(); <ide> const start = hook.memoizedState; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <del>function rerenderTransition(): [(() => void) => void, boolean] { <add>function rerenderTransition(): [boolean, (() => void) => void] { <ide> const [isPending] = rerenderState(false); <ide> const hook = updateWorkInProgressHook(); <ide> const start = hook.memoizedState; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <ide> let isUpdatingOpaqueValueInRenderPhase = false; <ide> if (__DEV__) { <ide> mountHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> mountHookTypesDev(); <ide> return mountTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, 
boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return mountTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return updateDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return updateTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return rerenderDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return rerenderTransition(); <ide> if (__DEV__) { <ide> mountHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> mountHookTypesDev(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return updateDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> updateHookTypesDev(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return rerenderDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> updateHookTypesDev(); <ide><path>packages/react-reconciler/src/ReactFiberHooks.old.js <ide> function startTransition(setPending, callback) { <ide> } <ide> } <ide> <del>function mountTransition(): [(() => void) => void, boolean] { <add>function 
mountTransition(): [boolean, (() => void) => void] { <ide> const [isPending, setPending] = mountState(false); <ide> // The `start` method never changes. <ide> const start = startTransition.bind(null, setPending); <ide> const hook = mountWorkInProgressHook(); <ide> hook.memoizedState = start; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <del>function updateTransition(): [(() => void) => void, boolean] { <add>function updateTransition(): [boolean, (() => void) => void] { <ide> const [isPending] = updateState(false); <ide> const hook = updateWorkInProgressHook(); <ide> const start = hook.memoizedState; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <del>function rerenderTransition(): [(() => void) => void, boolean] { <add>function rerenderTransition(): [boolean, (() => void) => void] { <ide> const [isPending] = rerenderState(false); <ide> const hook = updateWorkInProgressHook(); <ide> const start = hook.memoizedState; <del> return [start, isPending]; <add> return [isPending, start]; <ide> } <ide> <ide> let isUpdatingOpaqueValueInRenderPhase = false; <ide> if (__DEV__) { <ide> mountHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> mountHookTypesDev(); <ide> return mountTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return mountTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return updateDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> 
currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return updateTransition(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return rerenderDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> updateHookTypesDev(); <ide> return rerenderTransition(); <ide> if (__DEV__) { <ide> mountHookTypesDev(); <ide> return mountDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> mountHookTypesDev(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return updateDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> updateHookTypesDev(); <ide> if (__DEV__) { <ide> updateHookTypesDev(); <ide> return rerenderDeferredValue(value); <ide> }, <del> useTransition(): [(() => void) => void, boolean] { <add> useTransition(): [boolean, (() => void) => void] { <ide> currentHookNameInDev = 'useTransition'; <ide> warnInvalidHookAccess(); <ide> updateHookTypesDev(); <ide><path>packages/react-reconciler/src/ReactInternalTypes.js <ide> export type Dispatcher = {| <ide> ): void, <ide> useDebugValue<T>(value: T, formatterFn: ?(value: T) => mixed): void, <ide> useDeferredValue<T>(value: T): T, <del> useTransition(): [(() => void) => void, boolean], <add> useTransition(): [boolean, (() => void) => void], <ide> useMutableSource<Source, Snapshot>( <ide> source: MutableSource<Source>, <ide> getSnapshot: MutableSourceGetSnapshotFn<Source, Snapshot>, <ide><path>packages/react-reconciler/src/__tests__/ReactHooks-test.internal.js <ide> describe('ReactHooks', () => { <ide> ]; <ide> <ide> 
if (__EXPERIMENTAL__) { <del> const useTransitionHelper = () => React.useTransition({timeoutMs: 1000}); <add> const useTransitionHelper = () => React.useTransition(); <ide> const useDeferredValueHelper = () => <ide> React.useDeferredValue(0, {timeoutMs: 1000}); <ide> <ide><path>packages/react-reconciler/src/__tests__/ReactHooksWithNoopRenderer-test.js <ide> describe('ReactHooksWithNoopRenderer', () => { <ide> // TODO: This should probably warn <ide> // @gate experimental <ide> it('calling startTransition inside render phase', async () => { <del> let startTransition; <ide> function App() { <ide> const [counter, setCounter] = useState(0); <del> const [_startTransition] = useTransition(); <del> startTransition = _startTransition; <ide> <ide> if (counter === 0) { <del> startTransition(() => { <add> React.unstable_startTransition(() => { <ide> setCounter(c => c + 1); <ide> }); <ide> } <ide> describe('ReactHooksWithNoopRenderer', () => { <ide> let transition; <ide> function App() { <ide> const [show, setShow] = useState(false); <del> const [startTransition, isPending] = useTransition({ <del> timeoutMs: 1000, <del> }); <add> const [isPending, startTransition] = useTransition(); <ide> transition = () => { <ide> startTransition(() => { <ide> setShow(true); <ide><path>packages/react-reconciler/src/__tests__/ReactSuspenseWithNoopRenderer-test.js <ide> describe('ReactSuspenseWithNoopRenderer', () => { <ide> let startTransition; <ide> function B() { <ide> const [textB, _setTextB] = useState('B'); <del> const [_startTransition] = useTransition({timeoutMs: 10000}); <add> // eslint-disable-next-line no-unused-vars <add> const [_, _startTransition] = useTransition(); <ide> startTransition = _startTransition; <ide> setTextB = _setTextB; <ide> return ( <ide> describe('ReactSuspenseWithNoopRenderer', () => { <ide> let setTextWithLongTransition; <ide> <ide> function App() { <del> const [startShortTransition, isPending1] = React.unstable_useTransition({ <del> timeoutMs: 5000, <del> }); 
<del> const [startLongTransition, isPending2] = React.unstable_useTransition({ <del> timeoutMs: 30000, <del> }); <add> const [isPending1, startShortTransition] = React.unstable_useTransition(); <add> const [isPending2, startLongTransition] = React.unstable_useTransition(); <ide> const isPending = isPending1 || isPending2; <ide> const [text, setText] = React.useState(''); <ide> const [mirror, setMirror] = React.useState(''); <ide><path>packages/react-reconciler/src/__tests__/ReactTransition-test.js <ide> describe('ReactTransition', () => { <ide> let start; <ide> function App() { <ide> const [show, setShow] = useState(false); <del> const [_start, isPending] = useTransition(); <add> const [isPending, _start] = useTransition(); <ide> start = () => _start(() => setShow(true)); <ide> return ( <ide> <Suspense fallback={<Text text="Loading..." />}> <ide> describe('ReactTransition', () => { <ide> async () => { <ide> let update; <ide> function App() { <del> const [startContentChange, isContentPending] = useTransition(); <add> const [isContentPending, startContentChange] = useTransition(); <ide> const [label, setLabel] = useState('A'); <ide> const [contents, setContents] = useState('A'); <ide> update = value => { <ide><path>packages/react-server/src/ReactFizzHooks.js <ide> function unsupportedStartTransition() { <ide> invariant(false, 'startTransition cannot be called during server rendering.'); <ide> } <ide> <del>function useTransition(): [(callback: () => void) => void, boolean] { <add>function useTransition(): [boolean, (callback: () => void) => void] { <ide> resolveCurrentlyRenderingComponent(); <del> return [unsupportedStartTransition, false]; <add> return [false, unsupportedStartTransition]; <ide> } <ide> <ide> function useOpaqueIdentifier(): OpaqueIDType { <ide><path>packages/react/src/ReactHooks.js <ide> export function useDebugValue<T>( <ide> <ide> export const emptyObject = {}; <ide> <del>export function useTransition(): [(() => void) => void, boolean] { <add>export 
function useTransition(): [boolean, (() => void) => void] { <ide> const dispatcher = resolveDispatcher(); <ide> return dispatcher.useTransition(); <ide> }
15
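The React diff above flips `useTransition`'s return tuple from `[startTransition, isPending]` to `[isPending, startTransition]`, so the boolean comes first when destructuring. A toy Python model of the new ordering (purely illustrative; React's real hook schedules the state change through its reconciler):

```python
def use_transition():
    """Toy model of the flipped tuple: (is_pending, start_transition)."""
    pending = {"value": False}

    def start_transition(callback):
        # In React, setPending(true) / setPending(false) bracket the callback.
        pending["value"] = True
        callback()
        pending["value"] = False

    # Boolean first, starter second -- the order the patch introduces.
    return pending["value"], start_transition
```

This mirrors the updated call sites in the patch, e.g. `const [isPending, startTransition] = useTransition();`.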
Text
Text
clarify guidelines for nglocale changes
8dc08fb1d793cf8ba3072d21d18e37545d28f452
<ide><path>CONTRIBUTING.md <ide> If you find a bug in the source code or a mistake in the documentation, you can <ide> submitting an issue to our [GitHub Repository][github]. Even better you can submit a Pull Request <ide> with a fix. <ide> <del>***Localization Issue:*** *Angular.js uses the [Google Closure I18N library], to generate its own I18N files. This means that <del>any changes to these files would be lost the next time that we import the library. The recommended <del>approach is to submit a patch to the I18N project directly, instead of submitting it here.* <add>**Localization Issues:** Angular.js uses the [Google Closure I18N library] to generate <add>its own I18N files (the ngLocale module). This means that any changes to these files would be lost <add>the next time that we import the library. <add>Since the Closure library i18n data is itself auto-generated from the data of the <add>[Common Locale Data Repository (CLDR)] project, errors in the data should <add>be reported there. See also the [Closure guide to i18n changes]. <ide> <ide> **Please see the Submission Guidelines below**. <ide> <ide> You can find out more detailed information about contributing in the <ide> [plunker]: http://plnkr.co/edit <ide> [stackoverflow]: http://stackoverflow.com/questions/tagged/angularjs <ide> [unit-testing]: https://docs.angularjs.org/guide/unit-testing <add>[Common Locale Data Repository (CLDR)]: http://cldr.unicode.org <add>[Closure guide to i18n changes]: https://github.com/google/closure-library/wiki/Internationalization-%28i18n%29-changes-in-Closure-Library <ide> <ide> [![Analytics](https://ga-beacon.appspot.com/UA-8594346-11/angular.js/CONTRIBUTING.md?pixel)](https://github.com/igrigorik/ga-beacon)
1
Ruby
Ruby
remove detail initialization metaprogramming
737b718ea0c88d3f6c139343f957f386e06fffc9
<ide><path>actionview/lib/action_view/lookup_context.rb <ide> class LookupContext #:nodoc: <ide> <ide> def self.register_detail(name, &block) <ide> self.registered_details << name <del> initialize = registered_details.map { |n| "@details[:#{n}] = details[:#{n}] || default_#{n}" } <add> Accessors::DEFAULT_PROCS[name] = block <ide> <ide> Accessors.send :define_method, :"default_#{name}", &block <ide> Accessors.module_eval <<-METHOD, __FILE__, __LINE__ + 1 <ide> def #{name}=(value) <ide> value = value.present? ? Array(value) : default_#{name} <ide> _set_detail(:#{name}, value) if value != @details[:#{name}] <ide> end <del> <del> remove_possible_method :initialize_details <del> def initialize_details(details) <del> #{initialize.join("\n")} <del> end <ide> METHOD <ide> end <ide> <ide> # Holds accessors for the registered details. <ide> module Accessors #:nodoc: <add> DEFAULT_PROCS = {} <ide> end <ide> <ide> register_detail(:locale) do <ide> def normalize_name(name, prefixes) #:nodoc: <ide> include ViewPaths <ide> <ide> def initialize(view_paths, details = {}, prefixes = []) <del> @details, @details_key = {}, nil <add> @details_key = nil <ide> @cache = true <ide> @prefixes = prefixes <ide> @rendered_format = nil <ide> <add> @details = initialize_details({}, details) <ide> self.view_paths = view_paths <del> initialize_details(details) <ide> end <ide> <add> def initialize_details(target, details) <add> registered_details.each do |k| <add> target[k] = details[k] || Accessors::DEFAULT_PROCS[k].call <add> end <add> target <add> end <add> private :initialize_details <add> <ide> # Override formats= to expand ["*/*"] values and automatically <ide> # add :html as fallback to :js. <ide> def formats=(values)
1
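The Rails refactor above replaces string-generated `initialize_details` code with a plain dictionary of default procs that the initializer walks. The same registry pattern can be sketched in Python (names are illustrative):

```python
class LookupContext:
    """Sketch of register_detail backed by a dict of default procs
    instead of metaprogrammed initializer code."""

    default_procs = {}
    registered_details = []

    @classmethod
    def register_detail(cls, name, default_proc):
        cls.registered_details.append(name)
        cls.default_procs[name] = default_proc

    def __init__(self, details=None):
        details = details or {}
        # Fill each registered detail from the input or its default proc,
        # like the initialize_details loop in the patch.
        self.details = {
            name: details.get(name) or self.default_procs[name]()
            for name in self.registered_details
        }


# Registration happens once, at class-definition time.
LookupContext.register_detail("locale", lambda: ["en"])
LookupContext.register_detail("formats", lambda: ["html"])
```

The win is the same as in the Ruby change: no `module_eval`-generated initializer to keep in sync, just data that the constructor iterates.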
Go
Go
remove byhash hack
6128dcea4a9bbe808baba4e18c9c4fee3a265532
<ide><path>graphdriver/devmapper/deviceset.go <ide> func (devices *DeviceSet) deactivatePool() error { <ide> func (devices *DeviceSet) deactivateDevice(hash string) error { <ide> utils.Debugf("[devmapper] deactivateDevice(%s)", hash) <ide> defer utils.Debugf("[devmapper] deactivateDevice END") <del> var devname string <del> // FIXME: shouldn't we just register the pool into devices? <del> devname, err := devices.byHash(hash) <del> if err != nil { <del> return err <add> <add> info := devices.Devices[hash] <add> if info == nil { <add> return fmt.Errorf("Unknown device %s", hash) <ide> } <del> devinfo, err := getInfo(devname) <add> devinfo, err := getInfo(info.Name()) <ide> if err != nil { <ide> utils.Debugf("\n--->Err: %s\n", err) <ide> return err <ide> } <ide> if devinfo.Exists != 0 { <del> if err := devices.removeDeviceAndWait(devname); err != nil { <add> if err := devices.removeDeviceAndWait(info.Name()); err != nil { <ide> utils.Debugf("\n--->Err: %s\n", err) <ide> return err <ide> } <ide> func (devices *DeviceSet) waitRemove(devname string) error { <ide> // a) the device registered at <device_set_prefix>-<hash> is closed, <ide> // or b) the 1 second timeout expires. 
<ide> func (devices *DeviceSet) waitClose(hash string) error { <del> devname, err := devices.byHash(hash) <del> if err != nil { <del> return err <add> info := devices.Devices[hash] <add> if info == nil { <add> return fmt.Errorf("Unknown device %s", hash) <ide> } <ide> i := 0 <ide> for ; i < 1000; i += 1 { <del> devinfo, err := getInfo(devname) <add> devinfo, err := getInfo(info.Name()) <ide> if err != nil { <ide> return err <ide> } <ide> if i%100 == 0 { <del> utils.Debugf("Waiting for unmount of %s: opencount=%d", devname, devinfo.OpenCount) <add> utils.Debugf("Waiting for unmount of %s: opencount=%d", hash, devinfo.OpenCount) <ide> } <ide> if devinfo.OpenCount == 0 { <ide> break <ide> } <ide> time.Sleep(1 * time.Millisecond) <ide> } <ide> if i == 1000 { <del> return fmt.Errorf("Timeout while waiting for device %s to close", devname) <add> return fmt.Errorf("Timeout while waiting for device %s to close", hash) <ide> } <ide> return nil <ide> } <ide> <del>// byHash is a hack to allow looking up the deviceset's pool by the hash "pool". <del>// FIXME: it seems probably cleaner to register the pool in devices.Devices, <del>// but I am afraid of arcane implications deep in the devicemapper code, <del>// so this will do. <del>func (devices *DeviceSet) byHash(hash string) (devname string, err error) { <del> if hash == "pool" { <del> return devices.getPoolDevName(), nil <del> } <del> info := devices.Devices[hash] <del> if info == nil { <del> return "", fmt.Errorf("hash %s doesn't exists", hash) <del> } <del> return info.Name(), nil <del>} <del> <ide> func (devices *DeviceSet) Shutdown() error { <ide> devices.Lock() <ide> defer devices.Unlock()
1
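The Docker change above drops the `byHash` indirection (which special-cased the string `"pool"`) in favor of looking the device up directly in the map and failing fast on an unknown hash. The lookup shape, sketched in Python (hypothetical structures, not the devmapper API):

```python
def device_name(devices, device_hash):
    """Resolve a device hash to its name via a direct map lookup,
    raising on unknown hashes -- the pattern the Go patch switches to."""
    info = devices.get(device_hash)
    if info is None:
        # Equivalent of: return fmt.Errorf("Unknown device %s", hash)
        raise KeyError("Unknown device %s" % device_hash)
    return info["name"]
```

With the hack gone, `"pool"` is no longer a magic hash: callers that need the pool use its own accessor, and every other hash must actually exist in the map.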
Javascript
Javascript
use cached peername to resolve remote fields
30666f22ca5041a991ab9e7311916ac218ed4c0f
<ide><path>lib/net.js <ide> function onread(nread, buffer) { <ide> <ide> <ide> Socket.prototype._getpeername = function() { <del> if (!this._handle || !this._handle.getpeername) { <del> return {}; <del> } <ide> if (!this._peername) { <add> if (!this._handle || !this._handle.getpeername) { <add> return {}; <add> } <ide> var out = {}; <ide> var err = this._handle.getpeername(out); <ide> if (err) return {}; // FIXME(bnoordhuis) Throw? <ide> Socket.prototype.connect = function(options, cb) { <ide> this._writableState.errorEmitted = false; <ide> this.destroyed = false; <ide> this._handle = null; <add> this._peername = null; <ide> } <ide> <ide> var self = this; <ide><path>test/parallel/test-net-remote-address-port.js <ide> var server = net.createServer(function(socket) { <ide> socket.on('end', function() { <ide> if (++conns_closed == 2) server.close(); <ide> }); <add> socket.on('close', function() { <add> assert.notEqual(-1, remoteAddrCandidates.indexOf(socket.remoteAddress)); <add> assert.notEqual(-1, remoteFamilyCandidates.indexOf(socket.remoteFamily)); <add> }); <ide> socket.resume(); <ide> }); <ide> <ide> server.listen(common.PORT, 'localhost', function() { <ide> assert.equal(common.PORT, client.remotePort); <ide> client.end(); <ide> }); <add> client.on('close', function() { <add> assert.notEqual(-1, remoteAddrCandidates.indexOf(client.remoteAddress)); <add> assert.notEqual(-1, remoteFamilyCandidates.indexOf(client.remoteFamily)); <add> }); <ide> client2.on('connect', function() { <ide> assert.notEqual(-1, remoteAddrCandidates.indexOf(client2.remoteAddress)); <ide> assert.notEqual(-1, remoteFamilyCandidates.indexOf(client2.remoteFamily)); <ide> assert.equal(common.PORT, client2.remotePort); <ide> client2.end(); <ide> }); <add> client2.on('close', function() { <add> assert.notEqual(-1, remoteAddrCandidates.indexOf(client2.remoteAddress)); <add> assert.notEqual(-1, remoteFamilyCandidates.indexOf(client2.remoteFamily)); <add> }); <ide> }); <ide> <ide> process.on('exit', 
function() {
2
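The Node patch above moves the `_handle` check inside the cache-miss branch and resets `_peername` on reconnect, so `remoteAddress`/`remoteFamily` stay readable from the cache even after the underlying handle is gone. That lazy-cache idea in a Python sketch (illustrative, not Node's `net` internals):

```python
class Socket:
    """Sketch of the patch: compute the peer name once, cache it,
    and keep serving the cache after the handle is closed."""

    def __init__(self, handle):
        self._handle = handle
        self._peername = None  # reset on (re)connect, as in the patch

    def _getpeername(self):
        if self._peername is None:
            # Only a cache miss needs a live handle.
            if self._handle is None:
                return {}
            self._peername = dict(self._handle)  # snapshot from the handle
        return self._peername

    def close(self):
        self._handle = None
```

This is exactly what the added `'close'` handlers in the test file exercise: the remote fields must still resolve from the cache once the handle has been torn down.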
Javascript
Javascript
add react-dom to dist/ in npm package
cdd80969748273b35e35fdcd066e09b691d2072f
<ide><path>grunt/tasks/npm-react.js <ide> var dist = dest + 'dist/'; <ide> var distFiles = [ <ide> 'react.js', 'react.min.js', 'JSXTransformer.js', <ide> 'react-with-addons.js', 'react-with-addons.min.js', <add> 'react-dom.js', 'react-dom.min.js', <ide> ]; <ide> <ide> function buildRelease() {
1
Python
Python
break some long lines in numpy/f2py/*.py
d676616cb5ddde980734e2b60569a69912983940
<ide><path>numpy/f2py/auxfuncs.py <ide> def debugcapi(var): <ide> <ide> <ide> def _isstring(var): <del> return 'typespec' in var and var['typespec'] == 'character' and (not isexternal(var)) <add> return 'typespec' in var and var['typespec'] == 'character' and \ <add> not isexternal(var) <ide> <ide> <ide> def isstring(var): <ide> def isstringarray(var): <ide> <ide> <ide> def isarrayofstrings(var): <del> # leaving out '*' for now so that <del> # `character*(*) a(m)` and `character a(m,*)` <del> # are treated differently. Luckily `character**` is illegal. <add> # leaving out '*' for now so that `character*(*) a(m)` and `character <add> # a(m,*)` are treated differently. Luckily `character**` is illegal. <ide> return isstringarray(var) and var['dimension'][-1] == '(*)' <ide> <ide> <ide> def isarray(var): <del> return 'dimension' in var and (not isexternal(var)) <add> return 'dimension' in var and not isexternal(var) <ide> <ide> <ide> def isscalar(var): <ide> return not (isarray(var) or isstring(var) or isexternal(var)) <ide> <ide> <ide> def iscomplex(var): <del> return isscalar(var) and var.get('typespec') in ['complex', 'double complex'] <add> return isscalar(var) and \ <add> var.get('typespec') in ['complex', 'double complex'] <ide> <ide> <ide> def islogical(var): <ide> def islong_complex(var): <ide> <ide> <ide> def iscomplexarray(var): <del> return isarray(var) and var.get('typespec') in ['complex', 'double complex'] <add> return isarray(var) and \ <add> var.get('typespec') in ['complex', 'double complex'] <ide> <ide> <ide> def isint1array(var): <ide> def ismoduleroutine(rout): <ide> <ide> <ide> def ismodule(rout): <del> return ('block' in rout and 'module' == rout['block']) <add> return 'block' in rout and 'module' == rout['block'] <ide> <ide> <ide> def isfunction(rout): <del> return ('block' in rout and 'function' == rout['block']) <del> <del># def isfunction_wrap(rout): <del># return wrapfuncs and (iscomplexfunction(rout) or isstringfunction(rout)) <del># and 
(not isexternal(rout)) <del> <add> return 'block' in rout and 'function' == rout['block'] <ide> <ide> def isfunction_wrap(rout): <ide> if isintent_c(rout): <ide> def isfunction_wrap(rout): <ide> <ide> <ide> def issubroutine(rout): <del> return ('block' in rout and 'subroutine' == rout['block']) <add> return 'block' in rout and 'subroutine' == rout['block'] <ide> <ide> <ide> def issubroutine_wrap(rout): <ide> def hasexternals(rout): <ide> <ide> <ide> def isthreadsafe(rout): <del> return 'f2pyenhancements' in rout and 'threadsafe' in rout['f2pyenhancements'] <add> return 'f2pyenhancements' in rout and \ <add> 'threadsafe' in rout['f2pyenhancements'] <ide> <ide> <ide> def hasvariables(rout): <ide> return 'vars' in rout and rout['vars'] <ide> <ide> <ide> def isoptional(var): <del> return ('attrspec' in var and 'optional' in var['attrspec'] and 'required' not in var['attrspec']) and isintent_nothide(var) <add> return ('attrspec' in var and 'optional' in var['attrspec'] and <add> 'required' not in var['attrspec']) and isintent_nothide(var) <ide> <ide> <ide> def isexternal(var): <del> return ('attrspec' in var and 'external' in var['attrspec']) <add> return 'attrspec' in var and 'external' in var['attrspec'] <ide> <ide> <ide> def isrequired(var): <ide> def isintent_in(var): <ide> <ide> <ide> def isintent_inout(var): <del> return 'intent' in var and ('inout' in var['intent'] or 'outin' in var['intent']) and 'in' not in var['intent'] and 'hide' not in var['intent'] and 'inplace' not in var['intent'] <add> return ('intent' in var and ('inout' in var['intent'] or <add> 'outin' in var['intent']) and 'in' not in var['intent'] and <add> 'hide' not in var['intent'] and 'inplace' not in var['intent']) <ide> <ide> <ide> def isintent_out(var): <ide> return 'out' in var.get('intent', []) <ide> <ide> <ide> def isintent_hide(var): <del> return ('intent' in var and ('hide' in var['intent'] or ('out' in var['intent'] and 'in' not in var['intent'] and (not l_or(isintent_inout, 
isintent_inplace)(var))))) <del> <add> return ('intent' in var and ('hide' in var['intent'] or <add> ('out' in var['intent'] and 'in' not in var['intent'] and <add> (not l_or(isintent_inout, isintent_inplace)(var))))) <ide> <ide> def isintent_nothide(var): <ide> return not isintent_hide(var) <ide> def getcallprotoargument(rout, cb_map={}): <ide> pass <ide> elif isstring(var): <ide> pass <del> #ctype = 'void*' <ide> else: <ide> ctype = ctype + '*' <ide> if isstring(var) or isarrayofstrings(var): <ide><path>numpy/f2py/capi_maps.py <ide> # c2buildvalue_map=??? <ide> pass <ide> <del>f2cmap_all = {'real': {'': 'float', '4': 'float', '8': 'double', '12': 'long_double', '16': 'long_double'}, <del> 'integer': {'': 'int', '1': 'signed_char', '2': 'short', '4': 'int', '8': 'long_long', <del> '-1': 'unsigned_char', '-2': 'unsigned_short', '-4': 'unsigned', <del> '-8': 'unsigned_long_long'}, <add>f2cmap_all = {'real': {'': 'float', '4': 'float', '8': 'double', <add> '12': 'long_double', '16': 'long_double'}, <add> 'integer': {'': 'int', '1': 'signed_char', '2': 'short', <add> '4': 'int', '8': 'long_long', <add> '-1': 'unsigned_char', '-2': 'unsigned_short', <add> '-4': 'unsigned', '-8': 'unsigned_long_long'}, <ide> 'complex': {'': 'complex_float', '8': 'complex_float', <ide> '16': 'complex_double', '24': 'complex_long_double', <ide> '32': 'complex_long_double'}, <ide> 'complexkind': {'': 'complex_float', '4': 'complex_float', <ide> '8': 'complex_double', '12': 'complex_long_double', <ide> '16': 'complex_long_double'}, <del> 'logical': {'': 'int', '1': 'char', '2': 'short', '4': 'int', '8': 'long_long'}, <add> 'logical': {'': 'int', '1': 'char', '2': 'short', '4': 'int', <add> '8': 'long_long'}, <ide> 'double complex': {'': 'complex_double'}, <ide> 'double precision': {'': 'double'}, <ide> 'byte': {'': 'char'}, <ide> <ide> if os.path.isfile('.f2py_f2cmap'): <ide> # User defined additions to f2cmap_all. <del> # .f2py_f2cmap must contain a dictionary of dictionaries, only. 
<del> # For example, {'real':{'low':'float'}} means that Fortran 'real(low)' is <del> # interpreted as C 'float'. <del> # This feature is useful for F90/95 users if they use PARAMETERSs <del> # in type specifications. <add> # .f2py_f2cmap must contain a dictionary of dictionaries, only. For <add> # example, {'real':{'low':'float'}} means that Fortran 'real(low)' is <add> # interpreted as C 'float'. This feature is useful for F90/95 users if <add> # they use PARAMETERSs in type specifications. <ide> try: <ide> outmess('Reading .f2py_f2cmap ...\n') <ide> f = open('.f2py_f2cmap', 'r') <ide> except Exception as msg: <ide> errmess( <ide> 'Failed to apply user defined changes from .f2py_f2cmap: %s. Skipping.\n' % (msg)) <add> <ide> cformat_map = {'double': '%g', <ide> 'float': '%g', <ide> 'long_double': '%Lg', <ide> def getstrlength(var): <ide> elif 'len' in a: <ide> len = a['len'] <ide> if re.match(r'\(\s*([*]|[:])\s*\)', len) or re.match(r'([*]|[:])', len): <del> # if len in ['(*)','*','(:)',':']: <ide> if isintent_hide(var): <ide> errmess('getstrlength:intent(hide): expected a string with defined length but got: %s\n' % ( <ide> repr(var))) <ide> def getarrdims(a, var, verbose=0): <ide> ret['rank'] = '0' <ide> ret['dims'] = '' <ide> elif isarray(var): <del> # if not isintent_c(var): <del> # var['dimension'].reverse() <ide> dim = copy.copy(var['dimension']) <ide> ret['size'] = '*'.join(dim) <ide> try: <ide> def sign2map(a, var): <ide> isexternal, 'callback', <ide> isintent_callback, 'callback', <ide> isintent_aux, 'auxiliary', <del> # ismutable,'mutable',l_not(ismutable),'immutable', <ide> ] <ide> rl = [] <ide> for i in range(0, len(il), 2): <ide> def sign2map(a, var): <ide> if isstring(var): <ide> rl.append('slen(%s)=%s' % (a, ret['length'])) <ide> if isarray(var): <del> # if not isintent_c(var): <del> # var['dimension'].reverse() <ide> ddim = ','.join( <ide> map(lambda x, y: '%s|%s' % (x, y), var['dimension'], dim)) <ide> rl.append('dims(%s)' % ddim) <del># if not 
isintent_c(var): <del># var['dimension'].reverse() <ide> if isexternal(var): <ide> ret['vardebuginfo'] = 'debug-capi:%s=>%s:%s' % ( <ide> a, ret['cbname'], ','.join(rl)) <ide> def routsign2map(rout): <ide> ln = k <ide> break <ide> lcb_map[ln] = un[1] <del> # else: <del> # errmess('routsign2map: cb_map does not contain module "%s" used in "use" statement.\n'%(u)) <ide> elif 'externals' in rout and rout['externals']: <ide> errmess('routsign2map: Confused: function %s has externals %s but no "use" statement.\n' % ( <ide> ret['name'], repr(rout['externals']))) <ide> def modsign2map(m): <ide> ret['restdoc'] = getrestdoc(m) or [] <ide> if hasnote(m): <ide> ret['note'] = m['note'] <del> #m['note']=['See elsewhere.'] <ide> ret['usercode'] = getusercode(m) or '' <ide> ret['usercode1'] = getusercode1(m) or '' <ide> if m['body']: <ide><path>numpy/f2py/cb_rules.py <ide> def buildcallback(rout, um): <ide> 'argname': rd['argname'] <ide> } <ide> outmess('\t %s\n' % (ar['docstrshort'])) <del> # print ar['body'] <ide> return <ide> ################## Build call-back function ############# <ide><path>numpy/f2py/crackfortran.py <ide> def analyzeline(m, case, line): <ide> groupcache[groupcounter - 2]['externals'].append(name) <ide> groupcache[groupcounter]['vars'] = copy.deepcopy( <ide> groupcache[groupcounter - 2]['vars']) <del> # try: del groupcache[groupcounter]['vars'][groupcache[groupcounter-2]['name']] <del> # except: pass <ide> try: <ide> del groupcache[groupcounter]['vars'][name][ <ide> groupcache[groupcounter]['vars'][name]['attrspec'].index('external')] <ide> def analyzeline(m, case, line): <ide> outmess('analyzeline: ignoring program arguments\n') <ide> continue <ide> if k not in groupcache[groupcounter]['args']: <del> #outmess('analyzeline: ignoring external %s (not in arguments list)\n'%(`k`)) <ide> continue <ide> if 'externals' not in groupcache[groupcounter]: <ide> groupcache[groupcounter]['externals'] = [] <ide> def analyzeline(m, case, line): <ide> outmess( <ide> 
'analyzeline: implied-DO list "%s" is not supported. Skipping.\n' % l[0]) <ide> continue <del> # if '(' in l[0]: <del> # outmess('analyzeline: ignoring this data statement.\n') <del> # continue <ide> i = 0 <ide> j = 0 <ide> llen = len(l[1]) <ide> def analyzeline(m, case, line): <ide> fc = not fc <ide> i = i + 1 <ide> i = i + 1 <del> # v,l[1][j:i-1]=name,initvalue <ide> if v not in vars: <ide> vars[v] = {} <ide> if '=' in vars[v] and not vars[v]['='] == l[1][j:i - 1]: <ide> def analyzeline(m, case, line): <ide> outmess('analyzeline: No context for multiline block.\n') <ide> return <ide> gc = groupcounter <del> #gc = previous_context[2] <ide> appendmultiline(groupcache[gc], <ide> previous_context[:2], <ide> m.group('this')) <ide> def postcrack(block, args=None, tab=''): <ide> block['body'] = analyzebody(block, args, tab=tab) <ide> <ide> userisdefined = [] <del>## fromuser = [] <ide> if 'use' in block: <ide> useblock = block['use'] <ide> for k in list(useblock.keys()): <ide> if '__user__' in k: <ide> userisdefined.append(k) <del># if 'map' in useblock[k]: <del># for n in useblock[k]['map'].itervalues(): <del>## if n not in fromuser: fromuser.append(n) <ide> else: <ide> useblock = {} <ide> name = '' <ide> def postcrack(block, args=None, tab=''): <ide> interface = {'block': 'interface', 'body': [], <ide> 'vars': {}, 'name': name + '_user_interface'} <ide> for e in block['externals']: <del> # if e in fromuser: <del> # outmess(' Skipping %s that is defined explicitly in another use statement\n'%(`e`)) <del> # continue <ide> if e in interfaced: <ide> edef = [] <ide> j = -1 <ide> def sortvarnames(vars): <ide> for v in list(vars.keys()): <ide> if 'depend' in vars[v] and vars[v]['depend']: <ide> dep.append(v) <del> # print '%s depends on %s'%(v,vars[v]['depend']) <ide> else: <ide> indep.append(v) <ide> n = len(dep) <ide> def sortvarnames(vars): <ide> dep = dep[1:] <ide> n = len(dep) <ide> i = 0 <del> # print indep <ide> return indep <ide> <ide> <ide> def get_parameters(vars, 
global_params={}): <ide> params[n] = eval(v, g_params, params) <ide> except Exception as msg: <ide> params[n] = v <del> # print params <ide> outmess('get_parameters: got "%s" on %s\n' % (msg, repr(v))) <ide> if isstring(vars[n]) and isinstance(params[n], int): <ide> params[n] = chr(params[n]) <ide> def analyzevars(block): <ide> m = re.match( <ide> r'(?P<before>.*?)\b' + p + r'\b(?P<after>.*)', d, re.I) <ide> if m: <del> #outmess('analyzevars:replacing parameter %s in %s (dimension of %s) with %s\n'%(`p`,`d`,`n`,`params[p]`)) <ide> d = m.group('before') + \ <ide> str(params[p]) + m.group('after') <ide> if d == star: <ide> def analyzevars(block): <ide> vars[n]['check'] = [] <ide> if 'dimension' in vars[n]: <ide> #/----< no check <del> # vars[n]['check'].append('rank(%s)==%s'%(n,len(vars[n]['dimension']))) <ide> i = -1 <ide> ni = len(vars[n]['dimension']) <ide> for d in vars[n]['dimension']: <ide> ddeps = [] # dependecies of 'd' <ide> ad = '' <ide> pd = '' <del> #origd = d <ide> if d not in vars: <ide> if d in savelindims: <ide> pd, ad = '(', savelindims[d][1] <ide> def analyzevars(block): <ide> vars[d]['attrspec'].append('optional') <ide> elif d not in ['*', ':']: <ide> #/----< no check <del> #if ni>1: vars[n]['check'].append('shape(%s,%i)==%s'%(n,i,d)) <del> # else: vars[n]['check'].append('len(%s)>=%s'%(n,d)) <ide> if flag: <ide> if d in vars: <ide> if n not in ddeps: <ide> def crack2fortrangen(block, tab='\n', as_interface=False): <ide> result = ' result (%s)' % block['result'] <ide> if block['result'] not in argsl: <ide> argsl.append(block['result']) <del> # if 'prefix' in block: <del> # prefix=block['prefix']+' ' <ide> body = crack2fortrangen(block['body'], tab + tabchar) <ide> vars = vars2fortran( <ide> block, block['vars'], argsl, tab + tabchar, as_interface=as_interface) <ide> def vars2fortran(block, vars, args, tab='', as_interface=False): <ide> vardef = '%s, %s' % (vardef, ','.join(attr)) <ide> c = ',' <ide> if 'dimension' in vars[a]: <del> # if not 
isintent_c(vars[a]): <del> # vars[a]['dimension'].reverse() <ide> vardef = '%s%sdimension(%s)' % ( <ide> vardef, c, ','.join(vars[a]['dimension'])) <ide> c = ',' <ide><path>numpy/f2py/f2py_testing.py <ide> def cmdline(): <ide> <ide> def run(runtest, test_functions, repeat=1): <ide> l = [(t, repr(t.__doc__.split('\n')[1].strip())) for t in test_functions] <del> #l = [(t,'') for t in test_functions] <ide> start_memusage = memusage() <ide> diff_memusage = None <ide> start_jiffies = jiffies() <ide><path>numpy/f2py/f90mod_rules.py <ide> def iadd(line, s=ihooks): <ide> if isfunction(b): <ide> fhooks[0] = fhooks[0] + wrap <ide> fargs.append('f2pywrap_%s_%s' % (m['name'], b['name'])) <del> # efargs.append(fargs[-1]) <ide> ifargs.append(func2subr.createfuncwrapper(b, signature=1)) <ide> else: <ide> if wrap: <ide> def iadd(line, s=ihooks): <ide> else: <ide> fargs.append(b['name']) <ide> mfargs.append(fargs[-1]) <del> # if '--external-modroutines' in options and options['--external-modroutines']: <del> # outmess('\t\t\tapplying --external-modroutines for %s\n'%(b['name'])) <del> # efargs.append(fargs[-1]) <ide> api['externroutines'] = [] <ide> ar = applyrules(api, vrd) <ide> ar['docs'] = [] <ide> def iadd(line, s=ihooks): <ide> m['name'], m['name'], m['name'])] + ret['initf90modhooks'] <ide> fadd('') <ide> fadd('subroutine f2pyinit%s(f2pysetupfunc)' % (m['name'])) <del> #fadd('use %s'%(m['name'])) <ide> if mfargs: <ide> for a in undo_rmbadname(mfargs): <ide> fadd('use %s, only : %s' % (m['name'], a)) <ide><path>numpy/f2py/func2subr.py <ide> def add(line, ret=ret): <ide> add('end subroutine f2pywrap_%s_%s' % (rout['modulename'], name)) <ide> else: <ide> add('end') <del> # print '**'*10 <del> # print ret[0] <del> # print '**'*10 <ide> return ret[0] <ide> <ide> <ide> def add(line, ret=ret): <ide> add('end subroutine f2pywrap_%s_%s' % (rout['modulename'], name)) <ide> else: <ide> add('end') <del> # print '**'*10 <del> # print ret[0] <del> # print '**'*10 <ide> return ret[0] <ide> 
<ide> <ide><path>numpy/f2py/rules.py <ide> {isthreadsafe: '\t\t\tPy_BEGIN_ALLOW_THREADS'}, <ide> {hascallstatement: '''\t\t\t\t#callstatement#; <ide> \t\t\t\t/*(*f2py_func)(#callfortran#);*/'''}, <del> {l_not(l_or(hascallstatement, isdummyroutine)): '\t\t\t\t(*f2py_func)(#callfortran#);'}, <add> {l_not(l_or(hascallstatement, isdummyroutine)) <add> : '\t\t\t\t(*f2py_func)(#callfortran#);'}, <ide> {isthreadsafe: '\t\t\tPy_END_ALLOW_THREADS'}, <ide> {hasexternals: """\t\t}"""} <ide> ], <ide> \t\tf2py_success = 0; <ide> \t} else {"""}, <ide> {isthreadsafe: '\tPy_BEGIN_ALLOW_THREADS'}, <del> {l_not(l_or(hascallstatement, isdummyroutine)): '\t(*f2py_func)(#callfortran#);'}, <add> {l_not(l_or(hascallstatement, isdummyroutine)) <add> : '\t(*f2py_func)(#callfortran#);'}, <ide> {hascallstatement: <ide> '\t#callstatement#;\n\t/*(*f2py_func)(#callfortran#);*/'}, <ide> {isthreadsafe: '\tPy_END_ALLOW_THREADS'}, <ide> \t\tf2py_success = 0; <ide> \t} else {"""}, <ide> {isthreadsafe: '\tPy_BEGIN_ALLOW_THREADS'}, <del> {l_not(l_or(hascallstatement, isdummyroutine)): '\t(*f2py_func)(#callfortran#);'}, <add> {l_not(l_or(hascallstatement, isdummyroutine)) <add> : '\t(*f2py_func)(#callfortran#);'}, <ide> {hascallstatement: <ide> '\t#callstatement#;\n\t/*(*f2py_func)(#callfortran#);*/'}, <ide> {isthreadsafe: '\tPy_END_ALLOW_THREADS'}, <ide> {hascallstatement: '''\t#callstatement#; <ide> /*\t#name#_return_value = (*f2py_func)(#callfortran#);*/ <ide> '''}, <del> {l_not(l_or(hascallstatement, isdummyroutine)): '\t#name#_return_value = (*f2py_func)(#callfortran#);'}, <add> {l_not(l_or(hascallstatement, isdummyroutine)) <add> : '\t#name#_return_value = (*f2py_func)(#callfortran#);'}, <ide> {isthreadsafe: '\tPy_END_ALLOW_THREADS'}, <ide> {hasexternals: '\t}'}, <del> {l_and(debugcapi, iscomplexfunction): '\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value.r,#name#_return_value.i);'}, <add> {l_and(debugcapi, iscomplexfunction) <add> : 
'\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value.r,#name#_return_value.i);'}, <ide> {l_and(debugcapi, l_not(iscomplexfunction)): '\tfprintf(stderr,"#routdebugshowvalue#\\n",#name#_return_value);'}], <ide> 'pyobjfrom': {iscomplexfunction: '\t#name#_return_value_capi = pyobj_from_#ctype#1(#name#_return_value);'}, <ide> 'need': [{l_not(isdummyroutine): 'F_FUNC'}, <ide> }, { # String function # in use for --no-wrap <ide> 'declfortranroutine': 'extern void #F_FUNC#(#fortranname#,#FORTRANNAME#)(#callprotoargument#);', <ide> 'routine_def': {l_not(l_or(ismoduleroutine, isintent_c)): <del> # '\t{\"#name#\",-1,{{-1}},0,(char *)F_FUNC(#fortranname#,#FORTRANNAME#),(void *)#apiname#,doc_#apiname#},', <ide> '\t{\"#name#\",-1,{{-1}},0,(char *)#F_FUNC#(#fortranname#,#FORTRANNAME#),(f2py_init_func)#apiname#,doc_#apiname#},', <ide> l_and(l_not(ismoduleroutine), isintent_c): <del> # '\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(void *)#apiname#,doc_#apiname#},' <ide> '\t{\"#name#\",-1,{{-1}},0,(char *)#fortranname#,(f2py_init_func)#apiname#,doc_#apiname#},' <ide> }, <ide> 'decl': ['\t#ctype# #name#_return_value = NULL;', <ide> } <ide> } <ide> """}, <del> # {l_not(isintent_callback):""" <del> # if (#varname#_capi==Py_None) { <del> # printf(\"hoi\\n\"); <del> # } <del> # """}, <ide> """\ <ide> \t#varname#_nofargs_capi = #cbname#_nofargs; <ide> \tif (create_cb_arglist(#varname#_capi,#varname#_xa_capi,#maxnofargs#,#nofoptargs#,&#cbname#_nofargs,&#varname#_args_capi,\"failed in processing argument list for call-back #varname#.\")) { <ide> }, { <ide> 'need': {hasinitvalue: 'math.h'}, <ide> '_check': l_and(isscalar, l_not(iscomplex)), <del> #'_depend':'' <ide> }, { # Not hidden <ide> 'decl': '\tPyObject *#varname#_capi = Py_None;', <ide> 'argformat': {isrequired: 'O'}, <ide> 'need': {l_not(islogical): '#ctype#_from_pyobj'}, <ide> '_check': l_and(isscalar, l_not(iscomplex), isintent_nothide), <ide> '_depend': '' <del> # },{ # Hidden <del> # 
'_check':l_and(isscalar,l_not(iscomplex),isintent_hide) <ide> }, { # Hidden <ide> 'frompyobj': {hasinitvalue: '\t#varname# = #init#;'}, <ide> 'need': typedef_need_dict, <ide> '_check': l_and(iscomplex, isintent_nothide) <ide> }, { <ide> 'frompyobj': [{hasinitvalue: '\tif (#varname#_capi==Py_None) {#varname#.r = #init.r#, #varname#.i = #init.i#;} else'}, <del> {l_and(isoptional, l_not(hasinitvalue)): '\tif (#varname#_capi != Py_None)'}, <del> # '\t\tf2py_success = #ctype#_from_pyobj(&#varname#,#varname#_capi,"#ctype#_from_pyobj failed in converting #nth# `#varname#\' of #pyname# to C #ctype#\\n");' <add> {l_and(isoptional, l_not(hasinitvalue)) <add> : '\tif (#varname#_capi != Py_None)'}, <ide> '\t\tf2py_success = #ctype#_from_pyobj(&#varname#,#varname#_capi,"#pyname#() #nth# (#varname#) can\'t be converted to #ctype#");' <ide> '\n\tif (f2py_success) {'], <ide> 'cleanupfrompyobj': '\t} /*if (f2py_success) of #varname# frompyobj*/', <ide> 'callfortran':'#varname#,', <ide> 'callfortranappend':'slen(#varname#),', <ide> 'pyobjfrom':{debugcapi: '\tfprintf(stderr,"#vardebugshowvalue#\\n",slen(#varname#),#varname#);'}, <del> # 'freemem':'\tSTRINGFREE(#varname#);', <ide> 'return': {isintent_out: ',#varname#'}, <ide> 'need': ['len..'], # 'STRINGFREE'], <ide> '_check':isstring <ide> 'keyformat': {isoptional: 'O'}, <ide> 'args_capi': {isrequired: ',&#varname#_capi'}, <ide> 'keys_capi': {isoptional: ',&#varname#_capi'}, <del> # 'pyobjfrom':{isintent_inout:""" <del> # /* Partly because of the following hack, intent(inout) is depreciated, <del> # Use intent(in,out) instead. 
<del> <del> # \tif ((#varname#_capi != Py_None) && PyArray_Check(#varname#_capi) <del> # \t\t&& (#varname#_capi != (PyObject *)capi_#varname#_tmp)) { <del> # \t\tif (PyArray_NDIM((PyArrayObject *)#varname#_capi) != PyArray_NDIM(capi_#varname#_tmp)) { <del> # \t\t\tif (#varname#_capi != PyArray_BASE(capi_#varname#_tmp)) <del> # \t\t\t\tcopy_ND_array(PyArray_BASE((PyArrayObject *)capi_#varname#_tmp),(PyArrayObject *)#varname#_capi); <del> # \t\t} else <del> # \t\t\tcopy_ND_array(capi_#varname#_tmp,(PyArrayObject *)#varname#_capi); <del> # \t} <del> # */ <del> # """}, <del> # 'need':{isintent_inout:'copy_ND_array'}, <ide> '_check': l_and(isarray, isintent_nothide) <ide> }, { <ide> 'frompyobj': ['\t#setdims#;', <ide> {l_not(l_or(isintent_out, isintent_hide)): """\ <ide> \tif((PyObject *)capi_#varname#_tmp!=#varname#_capi) { <ide> \t\tPy_XDECREF(capi_#varname#_tmp); }"""}, <del> {l_and(isintent_hide, l_not(isintent_out)): """\t\tPy_XDECREF(capi_#varname#_tmp);"""}, <add> {l_and(isintent_hide, l_not(isintent_out)) <add> : """\t\tPy_XDECREF(capi_#varname#_tmp);"""}, <ide> {hasinitvalue: '\t} /*if (f2py_success) of #varname# init*/'}, <ide> ], <ide> '_check': isarray, <ide> '_depend': '' <ide> }, <del> # { # Hidden <del> # 'freemem':{l_not(isintent_out):'\tPy_XDECREF(capi_#varname#_tmp);'}, <del> # '_check':l_and(isarray,isintent_hide) <del> # }, <ide> # Scalararray <ide> { # Common <ide> '_check': l_and(isarray, l_not(iscomplexarray)) <ide> def buildapi(rout): <ide> args, depargs = getargs2(rout) <ide> capi_maps.depargs = depargs <ide> var = rout['vars'] <del> # auxvars = [a for a in var.keys() if isintent_aux(var[a])] <ide> <ide> if ismoduleroutine(rout): <ide> outmess('\t\t\tConstructing wrapper function "%s.%s"...\n' %
8
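The recurring change in the numpy/f2py patch above is reflowing long boolean predicates to PEP 8 line length, wrapping with parentheses (or an explicit backslash) and dropping redundant parentheses around `return` expressions. A standalone sketch of the pattern, using a simplified version of the `isoptional` predicate from the diff (the `isintent_nothide` conjunct is omitted here to keep the sketch self-contained):

```python
def isoptional(var):
    # Parenthesized continuation keeps the expression under the line limit
    # without backslashes; continuation terms align under the opening paren.
    return ('attrspec' in var and 'optional' in var['attrspec'] and
            'required' not in var['attrspec'])
```

The wrapped form evaluates identically to the original one-liner; only the layout changes, which is why the whole commit is behavior-preserving.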
Javascript
Javascript
wrap the project child elements in panel
8f5415d65769b966192b7123e46cc81467a6adad
<ide><path>editor/js/Sidebar.Project.js <ide> Sidebar.Project = function ( editor ) { <ide> <ide> var container = new UI.Panel(); <ide> container.setBorderTop( '0' ); <add> container.setPadding( '0' ); <ide> container.setPaddingTop( '20px' ); <ide> <add> var projectsettings = new UI.Panel(); <add> projectsettings.setBorderTop( '0' ); <add> <add> container.add( projectsettings ); <add> <ide> // Title <ide> <ide> var titleRow = new UI.Row(); <ide> Sidebar.Project = function ( editor ) { <ide> titleRow.add( new UI.Text( strings.getKey( 'sidebar/project/title' ) ).setWidth( '90px' ) ); <ide> titleRow.add( title ); <ide> <del> container.add( titleRow ); <add> projectsettings.add( titleRow ); <ide> <ide> // Editable <ide> <ide> Sidebar.Project = function ( editor ) { <ide> editableRow.add( new UI.Text( strings.getKey( 'sidebar/project/editable' ) ).setWidth( '90px' ) ); <ide> editableRow.add( editable ); <ide> <del> container.add( editableRow ); <add> projectsettings.add( editableRow ); <ide> <ide> // VR <ide> <ide> Sidebar.Project = function ( editor ) { <ide> vrRow.add( new UI.Text( strings.getKey( 'sidebar/project/vr' ) ).setWidth( '90px' ) ); <ide> vrRow.add( vr ); <ide> <del> container.add( vrRow ); <add> projectsettings.add( vrRow ); <ide> <ide> // Renderer <ide> <ide> Sidebar.Project = function ( editor ) { <ide> rendererTypeRow.add( new UI.Text( strings.getKey( 'sidebar/project/renderer' ) ).setWidth( '90px' ) ); <ide> rendererTypeRow.add( rendererType ); <ide> <del> container.add( rendererTypeRow ); <add> projectsettings.add( rendererTypeRow ); <ide> <ide> if ( config.getKey( 'project/renderer' ) !== undefined ) { <ide> <ide> Sidebar.Project = function ( editor ) { <ide> } ); <ide> rendererPropertiesRow.add( rendererShadows ); <ide> <del> container.add( rendererPropertiesRow ); <add> projectsettings.add( rendererPropertiesRow ); <ide> <ide> // <ide>
1
Javascript
Javascript
implement the fix
43c447edd0c23db5a9acc477befa77a42826d1a3
<ide><path>server/index.js <del>import { resolve, join } from 'path' <add>import { resolve, join, sep } from 'path' <ide> import { parse as parseUrl } from 'url' <ide> import { parse as parseQs } from 'querystring' <ide> import fs from 'fs' <ide> export default class Server { <ide> } <ide> <ide> async serveStatic (req, res, path) { <add> if (!this.isServeableUrl(path)) { <add> return this.render404(req, res) <add> } <add> <ide> try { <ide> return await serveStatic(req, res, path) <ide> } catch (err) { <ide> export default class Server { <ide> } <ide> } <ide> <add> isServeableUrl (path) { <add> const resolved = resolve(path) <add> if ( <add> resolved.indexOf(join(this.dir, this.dist) + sep) !== 0 && <add> resolved.indexOf(join(this.dir, 'static') + sep) !== 0 <add> ) { <add> // Seems like the user is trying to traverse the filesystem. <add> return false <add> } <add> <add> return true <add> } <add> <ide> isInternalUrl (req) { <ide> for (const prefix of internalPrefixes) { <ide> if (prefix.test(req.url)) {
1
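The `isServeableUrl` guard added above is a standard path-traversal check: resolve the requested path first, then require the result to still sit under an allowed root before serving it. A minimal Python sketch of the same idea — the root names are illustrative, not taken from the Next.js code, and like the JS `resolve`-based check it normalizes lexically rather than resolving symlinks:

```python
import os.path


def is_serveable(path, allowed_roots):
    # Collapse ".." segments lexically (mirroring Node's path.resolve), so a
    # request like "/app/static/../server/config.js" escapes every root.
    resolved = os.path.normpath(path)
    return any(
        resolved.startswith(os.path.normpath(root) + os.sep)
        for root in allowed_roots
    )
```

Appending `os.sep` before the prefix test matters: without it, `/app/static-evil` would pass the check for the root `/app/static`, which is exactly why the original diff joins with `sep`.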
Javascript
Javascript
remove usage of require('util')
e5383adb25a74d1d3588cc447b13a158d0b54852
<ide><path>lib/internal/encoding.js <ide> class TextEncoder { <ide> }); <ide> obj.encoding = this.encoding; <ide> // Lazy to avoid circular dependency <del> return require('util').inspect(obj, opts); <add> return require('internal/util/inspect').inspect(obj, opts); <ide> } <ide> } <ide> <ide> function makeTextDecoderJS() { <ide> obj[kHandle] = this[kHandle]; <ide> } <ide> // Lazy to avoid circular dependency <del> return require('util').inspect(obj, opts); <add> return require('internal/util/inspect').inspect(obj, opts); <ide> } <ide> })); <ide> Object.defineProperties(TextDecoder.prototype, {
1
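The encoding.js change keeps the `inspect` requirement lazy but points it at the internal module directly, so loading it no longer cycles through the public `util` module. Deferring an import to the call site is the same trick in Python; the stdlib `reprlib` module below merely stands in for the lazily loaded dependency:

```python
def custom_inspect(obj):
    # The import runs on first call, not at module load time, so any module
    # that imports this one finishes initializing before reprlib is needed.
    import reprlib
    return reprlib.repr(obj)
```

Python caches the module in `sys.modules` after the first call, so the per-call overhead after that is just a dictionary lookup — the same reason the Node code can afford `require()` inside the inspect hook.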
Python
Python
add relative positional embedding to kerasbert
c3c2386ca1b0b7fff18e30b0d055486761de6be0
<ide><path>official/nlp/modeling/layers/position_embedding.py <ide> # from __future__ import google_type_annotations <ide> from __future__ import print_function <ide> <add>import math <add> <ide> import tensorflow as tf <ide> <ide> from official.modeling import tf_utils <ide> def call(self, inputs): <ide> position_embeddings = self._position_embeddings <ide> <ide> return tf.broadcast_to(position_embeddings, input_shape) <add> <add>@tf.keras.utils.register_keras_serializable(package="Text") <add>class RelativePositionEmbedding(tf.keras.layers.Layer): <add> """Creates a positional embedding. <add> This layer calculates the position encoding as a mix of sine and cosine <add> functions with geometrically increasing wavelengths. Defined and formulized in <add> "Attention is All You Need", section 3.5. <add> (https://arxiv.org/abs/1706.03762). <add> Arguments: <add> hidden_size: Size of the hidden layer. <add> min_timescale: Minimum scale that will be applied at each position <add> max_timescale: Maximum scale that will be applied at each position. <add> length: Number of positions. Should be specified if `inputs` is None at <add> `call(self, inputs)` <add> """ <add> <add> def __init__(self, <add> hidden_size, <add> min_timescale=1.0, <add> max_timescale=1.0e4, <add> length=None, <add> **kwargs): <add> # We need to have a default dtype of float32, since the inputs (which Keras <add> # usually uses to infer the dtype) will always be int32. <add> # We compute the positional encoding in float32 even if the model uses <add> # float16, as many of the ops used, like log and exp, are numerically <add> # unstable in float16. 
<add> if "dtype" not in kwargs: <add> kwargs["dtype"] = "float32" <add> <add> super(RelativePositionEmbedding, self).__init__(**kwargs) <add> self._hidden_size = hidden_size <add> self._min_timescale = min_timescale <add> self._max_timescale = max_timescale <add> self._length = length <add> <add> def get_config(self): <add> config = { <add> "hidden_size": self._hidden_size, <add> "min_timescale": self._min_timescale, <add> "max_timescale": self._max_timescale, <add> "length": self._length, <add> } <add> base_config = super(RelativePositionEmbedding, self).get_config() <add> return dict(list(base_config.items()) + list(config.items())) <add> <add> def build(self, input_shape): <add> """Implements build() for the layer.""" <add> super(RelativePositionEmbedding, self).build(input_shape) <add> <add> def call(self, inputs): <add> """Implements call() for the layer.""" <add> length = self._length <add> if inputs is None and length is None: <add> raise ValueError( <add> "If inputs is None, `length` must be set in " <add> "RelativePositionEmbedding().") <add> if inputs is not None: <add> input_shape = tf_utils.get_shape_list(inputs) <add> if length is not None and length != input_shape[1]: <add> raise ValueError( <add> "If inputs is not None, `length` must equal to input_shape[1]." 
<add> ) <add> length = input_shape[1] <add> position = tf.cast(tf.range(length), tf.float32) <add> num_timescales = self._hidden_size // 2 <add> min_timescale, max_timescale = self._min_timescale, self._max_timescale <add> log_timescale_increment = ( <add> math.log(float(max_timescale) / float(min_timescale)) / <add> (tf.cast(num_timescales, tf.float32) - 1)) <add> inv_timescales = min_timescale * tf.exp( <add> tf.cast(tf.range(num_timescales), tf.float32) * <add> -log_timescale_increment) <add> scaled_time = tf.expand_dims(position, 1) * tf.expand_dims(inv_timescales, <add> 0) <add> position_embeddings = tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], <add> axis=1) <add> return position_embeddings <ide><path>official/nlp/modeling/layers/position_embedding_test.py <ide> def test_static_layer_output_shape(self): <ide> sequence_length = 21 <ide> width = 30 <ide> input_tensor = tf.keras.Input(shape=(sequence_length, width)) <del> output_tensor = test_layer(input_tensor) <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <ide> <ide> # When using static positional embedding shapes, the output is expected <ide> # to be the same as the input shape in all dimensions save batch. <ide> def test_float16_dtype(self): <ide> sequence_length = 21 <ide> width = 30 <ide> input_tensor = tf.keras.Input(shape=(sequence_length, width)) <del> output_tensor = test_layer(input_tensor) <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <ide> <ide> # When using static positional embedding shapes, the output is expected <ide> # to be the same as the input shape in all dimensions save batch. <ide> def test_dynamic_layer_output_shape(self): <ide> # Create a 3-dimensional input (the first dimension is implicit). 
<ide> width = 30 <ide> input_tensor = tf.keras.Input(shape=(None, width)) <del> output_tensor = test_layer(input_tensor) <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <ide> <ide> # When using dynamic positional embedding shapes, the output is expected <ide> # to be the same as the input shape in all dimensions - but may be None if <ide> def test_dynamic_layer_slicing(self): <ide> # Create a 3-dimensional input (the first dimension is implicit). <ide> width = 30 <ide> input_tensor = tf.keras.Input(shape=(None, width)) <del> output_tensor = test_layer(input_tensor) <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <ide> <ide> model = tf.keras.Model(input_tensor, output_tensor) <ide> <ide> def test_dynamic_layer_slicing(self): <ide> <ide> self.assertAllEqual([1, input_length, width], output_data.shape) <ide> <add> def test_relative_tensor_input(self): <add> hidden_size = 8 <add> test_layer = position_embedding.RelativePositionEmbedding( <add> hidden_size=hidden_size) <add> <add> # create a 3-dimensional input for test_layer to infer length as 1. <add> input_tensor = tf.constant([[[0] * hidden_size]]) <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <add> <add> # expected output is the theoretical result of the input based on <add> # sine cosine relative position embedding formula. <add> expected_output_tensor = tf.constant([[0, 0, 0, 0, 1, 1, 1, 1]]) <add> self.assertAllEqual(output_tensor, expected_output_tensor) <add> <add> def test_relative_length_input(self): <add> hidden_size = 8 <add> <add> # When we do not have tensor as input, we explicitly specify length <add> # value when initializing test_layer. 
<add> test_layer = position_embedding.RelativePositionEmbedding( <add> hidden_size=hidden_size, length=1) <add> input_tensor = None <add> output_tensor = test_layer(input_tensor) # pylint: disable=not-callable <add> <add> # expected output is the theoretical result of the input based on <add> # sine cosine relative position embedding formula. <add> expected_output_tensor = tf.constant([[0, 0, 0, 0, 1, 1, 1, 1]]) <add> self.assertAllEqual(output_tensor, expected_output_tensor) <ide> <ide> if __name__ == "__main__": <ide> tf.test.main() <ide><path>official/nlp/transformer/transformer.py <ide> from __future__ import print_function <ide> <ide> import tensorflow as tf <add>from official.nlp.modeling.layers import position_embedding <ide> from official.nlp.transformer import attention_layer <ide> from official.nlp.transformer import beam_search <ide> from official.nlp.transformer import embedding_layer <ide> def encode(self, inputs, attention_bias, training): <ide> attention_bias = tf.cast(attention_bias, self.params["dtype"]) <ide> <ide> with tf.name_scope("add_pos_encoding"): <del> length = tf.shape(embedded_inputs)[1] <del> pos_encoding = model_utils.get_position_encoding( <del> length, self.params["hidden_size"]) <add> pos_layer = position_embedding.RelativePositionEmbedding( <add> hidden_size=self.params["hidden_size"]) <add> pos_encoding = pos_layer(embedded_inputs) <ide> pos_encoding = tf.cast(pos_encoding, self.params["dtype"]) <ide> encoder_inputs = embedded_inputs + pos_encoding <ide> <ide> def decode(self, targets, encoder_outputs, attention_bias, training): <ide> [[0, 0], [1, 0], [0, 0]])[:, :-1, :] <ide> with tf.name_scope("add_pos_encoding"): <ide> length = tf.shape(decoder_inputs)[1] <del> pos_encoding = model_utils.get_position_encoding( <del> length, self.params["hidden_size"]) <add> pos_layer = position_embedding.RelativePositionEmbedding( <add> hidden_size=self.params["hidden_size"]) <add> pos_encoding = pos_layer(decoder_inputs) <ide> pos_encoding = 
tf.cast(pos_encoding, self.params["dtype"]) <ide> decoder_inputs += pos_encoding <ide> if training: <ide> def decode(self, targets, encoder_outputs, attention_bias, training): <ide> def _get_symbols_to_logits_fn(self, max_decode_length, training): <ide> """Returns a decoding function that calculates logits of the next tokens.""" <ide> <del> timing_signal = model_utils.get_position_encoding( <del> max_decode_length + 1, self.params["hidden_size"]) <add> pos_layer = position_embedding.RelativePositionEmbedding( <add> hidden_size=self.params["hidden_size"], <add> length=max_decode_length + 1) <add> timing_signal = pos_layer(None) <ide> timing_signal = tf.cast(timing_signal, self.params["dtype"]) <ide> decoder_self_attention_bias = model_utils.get_decoder_self_attention_bias( <ide> max_decode_length, dtype=self.params["dtype"])
3
Python
Python
fix typo in featureextractionpipeline docstring
747907dc5e93a23cf5699bf228752526592884d6
<ide><path>src/transformers/pipelines.py <ide> def _forward(self, inputs, return_tensors=False): <ide> class FeatureExtractionPipeline(Pipeline): <ide> """ <ide> Feature extraction pipeline using Model head. This pipeline extracts the hidden states from the base transformer, <del> which can be used as features in a downstream tasks. <add> which can be used as features in downstream tasks. <ide> <ide> This feature extraction pipeline can currently be loaded from the :func:`~transformers.pipeline` method using <ide> the following task identifier(s):
1
Ruby
Ruby
test clear_changes_information rather than reload
8937c722172595cfad007c686ed5c4a1b79397cc
<ide><path>activemodel/test/cases/attributes_dirty_test.rb <ide> class DirtyModel <ide> def save <ide> changes_applied <ide> end <del> <del> def reload <del> clear_changes_information <del> end <ide> end <ide> <ide> setup do <ide> def reload <ide> assert_predicate @model, :size_changed? <ide> end <ide> <del> test "reload should reset all changes" do <add> test "clear_changes_information should reset all changes" do <ide> @model.name = "Dmitry" <ide> @model.name_changed? <ide> @model.save <ide> def reload <ide> assert_equal [nil, "Dmitry"], @model.previous_changes["name"] <ide> assert_equal "Dmitry", @model.changed_attributes["name"] <ide> <del> @model.reload <add> @model.clear_changes_information <ide> <ide> assert_equal ActiveSupport::HashWithIndifferentAccess.new, @model.previous_changes <ide> assert_equal ActiveSupport::HashWithIndifferentAccess.new, @model.changed_attributes <ide><path>activemodel/test/cases/dirty_test.rb <ide> def status=(val) <ide> def save <ide> changes_applied <ide> end <del> <del> def reload <del> clear_changes_information <del> end <ide> end <ide> <ide> setup do <ide> def reload <ide> assert_predicate @model, :size_changed? <ide> end <ide> <del> test "reload should reset all changes" do <add> test "clear_changes_information should reset all changes" do <ide> @model.name = "Dmitry" <ide> @model.name_changed? <ide> @model.save <ide> def reload <ide> assert_equal [nil, "Dmitry"], @model.previous_changes["name"] <ide> assert_equal "Dmitry", @model.changed_attributes["name"] <ide> <del> @model.reload <add> @model.clear_changes_information <ide> <ide> assert_equal ActiveSupport::HashWithIndifferentAccess.new, @model.previous_changes <ide> assert_equal ActiveSupport::HashWithIndifferentAccess.new, @model.changed_attributes
2
Python
Python
add tf bert files
bffd17a43d74b228670c2146c284023f250fc85f
<ide><path>pytorch_transformers/file_utils.py <ide> def url_to_filename(url, etag=None): <ide> Convert `url` into a hashed filename in a repeatable way. <ide> If `etag` is specified, append its hash to the url's, delimited <ide> by a period. <add> If the url ends with .h5 (Keras HDF5 weights) ands '.h5' to the name <add> so that TF 2.0 can identify it as a HDF5 file <add> (see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1380) <ide> """ <ide> url_bytes = url.encode('utf-8') <ide> url_hash = sha256(url_bytes) <ide> def url_to_filename(url, etag=None): <ide> etag_hash = sha256(etag_bytes) <ide> filename += '.' + etag_hash.hexdigest() <ide> <add> if url.endswith('.h5'): <add> filename += '.h5' <add> <ide> return filename <ide> <ide> <ide><path>pytorch_transformers/modeling_tf_bert.py <add># coding=utf-8 <add># Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. <add># Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. <add># <add># Licensed under the Apache License, Version 2.0 (the "License"); <add># you may not use this file except in compliance with the License. <add># You may obtain a copy of the License at <add># <add># http://www.apache.org/licenses/LICENSE-2.0 <add># <add># Unless required by applicable law or agreed to in writing, software <add># distributed under the License is distributed on an "AS IS" BASIS, <add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <add># See the License for the specific language governing permissions and <add># limitations under the License. <add>""" TF 2.0 BERT model. 
""" <add> <add>from __future__ import absolute_import, division, print_function, unicode_literals <add> <add>import json <add>import logging <add>import math <add>import os <add>import sys <add>from io import open <add> <add>import numpy as np <add>import tensorflow as tf <add> <add>from .configuration_bert import BertConfig <add>from .modeling_tf_utils import TFPreTrainedModel <add>from .file_utils import add_start_docstrings <add> <add>logger = logging.getLogger(__name__) <add> <add> <add>TF_BERT_PRETRAINED_MODEL_ARCHIVE_MAP = { <add> 'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5", <add> 'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-tf_model.h5", <add> 'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-tf_model.h5", <add> 'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-tf_model.h5", <add> 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-tf_model.h5", <add> 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-tf_model.h5", <add> 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-tf_model.h5", <add> 'bert-base-german-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-tf_model.h5", <add> 'bert-large-uncased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-tf_model.h5", <add> 'bert-large-cased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-tf_model.h5", <add> 'bert-large-uncased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-tf_model.h5", <add> 
'bert-large-cased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-tf_model.h5", <add> 'bert-base-cased-finetuned-mrpc': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-tf_model.h5", <add>} <add> <add> <add>def load_pt_weights_in_bert(tf_model, config, pytorch_checkpoint_path): <add> """ Load pytorch checkpoints in a TF 2.0 model and save it using HDF5 format <add> We use HDF5 to easily do transfer learning <add> (see https://github.com/tensorflow/tensorflow/blob/ee16fcac960ae660e0e4496658a366e2f745e1f0/tensorflow/python/keras/engine/network.py#L1352-L1357). <add> """ <add> try: <add> import re <add> import torch <add> import numpy <add> from tensorflow.python.keras import backend as K <add> except ImportError: <add> logger.error("Loading a PyTorch model in TensorFlow, requires PyTorch to be installed. Please see " <add> "https://pytorch.org/ for installation instructions.") <add> raise <add> <add> pt_path = os.path.abspath(pytorch_checkpoint_path) <add> logger.info("Loading PyTorch weights from {}".format(pt_path)) <add> # Load pytorch model <add> state_dict = torch.load(pt_path, map_location='cpu') <add> <add> inputs_list = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]] <add> tf_inputs = tf.constant(inputs_list) <add> tfo = tf_model(tf_inputs, training=False) # build the network <add> <add> symbolic_weights = tf_model.trainable_weights + tf_model.non_trainable_weights <add> weight_value_tuples = [] <add> for symbolic_weight in symbolic_weights: <add> name = symbolic_weight.name <add> name = name.replace('cls_mlm', 'cls') # We had to split this layer in two in the TF model to be <add> name = name.replace('cls_nsp', 'cls') # able to do transfer learning (Keras only allow to remove full layers) <add> name = name.replace(':0', '') <add> name = name.replace('layer_', 'layer/') <add> name = name.split('/') <add> name = name[1:] <add> 
<add> transpose = bool(name[-1] == 'kernel') <add> if name[-1] == 'kernel' or name[-1] == 'embeddings': <add> name[-1] = 'weight' <add> <add> name = '.'.join(name) <add> assert name in state_dict <add> array = state_dict[name].numpy() <add> <add> if transpose: <add> array = numpy.transpose(array) <add> <add> try: <add> assert list(symbolic_weight.shape) == list(array.shape) <add> except AssertionError as e: <add> e.args += (symbolic_weight.shape, array.shape) <add> raise e <add> <add> logger.info("Initialize TF weight {}".format(symbolic_weight.name)) <add> <add> weight_value_tuples.append((symbolic_weight, array)) <add> <add> K.batch_set_value(weight_value_tuples) <add> <add> tfo = tf_model(tf_inputs, training=False) # Make sure restore ops are run <add> return tf_model <add> <add> <add>def gelu(x): <add> """Gaussian Error Linear Unit. <add> This is a smoother version of the RELU. <add> Original paper: https://arxiv.org/abs/1606.08415 <add> Args: <add> x: float Tensor to perform activation. <add> Returns: <add> `x` with the GELU activation applied. <add> """ <add> cdf = 0.5 * (1.0 + tf.tanh( <add> (np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))) <add> return x * cdf <add> <add> <add>def swish(x): <add> return x * tf.sigmoid(x) <add> <add> <add>ACT2FN = {"gelu": tf.keras.layers.Activation(gelu), <add> "relu": tf.keras.activations.relu, <add> "swish": tf.keras.layers.Activation(swish)} <add> <add> <add>class TFBertEmbeddings(tf.keras.layers.Layer): <add> """Construct the embeddings from word, position and token_type embeddings. 
<add> """ <add> def __init__(self, config, **kwargs): <add> super(TFBertEmbeddings, self).__init__(**kwargs) <add> self.word_embeddings = tf.keras.layers.Embedding(config.vocab_size, config.hidden_size, name='word_embeddings') <add> self.position_embeddings = tf.keras.layers.Embedding(config.max_position_embeddings, config.hidden_size, name='position_embeddings') <add> self.token_type_embeddings = tf.keras.layers.Embedding(config.type_vocab_size, config.hidden_size, name='token_type_embeddings') <add> <add> # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load <add> # any TensorFlow checkpoint file <add> self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name='LayerNorm') <add> self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) <add> <add> def call(self, inputs, training=False): <add> input_ids, position_ids, token_type_ids = inputs <add> <add> seq_length = tf.shape(input_ids)[1] <add> if position_ids is None: <add> position_ids = tf.range(seq_length, dtype=tf.int32)[tf.newaxis, :] <add> if token_type_ids is None: <add> token_type_ids = tf.fill(tf.shape(input_ids), 0) <add> <add> words_embeddings = self.word_embeddings(input_ids) <add> position_embeddings = self.position_embeddings(position_ids) <add> token_type_embeddings = self.token_type_embeddings(token_type_ids) <add> <add> embeddings = words_embeddings + position_embeddings + token_type_embeddings <add> embeddings = self.LayerNorm(embeddings) <add> if training: <add> embeddings = self.dropout(embeddings) <add> return embeddings <add> <add> <add>class TFBertSelfAttention(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertSelfAttention, self).__init__(**kwargs) <add> if config.hidden_size % config.num_attention_heads != 0: <add> raise ValueError( <add> "The hidden size (%d) is not a multiple of the number of attention " <add> "heads (%d)" % (config.hidden_size, 
config.num_attention_heads)) <add> self.output_attentions = config.output_attentions <add> <add> self.num_attention_heads = config.num_attention_heads <add> assert config.hidden_size % config.num_attention_heads == 0 <add> self.attention_head_size = int(config.hidden_size / config.num_attention_heads) <add> self.all_head_size = self.num_attention_heads * self.attention_head_size <add> <add> self.query = tf.keras.layers.Dense(self.all_head_size, name='query') <add> self.key = tf.keras.layers.Dense(self.all_head_size, name='key') <add> self.value = tf.keras.layers.Dense(self.all_head_size, name='value') <add> <add> self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) <add> <add> def transpose_for_scores(self, x, batch_size): <add> x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size)) <add> return tf.transpose(x, perm=[0, 2, 1, 3]) <add> <add> def call(self, inputs, training=False): <add> hidden_states, attention_mask, head_mask = inputs <add> <add> batch_size = tf.shape(hidden_states)[0] <add> mixed_query_layer = self.query(hidden_states) <add> mixed_key_layer = self.key(hidden_states) <add> mixed_value_layer = self.value(hidden_states) <add> <add> query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) <add> key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) <add> value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) <add> <add> # Take the dot product between "query" and "key" to get the raw attention scores. 
<add> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) # (batch size, num_heads, seq_len_q, seq_len_k) <add> dk = tf.cast(tf.shape(key_layer)[-1], tf.float32) # scale attention_scores <add> attention_scores = attention_scores / tf.math.sqrt(dk) <add> # Apply the attention mask is (precomputed for all layers in TFBertModel call() function) <add> attention_scores = attention_scores + attention_mask <add> <add> # Normalize the attention scores to probabilities. <add> attention_probs = tf.nn.softmax(attention_scores, axis=-1) <add> <add> if training: <add> # This is actually dropping out entire tokens to attend to, which might <add> # seem a bit unusual, but is taken from the original Transformer paper. <add> attention_probs = self.dropout(attention_probs) <add> <add> # Mask heads if we want to <add> if head_mask is not None: <add> attention_probs = attention_probs * head_mask <add> <add> context_layer = tf.matmul(attention_probs, value_layer) <add> <add> context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3]) <add> context_layer = tf.reshape(context_layer, <add> (batch_size, -1, self.all_head_size)) # (batch_size, seq_len_q, all_head_size) <add> <add> outputs = (context_layer, attention_probs) if self.output_attentions else (context_layer,) <add> return outputs <add> <add> <add>class TFBertSelfOutput(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertSelfOutput, self).__init__(**kwargs) <add> self.dense = tf.keras.layers.Dense(config.hidden_size, name='dense') <add> self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name='LayerNorm') <add> self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) <add> <add> def call(self, inputs, training=False): <add> hidden_states, input_tensor = inputs <add> <add> hidden_states = self.dense(hidden_states) <add> if training: <add> hidden_states = self.dropout(hidden_states) <add> hidden_states = self.LayerNorm(hidden_states + 
input_tensor) <add> return hidden_states <add> <add> <add>class TFBertAttention(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertAttention, self).__init__(**kwargs) <add> self.self_attention = TFBertSelfAttention(config, name='self') <add> self.dense_output = TFBertSelfOutput(config, name='output') <add> <add> def prune_heads(self, heads): <add> raise NotImplementedError <add> <add> def call(self, inputs, training=False): <add> input_tensor, attention_mask, head_mask = inputs <add> <add> self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training) <add> attention_output = self.dense_output([self_outputs[0], input_tensor], training=training) <add> outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them <add> return outputs <add> <add> <add>class TFBertIntermediate(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertIntermediate, self).__init__(**kwargs) <add> self.dense = tf.keras.layers.Dense(config.intermediate_size, name='dense') <add> if isinstance(config.hidden_act, str) or (sys.version_info[0] == 2 and isinstance(config.hidden_act, unicode)): <add> self.intermediate_act_fn = ACT2FN[config.hidden_act] <add> else: <add> self.intermediate_act_fn = config.hidden_act <add> <add> def call(self, hidden_states): <add> hidden_states = self.dense(hidden_states) <add> hidden_states = self.intermediate_act_fn(hidden_states) <add> return hidden_states <add> <add> <add>class TFBertOutput(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertOutput, self).__init__(**kwargs) <add> self.dense = tf.keras.layers.Dense(config.hidden_size, name='dense') <add> self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name='LayerNorm') <add> self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) <add> <add> def call(self, inputs, training=False): <add> hidden_states, input_tensor = 
inputs <add> <add> hidden_states = self.dense(hidden_states) <add> if training: <add> hidden_states = self.dropout(hidden_states) <add> hidden_states = self.LayerNorm(hidden_states + input_tensor) <add> return hidden_states <add> <add> <add>class TFBertLayer(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertLayer, self).__init__(**kwargs) <add> self.attention = TFBertAttention(config, name='attention') <add> self.intermediate = TFBertIntermediate(config, name='intermediate') <add> self.bert_output = TFBertOutput(config, name='output') <add> <add> def call(self, inputs, training=False): <add> hidden_states, attention_mask, head_mask = inputs <add> <add> attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training) <add> attention_output = attention_outputs[0] <add> intermediate_output = self.intermediate(attention_output) <add> layer_output = self.bert_output([intermediate_output, attention_output], training=training) <add> outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them <add> return outputs <add> <add> <add>class TFBertEncoder(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertEncoder, self).__init__(**kwargs) <add> self.output_attentions = config.output_attentions <add> self.output_hidden_states = config.output_hidden_states <add> self.layer = [TFBertLayer(config, name='layer_{}'.format(i)) for i in range(config.num_hidden_layers)] <add> <add> def call(self, inputs, training=False): <add> hidden_states, attention_mask, head_mask = inputs <add> <add> all_hidden_states = () <add> all_attentions = () <add> for i, layer_module in enumerate(self.layer): <add> if self.output_hidden_states: <add> all_hidden_states = all_hidden_states + (hidden_states,) <add> <add> layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training) <add> hidden_states = layer_outputs[0] <add> <add> if 
self.output_attentions: <add> all_attentions = all_attentions + (layer_outputs[1],) <add> <add> # Add last layer <add> if self.output_hidden_states: <add> all_hidden_states = all_hidden_states + (hidden_states,) <add> <add> outputs = (hidden_states,) <add> if self.output_hidden_states: <add> outputs = outputs + (all_hidden_states,) <add> if self.output_attentions: <add> outputs = outputs + (all_attentions,) <add> return outputs # outputs, (hidden states), (attentions) <add> <add> <add>class TFBertPooler(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertPooler, self).__init__(**kwargs) <add> self.dense = tf.keras.layers.Dense(config.hidden_size, activation='tanh', name='dense') <add> <add> def call(self, hidden_states): <add> # We "pool" the model by simply taking the hidden state corresponding <add> # to the first token. <add> first_token_tensor = hidden_states[:, 0] <add> pooled_output = self.dense(first_token_tensor) <add> return pooled_output <add> <add> <add>class TFBertPredictionHeadTransform(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertPredictionHeadTransform, self).__init__(**kwargs) <add> self.dense = tf.keras.layers.Dense(config.hidden_size, name='dense') <add> if isinstance(config.hidden_act, str) or (sys.version_info[0] == 2 and isinstance(config.hidden_act, unicode)): <add> self.transform_act_fn = ACT2FN[config.hidden_act] <add> else: <add> self.transform_act_fn = config.hidden_act <add> self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name='LayerNorm') <add> <add> def call(self, hidden_states): <add> hidden_states = self.dense(hidden_states) <add> hidden_states = self.transform_act_fn(hidden_states) <add> hidden_states = self.LayerNorm(hidden_states) <add> return hidden_states <add> <add> <add>class TFBertLMPredictionHead(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertLMPredictionHead, 
self).__init__(**kwargs) <add> self.vocab_size = config.vocab_size <add> self.transform = TFBertPredictionHeadTransform(config, name='transform') <add> <add> # The output weights are the same as the input embeddings, but there is <add> # an output-only bias for each token. <add> self.decoder = tf.keras.layers.Dense(config.vocab_size, use_bias=False, name='decoder') <add> <add> def build(self, input_shape): <add> self.bias = self.add_weight(shape=(self.vocab_size,), <add> initializer='zeros', <add> trainable=True, <add> name='bias') <add> <add> def call(self, hidden_states): <add> hidden_states = self.transform(hidden_states) <add> hidden_states = self.decoder(hidden_states) + self.bias <add> return hidden_states <add> <add> <add>class TFBertMLMHead(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertMLMHead, self).__init__(**kwargs) <add> self.predictions = TFBertLMPredictionHead(config, name='predictions') <add> <add> def call(self, sequence_output): <add> prediction_scores = self.predictions(sequence_output) <add> return prediction_scores <add> <add> <add>class TFBertNSPHead(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertNSPHead, self).__init__(**kwargs) <add> self.seq_relationship = tf.keras.layers.Dense(2, name='seq_relationship') <add> <add> def call(self, pooled_output): <add> seq_relationship_score = self.seq_relationship(pooled_output) <add> return seq_relationship_score <add> <add> <add>class TFBertMainLayer(tf.keras.layers.Layer): <add> def __init__(self, config, **kwargs): <add> super(TFBertMainLayer, self).__init__(**kwargs) <add> self.num_hidden_layers = config.num_hidden_layers <add> <add> self.embeddings = TFBertEmbeddings(config, name='embeddings') <add> self.encoder = TFBertEncoder(config, name='encoder') <add> self.pooler = TFBertPooler(config, name='pooler') <add> <add> # self.apply(self.init_weights) # TODO check weights initialization <add> <add> def 
_resize_token_embeddings(self, new_num_tokens): <add> raise NotImplementedError <add> <add> def _prune_heads(self, heads_to_prune): <add> """ Prunes heads of the model. <add> heads_to_prune: dict of {layer_num: list of heads to prune in this layer} <add> See base class PreTrainedModel <add> """ <add> raise NotImplementedError <add> <add> def call(self, inputs, training=False): <add> if not isinstance(inputs, (dict, tuple, list)): <add> input_ids = inputs <add> attention_mask, head_mask, position_ids, token_type_ids = None, None, None, None <add> elif isinstance(inputs, (tuple, list)): <add> input_ids = inputs[0] <add> attention_mask = inputs[1] if len(inputs) > 1 else None <add> token_type_ids = inputs[2] if len(inputs) > 2 else None <add> position_ids = inputs[3] if len(inputs) > 3 else None <add> head_mask = inputs[4] if len(inputs) > 4 else None <add> assert len(inputs) <= 5, "Too many inputs." <add> else: <add> input_ids = inputs.pop('input_ids') <add> attention_mask = inputs.pop('attention_mask', None) <add> token_type_ids = inputs.pop('token_type_ids', None) <add> position_ids = inputs.pop('position_ids', None) <add> head_mask = inputs.pop('head_mask', None) <add> assert len(inputs) == 0, "Unexpected inputs detected: {}. Check inputs dict key names.".format(list(inputs.keys())) <add> <add> if attention_mask is None: <add> attention_mask = tf.fill(tf.shape(input_ids), 1) <add> if token_type_ids is None: <add> token_type_ids = tf.fill(tf.shape(input_ids), 0) <add> <add> # We create a 3D attention mask from a 2D tensor mask. <add> # Sizes are [batch_size, 1, 1, to_seq_length] <add> # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] <add> # this attention mask is more simple than the triangular masking of causal attention <add> # used in OpenAI GPT, we just need to prepare the broadcast dimension here. 
<add> extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :] <add> <add> # Since attention_mask is 1.0 for positions we want to attend and 0.0 for <add> # masked positions, this operation will create a tensor which is 0.0 for <add> # positions we want to attend and -10000.0 for masked positions. <add> # Since we are adding it to the raw scores before the softmax, this is <add> # effectively the same as removing these entirely. <add> <add> extended_attention_mask = tf.cast(extended_attention_mask, tf.float32) <add> extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 <add> <add> # Prepare head mask if needed <add> # 1.0 in head_mask indicate we keep the head <add> # attention_probs has shape bsz x n_heads x N x N <add> # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] <add> # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] <add> if not head_mask is None: <add> raise NotImplementedError <add> else: <add> head_mask = [None] * self.num_hidden_layers <add> # head_mask = tf.constant([0] * self.num_hidden_layers) <add> <add> embedding_output = self.embeddings([input_ids, position_ids, token_type_ids], training=training) <add> encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training) <add> <add> sequence_output = encoder_outputs[0] <add> pooled_output = self.pooler(sequence_output) <add> <add> outputs = (sequence_output, pooled_output,) + encoder_outputs[1:] # add hidden_states and attentions if they are here <add> return outputs # sequence_output, pooled_output, (hidden_states), (attentions) <add> <add>class TFBertPreTrainedModel(TFPreTrainedModel): <add> """ An abstract class to handle weights initialization and <add> a simple interface for dowloading and loading pretrained models. 
<add> """ <add> config_class = BertConfig <add> pretrained_model_archive_map = TF_BERT_PRETRAINED_MODEL_ARCHIVE_MAP <add> load_pt_weights = load_pt_weights_in_bert <add> base_model_prefix = "bert" <add> <add> def __init__(self, *inputs, **kwargs): <add> super(TFBertPreTrainedModel, self).__init__(*inputs, **kwargs) <add> <add> def init_weights(self, module): <add> """ Initialize the weights. <add> """ <add> raise NotImplementedError <add> <add> <add>BERT_START_DOCSTRING = r""" The BERT model was proposed in <add> `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding`_ <add> by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer <add> pre-trained using a combination of masked language modeling objective and next sentence prediction <add> on a large corpus comprising the Toronto Book Corpus and Wikipedia. <add> <add> This model is a tf.keras.Model `tf.keras.Model`_ sub-class. Use it as a regular TF 2.0 Keras Model and <add> refer to the TF 2.0 documentation for all matter related to general usage and behavior. <add> <add> .. _`BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding`: <add> https://arxiv.org/abs/1810.04805 <add> <add> .. _`tf.keras.Model`: <add> https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model <add> <add> Important note on the model inputs: <add> The inputs of the TF 2.0 models are slightly different from the PyTorch ones since <add> TF 2.0 Keras doesn't accept named arguments with defaults values for input Tensor. <add> More precisely, input Tensors are gathered in the first arguments of the model call function: `model(inputs)`. 
<add> There are three possibilities to gather and feed the inputs to the model: <add> <add> - a single Tensor with input_ids only and nothing else: `model(inputs_ids) <add> - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: <add> `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` <add> - a dictionary with one or several input Tensors associaed to the input names given in the docstring: <add> `model({'input_ids': input_ids, 'token_type_ids': token_type_ids})` <add> <add> Parameters: <add> config (:class:`~pytorch_transformers.BertConfig`): Model configuration class with all the parameters of the model. <add> Initializing with a config file does not load the weights associated with the model, only the configuration. <add> Check out the :meth:`~pytorch_transformers.PreTrainedModel.from_pretrained` method to load the model weights. <add>""" <add> <add>BERT_INPUTS_DOCSTRING = r""" <add> Inputs: <add> **input_ids**: ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``: <add> Indices of input sequence tokens in the vocabulary. <add> To match pre-training, BERT input sequence should be formatted with [CLS] and [SEP] tokens as follows: <add> <add> (a) For sequence pairs: <add> <add> ``tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]`` <add> <add> ``token_type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1`` <add> <add> (b) For single sequences: <add> <add> ``tokens: [CLS] the dog is hairy . [SEP]`` <add> <add> ``token_type_ids: 0 0 0 0 0 0 0`` <add> <add> Bert is a model with absolute position embeddings so it's usually advised to pad the inputs on <add> the right rather than the left. <add> <add> Indices can be obtained using :class:`pytorch_transformers.BertTokenizer`. <add> See :func:`pytorch_transformers.PreTrainedTokenizer.encode` and <add> :func:`pytorch_transformers.PreTrainedTokenizer.convert_tokens_to_ids` for details. 
<add> **attention_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(batch_size, sequence_length)``: <add> Mask to avoid performing attention on padding token indices. <add> Mask values selected in ``[0, 1]``: <add> ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens. <add> **token_type_ids**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``: <add> Segment token indices to indicate first and second portions of the inputs. <add> Indices are selected in ``[0, 1]``: ``0`` corresponds to a `sentence A` token, ``1`` <add> corresponds to a `sentence B` token <add> (see `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding`_ for more details). <add> **position_ids**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``: <add> Indices of positions of each input sequence tokens in the position embeddings. <add> Selected in the range ``[0, config.max_position_embeddings - 1]``. <add> **head_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(num_heads,)`` or ``(num_layers, num_heads)``: <add> Mask to nullify selected heads of the self-attention modules. <add> Mask values selected in ``[0, 1]``: <add> ``1`` indicates the head is **not masked**, ``0`` indicates the head is **masked**. <add>""" <add> <add>@add_start_docstrings("The bare Bert Model transformer outputing raw hidden-states without any specific head on top.", <add> BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING) <add>class TFBertModel(TFBertPreTrainedModel): <add> r""" <add> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: <add> **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)`` <add> Sequence of hidden-states at the output of the last layer of the model. 
<add> **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)`` <add> Last layer hidden-state of the first token of the sequence (classification token) <add> further processed by a Linear layer and a Tanh activation function. The Linear <add> layer weights are trained from the next sentence prediction (classification) <add> objective during Bert pretraining. This output is usually *not* a good summary <add> of the semantic content of the input, you're often better with averaging or pooling <add> the sequence of hidden-states for the whole input sequence. <add> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) <add> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) <add> of shape ``(batch_size, sequence_length, hidden_size)``: <add> Hidden-states of the model at the output of each layer plus the initial embedding outputs. <add> **attentions**: (`optional`, returned when ``config.output_attentions=True``) <add> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: <add> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
<add>
<add> Examples::
<add>
<add> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
<add> model = TFBertModel.from_pretrained('bert-base-uncased')
<add> input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")]) # Batch size 1
<add> outputs = model(input_ids)
<add> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
<add>
<add> """
<add> def __init__(self, config):
<add> super(TFBertModel, self).__init__(config)
<add> self.bert = TFBertMainLayer(config, name='bert')
<add>
<add> def call(self, inputs, training=False):
<add> outputs = self.bert(inputs, training=training)
<add> return outputs
<add>
<add>
<add>@add_start_docstrings("""Bert Model with two heads on top as done during the pre-training:
<add> a `masked language modeling` head and a `next sentence prediction (classification)` head. """,
<add> BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
<add>class TFBertForPreTraining(TFBertPreTrainedModel):
<add> r"""
<add> **masked_lm_labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
<add> Labels for computing the masked language modeling loss.
<add> Indices should be in ``[-1, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
<add> Tokens with indices set to ``-1`` are ignored (masked), the loss is only computed for the tokens with labels
<add> in ``[0, ..., config.vocab_size]``
<add> **next_sentence_label**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
<add> Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see ``input_ids`` docstring)
<add> Indices should be in ``[0, 1]``.
<add> ``0`` indicates sequence B is a continuation of sequence A,
<add> ``1`` indicates sequence B is a random sequence.
<add> <add> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: <add> **loss**: (`optional`, returned when both ``masked_lm_labels`` and ``next_sentence_label`` are provided) ``torch.FloatTensor`` of shape ``(1,)``: <add> Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. <add> **prediction_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, config.vocab_size)`` <add> Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). <add> **seq_relationship_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, 2)`` <add> Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). <add> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) <add> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) <add> of shape ``(batch_size, sequence_length, hidden_size)``: <add> Hidden-states of the model at the output of each layer plus the initial embedding outputs. <add> **attentions**: (`optional`, returned when ``config.output_attentions=True``) <add> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: <add> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
<add>
<add> Examples::
<add>
<add> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
<add> model = TFBertForPreTraining.from_pretrained('bert-base-uncased')
<add> input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")]) # Batch size 1
<add> outputs = model(input_ids)
<add> prediction_scores, seq_relationship_scores = outputs[:2]
<add>
<add> """
<add> def __init__(self, config):
<add> super(TFBertForPreTraining, self).__init__(config)
<add>
<add> self.bert = TFBertMainLayer(config, name='bert')
<add> self.cls_mlm = TFBertMLMHead(config, name='cls_mlm')
<add> self.cls_nsp = TFBertNSPHead(config, name='cls_nsp')
<add>
<add> # self.apply(self.init_weights) # TODO check added weights initialization
<add> self.tie_weights()
<add>
<add> def tie_weights(self):
<add> """ Make sure we are sharing the input and output embeddings.
<add> """
<add> pass # TODO add weights tying
<add>
<add> def call(self, inputs, training=False):
<add> outputs = self.bert(inputs, training=training)
<add>
<add> sequence_output, pooled_output = outputs[:2]
<add> prediction_scores = self.cls_mlm(sequence_output)
<add> seq_relationship_score = self.cls_nsp(pooled_output)
<add>
<add> outputs = (prediction_scores, seq_relationship_score,) + outputs[2:] # add hidden states and attention if they are here
<add>
<add> # if masked_lm_labels is not None and next_sentence_label is not None:
<add> # loss_fct = CrossEntropyLoss(ignore_index=-1)
<add> # masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
<add> # next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
<add> # total_loss = masked_lm_loss + next_sentence_loss
<add> # outputs = (total_loss,) + outputs
<add> # TODO add example with losses using model.compile and a dictionary of losses (give names to the output layers)
<add>
<add> return outputs # prediction_scores, seq_relationship_score, (hidden_states), (attentions)
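The `BERT_START_DOCSTRING` above documents three calling conventions (a single tensor, an ordered list, or a dict keyed by input name). The following is a minimal, framework-free sketch of how such input dispatch could normalize all three forms into one dict; the helper name and the exact validation are illustrative assumptions, not the library's actual implementation:

```python
def normalize_inputs(inputs, input_names=("input_ids", "attention_mask", "token_type_ids")):
    """Map the three accepted calling conventions onto a single dict.

    Accepts: a dict keyed by input name, a list/tuple of tensors in the
    docstring order, or a bare tensor (interpreted as input_ids alone).
    Illustrative sketch only -- not the library's dispatch code.
    """
    if isinstance(inputs, dict):
        unknown = set(inputs) - set(input_names)
        if unknown:
            raise ValueError("Unexpected input names: %s" % sorted(unknown))
        return dict(inputs)
    if isinstance(inputs, (list, tuple)):
        if len(inputs) > len(input_names):
            raise ValueError("Too many positional inputs")
        # Pair tensors with names in the documented order.
        return {name: tensor for name, tensor in zip(input_names, inputs)}
    # Anything else is treated as the input_ids tensor by itself.
    return {"input_ids": inputs}
```

A `call` method built this way can unpack the resulting dict and apply defaults (e.g. an all-ones attention mask) for any missing entries.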
<add> <add> <add>@add_start_docstrings("""Bert Model with a `language modeling` head on top. """, <add> BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING) <add>class TFBertForMaskedLM(TFBertPreTrainedModel): <add> r""" <add> **masked_lm_labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``: <add> Labels for computing the masked language modeling loss. <add> Indices should be in ``[-1, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) <add> Tokens with indices set to ``-1`` are ignored (masked), the loss is only computed for the tokens with labels <add> in ``[0, ..., config.vocab_size]`` <add> <add> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: <add> **loss**: (`optional`, returned when ``masked_lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``: <add> Masked language modeling loss. <add> **prediction_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, config.vocab_size)`` <add> Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). <add> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) <add> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) <add> of shape ``(batch_size, sequence_length, hidden_size)``: <add> Hidden-states of the model at the output of each layer plus the initial embedding outputs. <add> **attentions**: (`optional`, returned when ``config.output_attentions=True``) <add> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: <add> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
<add>
<add> Examples::
<add>
<add> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
<add> model = TFBertForMaskedLM.from_pretrained('bert-base-uncased')
<add> input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")]) # Batch size 1
<add> outputs = model(input_ids)
<add> prediction_scores = outputs[0]
<add>
<add> """
<add> def __init__(self, config):
<add> super(TFBertForMaskedLM, self).__init__(config)
<add>
<add> self.bert = TFBertMainLayer(config, name='bert')
<add> self.cls_mlm = TFBertMLMHead(config, name='cls_mlm')
<add>
<add> # self.apply(self.init_weights)
<add> self.tie_weights()
<add>
<add> def tie_weights(self):
<add> """ Make sure we are sharing the input and output embeddings.
<add> """
<add> pass # TODO add weights tying
<add>
<add> def call(self, inputs, training=False):
<add> outputs = self.bert(inputs, training=training)
<add>
<add> sequence_output = outputs[0]
<add> prediction_scores = self.cls_mlm(sequence_output)
<add>
<add> outputs = (prediction_scores,) + outputs[2:] # Add hidden states and attention if they are here
<add> # if masked_lm_labels is not None:
<add> # loss_fct = CrossEntropyLoss(ignore_index=-1)
<add> # masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
<add> # outputs = (masked_lm_loss,) + outputs
<add> # TODO example with losses
<add>
<add> return outputs # prediction_scores, (hidden_states), (attentions)
<add>
<add>
<add>@add_start_docstrings("""Bert Model with a `next sentence prediction (classification)` head on top. """,
<add> BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
<add>class TFBertForNextSentencePrediction(TFBertPreTrainedModel):
<add> r"""
<add> **next_sentence_label**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
<add> Labels for computing the next sequence prediction (classification) loss.
Input should be a sequence pair (see ``input_ids`` docstring) <add> Indices should be in ``[0, 1]``. <add> ``0`` indicates sequence B is a continuation of sequence A, <add> ``1`` indicates sequence B is a random sequence. <add> <add> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs: <add> **loss**: (`optional`, returned when ``next_sentence_label`` is provided) ``torch.FloatTensor`` of shape ``(1,)``: <add> Next sequence prediction (classification) loss. <add> **seq_relationship_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, 2)`` <add> Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). <add> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``) <add> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings) <add> of shape ``(batch_size, sequence_length, hidden_size)``: <add> Hidden-states of the model at the output of each layer plus the initial embedding outputs. <add> **attentions**: (`optional`, returned when ``config.output_attentions=True``) <add> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``: <add> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
<add>
<add> Examples::
<add>
<add> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
<add> model = TFBertForNextSentencePrediction.from_pretrained('bert-base-uncased')
<add> input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")]) # Batch size 1
<add> outputs = model(input_ids)
<add> seq_relationship_scores = outputs[0]
<add>
<add> """
<add> def __init__(self, config):
<add> super(TFBertForNextSentencePrediction, self).__init__(config)
<add>
<add> self.bert = TFBertMainLayer(config, name='bert')
<add> self.cls_nsp = TFBertNSPHead(config, name='cls_nsp')
<add>
<add> # self.apply(self.init_weights)
<add>
<add> def call(self, inputs, training=False):
<add> outputs = self.bert(inputs, training=training)
<add>
<add> pooled_output = outputs[1]
<add> seq_relationship_score = self.cls_nsp(pooled_output)
<add>
<add> outputs = (seq_relationship_score,) + outputs[2:] # add hidden states and attention if they are here
<add> # if next_sentence_label is not None:
<add> # loss_fct = CrossEntropyLoss(ignore_index=-1)
<add> # next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
<add> # outputs = (next_sentence_loss,) + outputs
<add>
<add> return outputs # seq_relationship_score, (hidden_states), (attentions)
<ide><path>pytorch_transformers/modeling_tf_utils.py
<add># coding=utf-8
<add># Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
<add># Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
<add>#
<add># Licensed under the Apache License, Version 2.0 (the "License");
<add># you may not use this file except in compliance with the License.
<add># You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
<add>"""TF general model utils."""
<add>
<add>from __future__ import (absolute_import, division, print_function,
<add> unicode_literals)
<add>
<add>import logging
<add>import os
<add>
<add>import tensorflow as tf
<add>
<add>from .configuration_utils import PretrainedConfig
<add>from .file_utils import cached_path, WEIGHTS_NAME, TF_WEIGHTS_NAME
<add>
<add>logger = logging.getLogger(__name__)
<add>
<add>
<add>class TFPreTrainedModel(tf.keras.Model):
<add> r""" Base class for all TF models.
<add>
<add> :class:`~pytorch_transformers.TFPreTrainedModel` takes care of storing the configuration of the models and handles methods for loading/downloading/saving models
<add> as well as a few methods common to all models to (i) resize the input embeddings and (ii) prune heads in the self-attention heads.
<add>
<add> Class attributes (overridden by derived classes):
<add> - ``config_class``: a class derived from :class:`~pytorch_transformers.PretrainedConfig` to use as configuration class for this model architecture.
<add> - ``pretrained_model_archive_map``: a python ``dict`` with `short-cut-names` (string) as keys and `url` (string) of associated pretrained weights as values.
<add> - ``load_pt_weights``: a python ``method`` for loading a PyTorch checkpoint in a TF 2.0 model, taking as arguments:
<add>
<add> - ``model``: an instance of the relevant subclass of :class:`~pytorch_transformers.TFPreTrainedModel`,
<add> - ``config``: an instance of the relevant subclass of :class:`~pytorch_transformers.PretrainedConfig`,
<add> - ``path``: a path (string) to the PyTorch checkpoint.
<add>
<add> - ``base_model_prefix``: a string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model.
<add> """
<add> config_class = None
<add> pretrained_model_archive_map = {}
<add> load_pt_weights = lambda model, config, path: None
<add> base_model_prefix = ""
<add>
<add> def __init__(self, config, *inputs, **kwargs):
<add> super(TFPreTrainedModel, self).__init__()
<add> if not isinstance(config, PretrainedConfig):
<add> raise ValueError(
<add> "Parameter config in `{}(config)` should be an instance of class `PretrainedConfig`. "
<add> "To create a model from a pretrained model use "
<add> "`model = {}.from_pretrained(PRETRAINED_MODEL_NAME)`".format(
<add> self.__class__.__name__, self.__class__.__name__
<add> ))
<add> # Save config in model
<add> self.config = config
<add>
<add> def _get_resized_embeddings(self, old_embeddings, new_num_tokens=None):
<add> """ Build a resized Embedding Module from a provided token Embedding Module.
<add> Increasing the size will add newly initialized vectors at the end
<add> Reducing the size will remove vectors from the end
<add>
<add> Args:
<add> new_num_tokens: (`optional`) int
<add> New number of tokens in the embedding matrix.
<add> Increasing the size will add newly initialized vectors at the end
<add> Reducing the size will remove vectors from the end
<add> If not provided or None: return the provided token Embedding Module.
<add> Return: ``torch.nn.Embeddings``
<add> Pointer to the resized Embedding Module or the old Embedding Module if new_num_tokens is None
<add> """
<add> raise NotImplementedError
<add>
<add> def _tie_or_clone_weights(self, first_module, second_module):
<add> """ Tie or clone module weights depending on whether we are using TorchScript or not
<add> """
<add> raise NotImplementedError
<add>
<add> def resize_token_embeddings(self, new_num_tokens=None):
<add> """ Resize input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
<add> Take care of tying weights embeddings afterwards if the model class has a `tie_weights()` method.
<add>
<add> Arguments:
<add>
<add> new_num_tokens: (`optional`) int:
<add> New number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end.
<add> If not provided or None: does nothing and just returns a pointer to the input tokens ``torch.nn.Embeddings`` Module of the model.
<add>
<add> Return: ``torch.nn.Embeddings``
<add> Pointer to the input tokens Embeddings Module of the model
<add> """
<add> raise NotImplementedError
<add>
<add> def prune_heads(self, heads_to_prune):
<add> """ Prunes heads of the base model.
<add>
<add> Arguments:
<add>
<add> heads_to_prune: dict with keys being selected layer indices (`int`) and associated values being the list of heads to prune in said layer (list of `int`).
<add> """
<add> raise NotImplementedError
<add>
<add> def save_pretrained(self, save_directory):
<add> """ Save a model and its configuration file to a directory, so that it
<add> can be re-loaded using the `:func:`~pytorch_transformers.PreTrainedModel.from_pretrained`` class method.
<add> """
<add> raise NotImplementedError
<add>
<add> @classmethod
<add> def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
<add> r"""Instantiate a pretrained TF 2.0 model from a pre-trained model configuration.
<add>
<add> Dropout is deactivated by default (the model is called with ``training=False``).
<add> To run the model in training mode (with Dropout modules activated), pass ``training=True`` when calling it.
<add>
<add> The warning ``Weights from XXX not initialized from pretrained model`` means that the weights of XXX do not come pre-trained with the rest of the model.
<add> It is up to you to train those weights with a downstream fine-tuning task.
<add>
<add> The warning ``Weights from XXX not used in YYY`` means that the layer XXX is not used by YYY, therefore those weights are discarded.
<add>
<add> Parameters:
<add> pretrained_model_name_or_path: either:
<add>
<add> - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
<add> - a path to a `directory` containing model weights saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
<add> - a path or url to a `PyTorch state_dict save file` (e.g. `./pt_model/pytorch_model.bin`). In this case, ``from_pt`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
<add>
<add> model_args: (`optional`) Sequence of positional arguments:
<add> All remaining positional arguments will be passed to the underlying model's ``__init__`` method
<add>
<add> config: (`optional`) instance of a class derived from :class:`~pytorch_transformers.PretrainedConfig`:
<add> Configuration for the model to use instead of an automatically loaded configuration.
Configuration can be automatically loaded when:
<add>
<add> - the model is a model provided by the library (loaded with the ``shortcut-name`` string of a pretrained model), or
<add> - the model was saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory.
<add> - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.
<add>
<add> from_pt: (`optional`) boolean, default False:
<add> Load the model weights from a PyTorch state_dict save file (see docstring of pretrained_model_name_or_path argument).
<add>
<add> cache_dir: (`optional`) string:
<add> Path to a directory in which a downloaded pre-trained model
<add> configuration should be cached if the standard cache should not be used.
<add>
<add> force_download: (`optional`) boolean, default False:
<add> Force to (re-)download the model weights and configuration files and override the cached versions if they exist.
<add>
<add> proxies: (`optional`) dict, default None:
<add> A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
<add> The proxies are used on each request.
<add>
<add> output_loading_info: (`optional`) boolean:
<add> Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.
<add>
<add> kwargs: (`optional`) Remaining dictionary of keyword arguments:
<add> Can be used to update the configuration object (after it has been loaded) and initiate the model. (e.g. ``output_attention=True``).
Behave differently depending on whether a `config` is provided or automatically loaded:
<add>
<add> - If a configuration is provided with ``config``, ``**kwargs`` will be directly passed to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done)
<add> - If a configuration is not provided, ``kwargs`` will be first passed to the configuration class initialization function (:func:`~pytorch_transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied ``kwargs`` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
<add>
<add> Examples::
<add>
<add> model = BertModel.from_pretrained('bert-base-uncased') # Download model and configuration from S3 and cache.
<add> model = BertModel.from_pretrained('./test/saved_model/') # E.g. model was saved using `save_pretrained('./test/saved_model/')`
<add> model = BertModel.from_pretrained('bert-base-uncased', output_attention=True) # Update configuration during loading
<add> assert model.config.output_attention == True
<add> # Loading from a PyTorch checkpoint file instead of a TF 2.0 model (slower)
<add> config = BertConfig.from_json_file('./pt_model/my_pt_model_config.json')
<add> model = BertModel.from_pretrained('./pt_model/pytorch_model.bin', from_pt=True, config=config)
<add>
<add> """
<add> config = kwargs.pop('config', None)
<add> cache_dir = kwargs.pop('cache_dir', None)
<add> from_pt = kwargs.pop('from_pt', False)
<add> force_download = kwargs.pop('force_download', False)
<add> proxies = kwargs.pop('proxies', None)
<add> output_loading_info = kwargs.pop('output_loading_info', False)
<add>
<add> # Load config
<add> if config is None:
<add> config, model_kwargs = cls.config_class.from_pretrained(
<add> pretrained_model_name_or_path, *model_args,
<add>
cache_dir=cache_dir, return_unused_kwargs=True, <add> force_download=force_download, <add> **kwargs <add> ) <add> else: <add> model_kwargs = kwargs <add> <add> # Load model <add> if pretrained_model_name_or_path in cls.pretrained_model_archive_map: <add> archive_file = cls.pretrained_model_archive_map[pretrained_model_name_or_path] <add> elif os.path.isdir(pretrained_model_name_or_path): <add> if from_pt: <add> # Load from a PyTorch checkpoint <add> archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) <add> else: <add> archive_file = os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME) <add> else: <add> archive_file = pretrained_model_name_or_path <add> # redirect to the cache, if necessary <add> try: <add> resolved_archive_file = cached_path(archive_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) <add> except EnvironmentError: <add> if pretrained_model_name_or_path in cls.pretrained_model_archive_map: <add> logger.error( <add> "Couldn't reach server at '{}' to download pretrained weights.".format( <add> archive_file)) <add> else: <add> logger.error( <add> "Model name '{}' was not found in model name list ({}). " <add> "We assumed '{}' was a path or url but couldn't find any file " <add> "associated to this path or url.".format( <add> pretrained_model_name_or_path, <add> ', '.join(cls.pretrained_model_archive_map.keys()), <add> archive_file)) <add> return None <add> if resolved_archive_file == archive_file: <add> logger.info("loading weights file {}".format(archive_file)) <add> else: <add> logger.info("loading weights file {} from cache at {}".format( <add> archive_file, resolved_archive_file)) <add> <add> # Instantiate model. 
<add>
<add> model = cls(config, *model_args, **model_kwargs)
<add>
<add> if from_pt:
<add> # Load from a PyTorch checkpoint
<add> return cls.load_pt_weights(model, config, resolved_archive_file)
<add>
<add> inputs = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
<add> ret = model(inputs, training=False) # build the network with dummy inputs
<add>
<add> # 'by_name' allows us to do transfer learning by skipping/adding layers
<add> # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357
<add> model.load_weights(resolved_archive_file, by_name=True)
<add>
<add> ret = model(inputs, training=False) # Make sure restore ops are run
<add>
<add> # if hasattr(model, 'tie_weights'):
<add> # model.tie_weights() # TODO make sure word embedding weights are still tied
<add>
<add> if output_loading_info:
<add> # Keras' `load_weights(by_name=True)` does not report mismatched weights,
<add> # so return empty placeholders until that tracking is implemented.
<add> loading_info = {"missing_keys": [], "unexpected_keys": [], "error_msgs": []}
<add> return model, loading_info
<add>
<add> return model
<ide><path>pytorch_transformers/tests/modeling_tf_common_test.py
<add># coding=utf-8
<add># Copyright 2019 HuggingFace Inc.
<add>#
<add># Licensed under the Apache License, Version 2.0 (the "License");
<add># you may not use this file except in compliance with the License.
<add># You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
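The `from_pretrained` implementation above resolves the weights file in a fixed order: a shortcut name in the archive map first, then a local directory (picking the PyTorch or TF file name depending on `from_pt`), and otherwise the argument is treated as a raw path/URL. A simplified, framework-free sketch of that ordering follows; the `tf_model.h5` value for `TF_WEIGHTS_NAME` is an assumption for illustration, since the real constant lives in `file_utils`:

```python
import os

WEIGHTS_NAME = "pytorch_model.bin"  # PyTorch file name, as in the docstring above
TF_WEIGHTS_NAME = "tf_model.h5"     # assumed value; the real one comes from file_utils

def resolve_archive_file(name_or_path, archive_map, from_pt=False):
    """Mirror the branch order used in from_pretrained above:
    shortcut name -> local directory -> raw path/URL."""
    if name_or_path in archive_map:
        # Known shortcut name: use the mapped URL.
        return archive_map[name_or_path]
    if os.path.isdir(name_or_path):
        # Local save directory: pick the expected file name inside it.
        fname = WEIGHTS_NAME if from_pt else TF_WEIGHTS_NAME
        return os.path.join(name_or_path, fname)
    # Otherwise assume a direct path or URL to a weights file.
    return name_or_path
```

The resolved string is then handed to `cached_path`, which downloads and caches remote files and passes local paths through unchanged.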
<add>from __future__ import absolute_import <add>from __future__ import division <add>from __future__ import print_function <add> <add>import copy <add>import os <add>import shutil <add>import json <add>import random <add>import uuid <add> <add>import unittest <add>import logging <add> <add>import tensorflow as tf <add> <add>from pytorch_transformers import TFPreTrainedModel <add># from pytorch_transformers.modeling_bert import BertModel, BertConfig, BERT_PRETRAINED_MODEL_ARCHIVE_MAP <add> <add> <add>def _config_zero_init(config): <add> configs_no_init = copy.deepcopy(config) <add> for key in configs_no_init.__dict__.keys(): <add> if '_range' in key or '_std' in key: <add> setattr(configs_no_init, key, 0.0) <add> return configs_no_init <add> <add>class TFCommonTestCases: <add> <add> class TFCommonModelTester(unittest.TestCase): <add> <add> model_tester = None <add> all_model_classes = () <add> test_torchscript = True <add> test_pruning = True <add> test_resize_embeddings = True <add> <add> def test_initialization(self): <add> pass <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # configs_no_init = _config_zero_init(config) <add> # for model_class in self.all_model_classes: <add> # model = model_class(config=configs_no_init) <add> # for name, param in model.named_parameters(): <add> # if param.requires_grad: <add> # self.assertIn(param.data.mean().item(), [0.0, 1.0], <add> # msg="Parameter {} of model {} seems not properly initialized".format(name, model_class)) <add> <add> <add> def test_attention_outputs(self): <add> pass <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # for model_class in self.all_model_classes: <add> # config.output_attentions = True <add> # config.output_hidden_states = False <add> # model = model_class(config) <add> # model.eval() <add> # outputs = model(**inputs_dict) <add> # attentions = outputs[-1] <add> # 
self.assertEqual(model.config.output_attentions, True) <add> # self.assertEqual(model.config.output_hidden_states, False) <add> # self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) <add> # self.assertListEqual( <add> # list(attentions[0].shape[-3:]), <add> # [self.model_tester.num_attention_heads, <add> # self.model_tester.seq_length, <add> # self.model_tester.key_len if hasattr(self.model_tester, 'key_len') else self.model_tester.seq_length]) <add> # out_len = len(outputs) <add> <add> # # Check attention is always last and order is fine <add> # config.output_attentions = True <add> # config.output_hidden_states = True <add> # model = model_class(config) <add> # model.eval() <add> # outputs = model(**inputs_dict) <add> # self.assertEqual(out_len+1, len(outputs)) <add> # self.assertEqual(model.config.output_attentions, True) <add> # self.assertEqual(model.config.output_hidden_states, True) <add> <add> # attentions = outputs[-1] <add> # self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) <add> # self.assertListEqual( <add> # list(attentions[0].shape[-3:]), <add> # [self.model_tester.num_attention_heads, <add> # self.model_tester.seq_length, <add> # self.model_tester.key_len if hasattr(self.model_tester, 'key_len') else self.model_tester.seq_length]) <add> <add> <add> def test_headmasking(self): <add> pass <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # config.output_attentions = True <add> # config.output_hidden_states = True <add> # configs_no_init = _config_zero_init(config) # To be sure we have no Nan <add> # for model_class in self.all_model_classes: <add> # model = model_class(config=configs_no_init) <add> # model.eval() <add> <add> # # Prepare head_mask <add> # # Set require_grad after having prepared the tensor to avoid error (leaf variable has been moved into the graph interior) <add> # head_mask = torch.ones(self.model_tester.num_hidden_layers, 
self.model_tester.num_attention_heads) <add> # head_mask[0, 0] = 0 <add> # head_mask[-1, :-1] = 0 <add> # head_mask.requires_grad_(requires_grad=True) <add> # inputs = inputs_dict.copy() <add> # inputs['head_mask'] = head_mask <add> <add> # outputs = model(**inputs) <add> <add> # # Test that we can get a gradient back for importance score computation <add> # output = sum(t.sum() for t in outputs[0]) <add> # output = output.sum() <add> # output.backward() <add> # multihead_outputs = head_mask.grad <add> <add> # attentions = outputs[-1] <add> # hidden_states = outputs[-2] <add> <add> # # Remove Nan <add> <add> # self.assertIsNotNone(multihead_outputs) <add> # self.assertEqual(len(multihead_outputs), self.model_tester.num_hidden_layers) <add> # self.assertAlmostEqual( <add> # attentions[0][..., 0, :, :].flatten().sum().item(), 0.0) <add> # self.assertNotEqual( <add> # attentions[0][..., -1, :, :].flatten().sum().item(), 0.0) <add> # self.assertNotEqual( <add> # attentions[1][..., 0, :, :].flatten().sum().item(), 0.0) <add> # self.assertAlmostEqual( <add> # attentions[-1][..., -2, :, :].flatten().sum().item(), 0.0) <add> # self.assertNotEqual( <add> # attentions[-1][..., -1, :, :].flatten().sum().item(), 0.0) <add> <add> <add> def test_head_pruning(self): <add> pass <add> # if not self.test_pruning: <add> # return <add> <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # for model_class in self.all_model_classes: <add> # config.output_attentions = True <add> # config.output_hidden_states = False <add> # model = model_class(config=config) <add> # model.eval() <add> # heads_to_prune = {0: list(range(1, self.model_tester.num_attention_heads)), <add> # -1: [0]} <add> # model.prune_heads(heads_to_prune) <add> # outputs = model(**inputs_dict) <add> <add> # attentions = outputs[-1] <add> <add> # self.assertEqual( <add> # attentions[0].shape[-3], 1) <add> # self.assertEqual( <add> # attentions[1].shape[-3], 
self.model_tester.num_attention_heads) <add> # self.assertEqual( <add> # attentions[-1].shape[-3], self.model_tester.num_attention_heads - 1) <add> <add> <add> def test_hidden_states_output(self): <add> pass <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # for model_class in self.all_model_classes: <add> # config.output_hidden_states = True <add> # config.output_attentions = False <add> # model = model_class(config) <add> # model.eval() <add> # outputs = model(**inputs_dict) <add> # hidden_states = outputs[-1] <add> # self.assertEqual(model.config.output_attentions, False) <add> # self.assertEqual(model.config.output_hidden_states, True) <add> # self.assertEqual(len(hidden_states), self.model_tester.num_hidden_layers + 1) <add> # self.assertListEqual( <add> # list(hidden_states[0].shape[-2:]), <add> # [self.model_tester.seq_length, self.model_tester.hidden_size]) <add> <add> <add> def test_resize_tokens_embeddings(self): <add> pass <add> # original_config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> # if not self.test_resize_embeddings: <add> # return <add> <add> # for model_class in self.all_model_classes: <add> # config = copy.deepcopy(original_config) <add> # model = model_class(config) <add> <add> # model_vocab_size = config.vocab_size <add> # # Retrieve the embeddings and clone theme <add> # model_embed = model.resize_token_embeddings(model_vocab_size) <add> # cloned_embeddings = model_embed.weight.clone() <add> <add> # # Check that resizing the token embeddings with a larger vocab size increases the model's vocab size <add> # model_embed = model.resize_token_embeddings(model_vocab_size + 10) <add> # self.assertEqual(model.config.vocab_size, model_vocab_size + 10) <add> # # Check that it actually resizes the embeddings matrix <add> # self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10) <add> <add> # # Check that resizing the token embeddings with a 
smaller vocab size decreases the model's vocab size <add> # model_embed = model.resize_token_embeddings(model_vocab_size - 15) <add> # self.assertEqual(model.config.vocab_size, model_vocab_size - 15) <add> # # Check that it actually resizes the embeddings matrix <add> # self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] - 15) <add> <add> # # Check that adding and removing tokens has not modified the first part of the embedding matrix. <add> # models_equal = True <add> # for p1, p2 in zip(cloned_embeddings, model_embed.weight): <add> # if p1.data.ne(p2.data).sum() > 0: <add> # models_equal = False <add> <add> # self.assertTrue(models_equal) <add> <add> <add> def test_tie_model_weights(self): <add> pass <add> # config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() <add> <add> # def check_same_values(layer_1, layer_2): <add> # equal = True <add> # for p1, p2 in zip(layer_1.weight, layer_2.weight): <add> # if p1.data.ne(p2.data).sum() > 0: <add> # equal = False <add> # return equal <add> <add> # for model_class in self.all_model_classes: <add> # if not hasattr(model_class, 'tie_weights'): <add> # continue <add> <add> # config.torchscript = True <add> # model_not_tied = model_class(config) <add> # params_not_tied = list(model_not_tied.parameters()) <add> <add> # config_tied = copy.deepcopy(config) <add> # config_tied.torchscript = False <add> # model_tied = model_class(config_tied) <add> # params_tied = list(model_tied.parameters()) <add> <add> # # Check that the embedding layer and decoding layer are the same in size and in value <add> # self.assertGreater(len(params_not_tied), len(params_tied)) <add> <add> # # Check that after resize they remain tied. 
<add> # model_tied.resize_token_embeddings(config.vocab_size + 10) <add> # params_tied_2 = list(model_tied.parameters()) <add> # self.assertGreater(len(params_not_tied), len(params_tied)) <add> # self.assertEqual(len(params_tied_2), len(params_tied)) <add> <add> <add>def ids_tensor(shape, vocab_size, rng=None, name=None): <add> """Creates a random int32 tensor of the shape within the vocab size.""" <add> if rng is None: <add> rng = random.Random() <add> <add> total_dims = 1 <add> for dim in shape: <add> total_dims *= dim <add> <add> values = [] <add> for _ in range(total_dims): <add> values.append(rng.randint(0, vocab_size - 1)) <add> <add> return tf.constant(values, shape=shape) <add> <add> <add>class TFModelUtilsTest(unittest.TestCase): <add> def test_model_from_pretrained(self): <add> pass <add> # logging.basicConfig(level=logging.INFO) <add> # for model_name in list(BERT_PRETRAINED_MODEL_ARCHIVE_MAP.keys())[:1]: <add> # config = BertConfig.from_pretrained(model_name) <add> # self.assertIsNotNone(config) <add> # self.assertIsInstance(config, PretrainedConfig) <add> <add> # model = BertModel.from_pretrained(model_name) <add> # model, loading_info = BertModel.from_pretrained(model_name, output_loading_info=True) <add> # self.assertIsNotNone(model) <add> # self.assertIsInstance(model, PreTrainedModel) <add> # for value in loading_info.values(): <add> # self.assertEqual(len(value), 0) <add> <add> # config = BertConfig.from_pretrained(model_name, output_attentions=True, output_hidden_states=True) <add> # model = BertModel.from_pretrained(model_name, output_attentions=True, output_hidden_states=True) <add> # self.assertEqual(model.config.output_attentions, True) <add> # self.assertEqual(model.config.output_hidden_states, True) <add> # self.assertEqual(model.config, config) <add> <add> <add>if __name__ == "__main__": <add> unittest.main() <ide><path>pytorch_transformers/tests/modeling_tf_test.py <add># coding=utf-8 <add># Copyright 2018 The Google AI Language Team 
Authors. <add># <add># Licensed under the Apache License, Version 2.0 (the "License"); <add># you may not use this file except in compliance with the License. <add># You may obtain a copy of the License at <add># <add># http://www.apache.org/licenses/LICENSE-2.0 <add># <add># Unless required by applicable law or agreed to in writing, software <add># distributed under the License is distributed on an "AS IS" BASIS, <add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <add># See the License for the specific language governing permissions and <add># limitations under the License. <add>from __future__ import absolute_import <add>from __future__ import division <add>from __future__ import print_function <add> <add>import unittest <add>import shutil <add>import pytest <add> <add>import tensorflow as tf <add> <add>from pytorch_transformers import (BertConfig) <add>from pytorch_transformers.modeling_tf_bert import TFBertModel, TF_BERT_PRETRAINED_MODEL_ARCHIVE_MAP <add> <add>from .modeling_tf_common_test import (TFCommonTestCases, ids_tensor) <add>from .configuration_common_test import ConfigTester <add> <add> <add>class TFBertModelTest(TFCommonTestCases.TFCommonModelTester): <add> <add> all_model_classes = (TFBertModel,) <add> # BertForMaskedLM, BertForNextSentencePrediction, <add> # BertForPreTraining, BertForQuestionAnswering, BertForSequenceClassification, <add> # BertForTokenClassification) <add> <add> class TFBertModelTester(object): <add> <add> def __init__(self, <add> parent, <add> batch_size=13, <add> seq_length=7, <add> is_training=True, <add> use_input_mask=True, <add> use_token_type_ids=True, <add> use_labels=True, <add> vocab_size=99, <add> hidden_size=32, <add> num_hidden_layers=5, <add> num_attention_heads=4, <add> intermediate_size=37, <add> hidden_act="gelu", <add> hidden_dropout_prob=0.1, <add> attention_probs_dropout_prob=0.1, <add> max_position_embeddings=512, <add> type_vocab_size=16, <add> type_sequence_label_size=2, <add> 
initializer_range=0.02, <add> num_labels=3, <add> num_choices=4, <add> scope=None, <add> ): <add> self.parent = parent <add> self.batch_size = batch_size <add> self.seq_length = seq_length <add> self.is_training = is_training <add> self.use_input_mask = use_input_mask <add> self.use_token_type_ids = use_token_type_ids <add> self.use_labels = use_labels <add> self.vocab_size = vocab_size <add> self.hidden_size = hidden_size <add> self.num_hidden_layers = num_hidden_layers <add> self.num_attention_heads = num_attention_heads <add> self.intermediate_size = intermediate_size <add> self.hidden_act = hidden_act <add> self.hidden_dropout_prob = hidden_dropout_prob <add> self.attention_probs_dropout_prob = attention_probs_dropout_prob <add> self.max_position_embeddings = max_position_embeddings <add> self.type_vocab_size = type_vocab_size <add> self.type_sequence_label_size = type_sequence_label_size <add> self.initializer_range = initializer_range <add> self.num_labels = num_labels <add> self.num_choices = num_choices <add> self.scope = scope <add> <add> def prepare_config_and_inputs(self): <add> input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) <add> <add> input_mask = None <add> if self.use_input_mask: <add> input_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) <add> <add> token_type_ids = None <add> if self.use_token_type_ids: <add> token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) <add> <add> sequence_labels = None <add> token_labels = None <add> choice_labels = None <add> if self.use_labels: <add> sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) <add> token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) <add> choice_labels = ids_tensor([self.batch_size], self.num_choices) <add> <add> config = BertConfig( <add> vocab_size_or_config_json_file=self.vocab_size, <add> hidden_size=self.hidden_size, <add> 
num_hidden_layers=self.num_hidden_layers, <add> num_attention_heads=self.num_attention_heads, <add> intermediate_size=self.intermediate_size, <add> hidden_act=self.hidden_act, <add> hidden_dropout_prob=self.hidden_dropout_prob, <add> attention_probs_dropout_prob=self.attention_probs_dropout_prob, <add> max_position_embeddings=self.max_position_embeddings, <add> type_vocab_size=self.type_vocab_size, <add> initializer_range=self.initializer_range) <add> <add> return config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels <add> <add> def check_loss_output(self, result): <add> self.parent.assertListEqual( <add> list(result["loss"].size()), <add> []) <add> <add> def create_and_check_bert_model(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> model = TFBertModel(config=config) <add> # model.eval() <add> inputs = {'input_ids': input_ids, <add> 'attention_mask': input_mask, <add> 'token_type_ids': token_type_ids} <add> sequence_output, pooled_output = model(inputs) <add> <add> inputs = [input_ids, input_mask] <add> sequence_output, pooled_output = model(inputs) <add> <add> sequence_output, pooled_output = model(input_ids) <add> <add> result = { <add> "sequence_output": sequence_output.numpy(), <add> "pooled_output": pooled_output.numpy(), <add> } <add> self.parent.assertListEqual( <add> list(result["sequence_output"].shape), <add> [self.batch_size, self.seq_length, self.hidden_size]) <add> self.parent.assertListEqual(list(result["pooled_output"].shape), [self.batch_size, self.hidden_size]) <add> <add> <add> def create_and_check_bert_for_masked_lm(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # model = BertForMaskedLM(config=config) <add> # model.eval() <add> # loss, prediction_scores = model(input_ids, token_type_ids, input_mask, token_labels) <add> # result = { <add> # "loss": loss, <add> # "prediction_scores": 
prediction_scores, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["prediction_scores"].size()), <add> # [self.batch_size, self.seq_length, self.vocab_size]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_next_sequence_prediction(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # model = BertForNextSentencePrediction(config=config) <add> # model.eval() <add> # loss, seq_relationship_score = model(input_ids, token_type_ids, input_mask, sequence_labels) <add> # result = { <add> # "loss": loss, <add> # "seq_relationship_score": seq_relationship_score, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["seq_relationship_score"].size()), <add> # [self.batch_size, 2]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_pretraining(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # model = BertForPreTraining(config=config) <add> # model.eval() <add> # loss, prediction_scores, seq_relationship_score = model(input_ids, token_type_ids, input_mask, token_labels, sequence_labels) <add> # result = { <add> # "loss": loss, <add> # "prediction_scores": prediction_scores, <add> # "seq_relationship_score": seq_relationship_score, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["prediction_scores"].size()), <add> # [self.batch_size, self.seq_length, self.vocab_size]) <add> # self.parent.assertListEqual( <add> # list(result["seq_relationship_score"].size()), <add> # [self.batch_size, 2]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_question_answering(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # model = BertForQuestionAnswering(config=config) <add> # model.eval() <add> # loss, start_logits, end_logits = 
model(input_ids, token_type_ids, input_mask, sequence_labels, sequence_labels) <add> # result = { <add> # "loss": loss, <add> # "start_logits": start_logits, <add> # "end_logits": end_logits, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["start_logits"].size()), <add> # [self.batch_size, self.seq_length]) <add> # self.parent.assertListEqual( <add> # list(result["end_logits"].size()), <add> # [self.batch_size, self.seq_length]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_sequence_classification(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # config.num_labels = self.num_labels <add> # model = BertForSequenceClassification(config) <add> # model.eval() <add> # loss, logits = model(input_ids, token_type_ids, input_mask, sequence_labels) <add> # result = { <add> # "loss": loss, <add> # "logits": logits, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["logits"].size()), <add> # [self.batch_size, self.num_labels]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_token_classification(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # config.num_labels = self.num_labels <add> # model = BertForTokenClassification(config=config) <add> # model.eval() <add> # loss, logits = model(input_ids, token_type_ids, input_mask, token_labels) <add> # result = { <add> # "loss": loss, <add> # "logits": logits, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["logits"].size()), <add> # [self.batch_size, self.seq_length, self.num_labels]) <add> # self.check_loss_output(result) <add> <add> <add> def create_and_check_bert_for_multiple_choice(self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels): <add> pass <add> # config.num_choices = self.num_choices <add> # model = 
BertForMultipleChoice(config=config) <add> # model.eval() <add> # multiple_choice_inputs_ids = input_ids.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() <add> # multiple_choice_token_type_ids = token_type_ids.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() <add> # multiple_choice_input_mask = input_mask.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() <add> # loss, logits = model(multiple_choice_inputs_ids, <add> # multiple_choice_token_type_ids, <add> # multiple_choice_input_mask, <add> # choice_labels) <add> # result = { <add> # "loss": loss, <add> # "logits": logits, <add> # } <add> # self.parent.assertListEqual( <add> # list(result["logits"].size()), <add> # [self.batch_size, self.num_choices]) <add> # self.check_loss_output(result) <add> <add> <add> def prepare_config_and_inputs_for_common(self): <add> config_and_inputs = self.prepare_config_and_inputs() <add> (config, input_ids, token_type_ids, input_mask, <add> sequence_labels, token_labels, choice_labels) = config_and_inputs <add> inputs_dict = {'input_ids': input_ids, 'token_type_ids': token_type_ids, 'attention_mask': input_mask} <add> return config, inputs_dict <add> <add> def setUp(self): <add> self.model_tester = TFBertModelTest.TFBertModelTester(self) <add> self.config_tester = ConfigTester(self, config_class=BertConfig, hidden_size=37) <add> <add> def test_config(self): <add> self.config_tester.run_common_tests() <add> <add> def test_bert_model(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_model(*config_and_inputs) <add> <add> def test_for_masked_lm(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_masked_lm(*config_and_inputs) <add> <add> def test_for_multiple_choice(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> 
self.model_tester.create_and_check_bert_for_multiple_choice(*config_and_inputs) <add> <add> def test_for_next_sequence_prediction(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_next_sequence_prediction(*config_and_inputs) <add> <add> def test_for_pretraining(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_pretraining(*config_and_inputs) <add> <add> def test_for_question_answering(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_question_answering(*config_and_inputs) <add> <add> def test_for_sequence_classification(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_sequence_classification(*config_and_inputs) <add> <add> def test_for_token_classification(self): <add> config_and_inputs = self.model_tester.prepare_config_and_inputs() <add> self.model_tester.create_and_check_bert_for_token_classification(*config_and_inputs) <add> <add> @pytest.mark.slow <add> def test_model_from_pretrained(self): <add> cache_dir = "/tmp/pytorch_transformers_test/" <add> for model_name in list(TF_BERT_PRETRAINED_MODEL_ARCHIVE_MAP.keys())[:1]: <add> model = TFBertModel.from_pretrained(model_name, cache_dir=cache_dir) <add> shutil.rmtree(cache_dir) <add> self.assertIsNotNone(model) <add> <add>if __name__ == "__main__": <add> unittest.main()
5
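The `ids_tensor` helper in the patch above fills a tensor of the given shape with random token ids drawn from `[0, vocab_size)` and wraps the result in `tf.constant`. A framework-free Python sketch of the same fill logic (the function name `ids_list` is illustrative; it returns the flat value list rather than a TensorFlow constant, so it runs without TensorFlow installed):

```python
import random

def ids_list(shape, vocab_size, rng=None):
    """Return a flat list of random ids in [0, vocab_size) covering `shape`."""
    if rng is None:
        rng = random.Random()
    # Total element count is the product of all dimensions.
    total_dims = 1
    for dim in shape:
        total_dims *= dim
    return [rng.randint(0, vocab_size - 1) for _ in range(total_dims)]

# A seeded generator makes the draw reproducible across runs.
values = ids_list([13, 7], vocab_size=99, rng=random.Random(0))
assert len(values) == 13 * 7
assert all(0 <= v < 99 for v in values)
```

Note that `rng.randint(0, vocab_size - 1)` is inclusive on both ends, matching the original helper's bounds.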
Go
Go
improve validation for volume specs
120241937e67238fd98b839886751bda70129ba8
<ide><path>cli/compose/convert/volume.go <ide> func convertVolumeToMount(volumeSpec string, stackVolumes volumes, namespace Nam <ide> // TODO: split Windows path mappings properly <ide> parts := strings.SplitN(volumeSpec, ":", 3) <ide> <add> for _, part := range parts { <add> if strings.TrimSpace(part) == "" { <add> return mount.Mount{}, fmt.Errorf("invalid volume: %s", volumeSpec) <add> } <add> } <add> <ide> switch len(parts) { <ide> case 3: <ide> source = parts[0] <ide> func convertVolumeToMount(volumeSpec string, stackVolumes volumes, namespace Nam <ide> target = parts[1] <ide> case 1: <ide> target = parts[0] <del> default: <del> return mount.Mount{}, fmt.Errorf("invalid volume: %s", volumeSpec) <ide> } <ide> <ide> if source == "" { <ide><path>cli/compose/convert/volume_test.go <ide> func TestConvertVolumeToMountAnonymousVolume(t *testing.T) { <ide> assert.DeepEqual(t, mount, expected) <ide> } <ide> <add>func TestConvertVolumeToMountInvalidFormat(t *testing.T) { <add> namespace := NewNamespace("foo") <add> invalids := []string{"::", "::cc", ":bb:", "aa::", "aa::cc", "aa:bb:", " : : ", " : :cc", " :bb: ", "aa: : ", "aa: :cc", "aa:bb: "} <add> for _, vol := range invalids { <add> _, err := convertVolumeToMount(vol, volumes{}, namespace) <add> assert.Error(t, err, "invalid volume: "+vol) <add> } <add>} <add> <ide> func TestConvertVolumeToMountNamedVolume(t *testing.T) { <ide> stackVolumes := volumes{ <ide> "normal": composetypes.VolumeConfig{
2
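The Go change above tightens volume-spec validation: instead of only erroring when the spec has too many colon-separated parts, it now rejects any spec containing a blank (empty or whitespace-only) part, as exercised by the new `TestConvertVolumeToMountInvalidFormat` cases. A rough Python port of the parsing rule (function and variable names here are illustrative, not part of the Docker CLI):

```python
def parse_volume_spec(spec):
    """Split 'source:target:mode' (at most 3 parts) and reject blank parts."""
    parts = spec.split(":", 2)  # mirrors strings.SplitN(volumeSpec, ":", 3)
    # New validation: every part must be non-blank after trimming whitespace.
    if any(part.strip() == "" for part in parts):
        raise ValueError("invalid volume: %s" % spec)
    source, target, mode = "", "", ""
    if len(parts) == 3:
        source, target, mode = parts
    elif len(parts) == 2:
        source, target = parts
    else:
        target = parts[0]
    return source, target, mode

assert parse_volume_spec("data:/srv:ro") == ("data", "/srv", "ro")
assert parse_volume_spec("data:/srv") == ("data", "/srv", "")
for bad in ("::", "aa::cc", " :bb: "):
    try:
        parse_volume_spec(bad)
        assert False, "expected invalid volume error"
    except ValueError:
        pass
```

Checking for blank parts up front also lets the Go patch drop its `default:` error branch, since `SplitN` with a non-empty input always yields one to three parts.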
PHP
PHP
add cli tools for loading/unloading of plugins
2107d61c6c244175105a87d3860483096b8e4d20
<ide><path>src/Shell/PluginShell.php <ide> */ <ide> class PluginShell extends Shell <ide> { <del> <ide> /** <del> * Contains tasks to load and instantiate <add> * Tasks to load <ide> * <ide> * @var array <ide> */ <del> public $tasks = ['Assets']; <add> public $tasks = [ <add> 'Assets', <add> 'Load', <add> 'Unload', <add> ]; <ide> <ide> /** <ide> * Gets the option parser instance and configures it. <ide> class PluginShell extends Shell <ide> public function getOptionParser() <ide> { <ide> $parser = parent::getOptionParser(); <del> <del> $parser->description( <del> 'Plugin Shell perform various tasks related to plugin.' <del> )->addSubcommand('assets', [ <del> 'help' => 'Symlink / copy plugin assets to app\'s webroot', <del> 'parser' => $this->Assets->getOptionParser() <del> ]); <add> <add> $parser->description('Plugin Shell perform various tasks related to plugin.') <add> ->addSubcommand('assets', [ <add> 'help' => 'Symlink / copy plugin assets to app\'s webroot', <add> 'parser' => $this->Assets->getOptionParser() <add> ])->addSubcommand('load', [ <add> 'help' => 'Loads a plugin', <add> 'parser' => $this->Load->getOptionParser(), <add> ]) <add> ->addSubcommand('unload', [ <add> 'help' => 'Unloads a plugin', <add> 'parser' => $this->Unload->getOptionParser(), <add> ]); <ide> <ide> return $parser; <ide> } <ide><path>src/Shell/Task/LoadTask.php <add><?php <add>/** <add> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org) <add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * <add> * Licensed under The MIT License <add> * For full copyright and license information, please see the LICENSE.txt <add> * Redistributions of files must retain the above copyright notice. <add> * <add> * @copyright Copyright (c) Cake Software Foundation, Inc. 
(http://cakefoundation.org) <add> * @link http://cakephp.org CakePHP(tm) Project <add> * @since 3.0.0 <add> * @license http://www.opensource.org/licenses/mit-license.php MIT License <add> */ <add>namespace Cake\Shell\Task; <add> <add>use Cake\Console\Shell; <add>use Cake\Filesystem\File; <add> <add>/** <add> * Task for loading plugins. <add> * <add> */ <add>class LoadTask extends Shell <add>{ <add> /** <add> * Path to the bootstrap file. <add> * <add> * @var string <add> */ <add> public $bootstrap = null; <add> <add> /** <add> * Execution method always used for tasks. <add> * <add> * @param string $plugin The plugin name. <add> * @return bool <add> */ <add> public function main($plugin = null) <add> { <add> $this->bootstrap = ROOT . DS . 'config' . DS . 'bootstrap.php'; <add> <add> if (empty($plugin)) { <add> $this->err('<error>You must provide a plugin name in CamelCase format.</error>'); <add> $this->err('To load an "Example" plugin, run <info>`cake plugin load Example`</info>.'); <add> return false; <add> } <add> <add> return $this->_modifyBootstrap( <add> $plugin, <add> $this->params['bootstrap'], <add> $this->params['routes'], <add> $this->params['autoload'] <add> ); <add> } <add> <add> /** <add> * Update the applications bootstrap.php file. <add> * <add> * @param string $plugin Name of plugin. <add> * @param bool $hasBootstrap Whether or not bootstrap should be loaded. <add> * @param bool $hasRoutes Whether or not routes should be loaded. <add> * @param bool $hasAutoloader Whether or not there is an autoloader configured for <add> * the plugin. <add> * @return bool If modify passed. <add> */ <add> protected function _modifyBootstrap($plugin, $hasBootstrap, $hasRoutes, $hasAutoloader) <add> { <add> $bootstrap = new File($this->bootstrap, false); <add> $contents = $bootstrap->read(); <add> if (!preg_match("@\n\s*Plugin::loadAll@", $contents)) { <add> $autoloadString = $hasAutoloader ? "'autoload' => true" : ''; <add> $bootstrapString = $hasBootstrap ? 
"'bootstrap' => true" : ''; <add> $routesString = $hasRoutes ? "'routes' => true" : ''; <add> <add> $append = "\nPlugin::load('%s', [%s]);\n"; <add> $options = implode(', ', array_filter([$autoloadString, $bootstrapString, $routesString])); <add> <add> $bootstrap->append(sprintf($append, $plugin, $options)); <add> $this->out(''); <add> $this->out(sprintf('%s modified', $this->bootstrap)); <add> return true; <add> } <add> return false; <add> } <add> <add> /** <add> * GetOptionParser method. <add> * <add> * @return \Cake\Console\ConsoleOptionParser <add> */ <add> public function getOptionParser() <add> { <add> $parser = parent::getOptionParser(); <add> <add> $parser->addOption('bootstrap', [ <add> 'short' => 'b', <add> 'help' => 'Will load bootstrap.php from plugin.', <add> 'boolean' => true, <add> 'default' => false, <add> ]) <add> ->addOption('routes', [ <add> 'short' => 'r', <add> 'help' => 'Will load routes.php from plugin.', <add> 'boolean' => true, <add> 'default' => false, <add> ]) <add> ->addOption('autoload', [ <add> 'help' => 'Will autoload the plugin using CakePHP. ' . <add> 'Set to true if you are not using composer to autoload your plugin.', <add> 'boolean' => true, <add> 'default' => false, <add> ]) <add> ->addArgument('plugin', [ <add> 'help' => 'Name of the plugin to load.', <add> ]); <add> <add> return $parser; <add> } <add>} <ide><path>src/Shell/Task/UnloadTask.php <add><?php <add>/** <add> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org) <add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * <add> * Licensed under The MIT License <add> * For full copyright and license information, please see the LICENSE.txt <add> * Redistributions of files must retain the above copyright notice. <add> * <add> * @copyright Copyright (c) Cake Software Foundation, Inc. 
(http://cakefoundation.org) <add> * @link http://cakephp.org CakePHP(tm) Project <add> * @since 3.0.0 <add> * @license http://www.opensource.org/licenses/mit-license.php MIT License <add> */ <add>namespace Cake\Shell\Task; <add> <add>use Cake\Console\Shell; <add>use Cake\Filesystem\File; <add> <add>/** <add> * Task for unloading plugins. <add> * <add> */ <add>class UnloadTask extends Shell <add>{ <add> /** <add> * Path to the bootstrap file. <add> * <add> * @var string <add> */ <add> public $bootstrap = null; <add> <add> /** <add> * Execution method always used for tasks. <add> * <add> * @param string $plugin The plugin name. <add> * @return boolean if action passed. <add> */ <add> public function main($plugin = null) <add> { <add> $this->bootstrap = ROOT . DS . 'config' . DS . 'bootstrap.php'; <add> <add> if (empty($plugin)) { <add> $this->err('<error>You must provide a plugin name in CamelCase format.</error>'); <add> $this->err('To unload an "Example" plugin, run <info>`cake plugin unload Example`</info>.'); <add> return false; <add> } <add> <add> return (bool)$this->_modifyBootstrap($plugin); <add> } <add> <add> /** <add> * Update the applications bootstrap.php file. <add> * <add> * @param string $plugin Name of plugin. <add> * @return bool If modify passed. <add> */ <add> protected function _modifyBootstrap($plugin) <add> { <add> $finder = "/\nPlugin::load\((.|.\n|\n\s\s|\n\t|)+'$plugin'(.|.\n|)+\);\n/"; <add> <add> $bootstrap = new File($this->bootstrap, false); <add> $contents = $bootstrap->read(); <add> <add> if (!preg_match("@\n\s*Plugin::loadAll@", $contents)) { <add> $contents = preg_replace($finder, "", $contents); <add> <add> $bootstrap->write($contents); <add> <add> $this->out(''); <add> $this->out(sprintf('%s modified', $this->bootstrap)); <add> <add> return true; <add> } <add> return false; <add> } <add> <add> /** <add> * GetOptionParser method. 
<add> * <add> * @return \Cake\Console\ConsoleOptionParser <add> */ <add> public function getOptionParser() <add> { <add> $parser = parent::getOptionParser(); <add> <add> $parser->addArgument('plugin', [ <add> 'help' => 'Name of the plugin to load.', <add> ]); <add> <add> return $parser; <add> } <add>} <ide><path>tests/TestCase/Shell/Task/LoadTaskTest.php <add><?php <add>/** <add> * CakePHP : Rapid Development Framework (http://cakephp.org) <add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * <add> * Licensed under The MIT License <add> * For full copyright and license information, please see the LICENSE.txt <add> * Redistributions of files must retain the above copyright notice. <add> * <add> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * @link http://cakephp.org CakePHP Project <add> * @license http://www.opensource.org/licenses/mit-license.php MIT License <add> */ <add>namespace Cake\Test\TestCase\Shell\Task; <add> <add>use Cake\Core\Plugin; <add>use Cake\Filesystem\File; <add>use Cake\TestSuite\TestCase; <add> <add>/** <add> * LoadTaskTest class. <add> * <add> */ <add>class LoadTaskTest extends TestCase <add>{ <add> <add> /** <add> * setUp method <add> * <add> * @return void <add> */ <add> public function setUp() <add> { <add> parent::setUp(); <add> <add> $this->io = $this->getMock('Cake\Console\ConsoleIo', [], [], '', false); <add> <add> $this->Task = $this->getMock('Cake\Shell\Task\LoadTask', ['in', 'out', 'err', '_stop'], [$this->io]); <add> <add> $this->bootstrap = ROOT . DS . 'config' . DS . 
'bootstrap.php'; <add> <add> $bootstrap = new File($this->bootstrap, false); <add> $this->originalBootstrapContent = $bootstrap->read(); <add> } <add> <add> /** <add> * tearDown method <add> * <add> * @return void <add> */ <add> public function tearDown() <add> { <add> parent::tearDown(); <add> unset($this->shell); <add> Plugin::unload(); <add> <add> $bootstrap = new File($this->bootstrap, false); <add> $bootstrap->write($this->originalBootstrapContent); <add> } <add> <add> /** <add> * testLoad <add> * <add> * @return void <add> */ <add> public function testLoad() <add> { <add> $this->Task->params = [ <add> 'bootstrap' => false, <add> 'routes' => false, <add> 'autoload' => true, <add> ]; <add> <add> $action = $this->Task->main('TestPlugin'); <add> <add> $this->assertTrue($action); <add> <add> $expected = "Plugin::load('TestPlugin', ['autoload' => true]);"; <add> $bootstrap = new File($this->bootstrap, false); <add> $this->assertContains($expected, $bootstrap->read()); <add> } <add> <add> /** <add> * testLoadWithBootstrap <add> * <add> * @return void <add> */ <add> public function testLoadWithBootstrap() <add> { <add> $this->Task->params = [ <add> 'bootstrap' => true, <add> 'routes' => false, <add> 'autoload' => true, <add> ]; <add> <add> $action = $this->Task->main('TestPlugin'); <add> <add> $this->assertTrue($action); <add> <add> $expected = "Plugin::load('TestPlugin', ['autoload' => true, 'bootstrap' => true]);"; <add> $bootstrap = new File($this->bootstrap, false); <add> $this->assertContains($expected, $bootstrap->read()); <add> } <add> <add> /** <add> * testLoadWithRoutes <add> * <add> * @return void <add> */ <add> public function testLoadWithRoutes() <add> { <add> $this->Task->params = [ <add> 'bootstrap' => false, <add> 'routes' => true, <add> 'autoload' => true, <add> ]; <add> <add> $action = $this->Task->main('TestPlugin'); <add> <add> $this->assertTrue($action); <add> <add> $expected = "Plugin::load('TestPlugin', ['autoload' => true, 'routes' => true]);"; 
<add> $bootstrap = new File($this->bootstrap, false); <add> $this->assertContains($expected, $bootstrap->read()); <add> } <add> <add> /** <add> * test load no autoload <add> * <add> * @return void <add> */ <add> public function testLoadNoAutoload() <add> { <add> $this->Task->params = [ <add> 'bootstrap' => false, <add> 'routes' => true, <add> 'autoload' => false, <add> ]; <add> <add> $action = $this->Task->main('TestPlugin'); <add> <add> $this->assertTrue($action); <add> <add> $expected = "Plugin::load('TestPlugin', ['routes' => true]);"; <add> $bootstrap = new File($this->bootstrap, false); <add> $this->assertContains($expected, $bootstrap->read()); <add> } <add>} <ide><path>tests/TestCase/Shell/Task/UnloadTaskTest.php <add><?php <add>/** <add> * CakePHP : Rapid Development Framework (http://cakephp.org) <add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * <add> * Licensed under The MIT License <add> * For full copyright and license information, please see the LICENSE.txt <add> * Redistributions of files must retain the above copyright notice. <add> * <add> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * @link http://cakephp.org CakePHP Project <add> * @license http://www.opensource.org/licenses/mit-license.php MIT License <add> */ <add>namespace Cake\Test\TestCase\Shell\Task; <add> <add>use Cake\Core\Plugin; <add>use Cake\Filesystem\File; <add>use Cake\TestSuite\TestCase; <add> <add>/** <add> * UnloadTaskTest class <add> * <add> */ <add>class UnloadTaskTest extends TestCase <add>{ <add> <add> /** <add> * setUp method <add> * <add> * @return void <add> */ <add> public function setUp() <add> { <add> parent::setUp(); <add> <add> $this->io = $this->getMock('Cake\Console\ConsoleIo', [], [], '', false); <add> <add> $this->Task = $this->getMock('Cake\Shell\Task\UnloadTask', ['in', 'out', 'err', '_stop'], [$this->io]); <add> <add> $this->bootstrap = ROOT . DS . 'config' . DS . 
'bootstrap.php'; <add> <add> $bootstrap = new File($this->bootstrap, false); <add> <add> $this->originalBootstrapContent = $bootstrap->read(); <add> } <add> <add> /** <add> * tearDown method <add> * <add> * @return void <add> */ <add> public function tearDown() <add> { <add> parent::tearDown(); <add> unset($this->shell); <add> Plugin::unload(); <add> <add> $bootstrap = new File($this->bootstrap, false); <add> <add> $bootstrap->write($this->originalBootstrapContent); <add> } <add> <add> /** <add> * testUnload <add> * <add> * @return void <add> */ <add> public function testUnload() <add> { <add> $bootstrap = new File($this->bootstrap, false); <add> <add> $this->_addPluginToBootstrap("TestPlugin"); <add> <add> $this->_addPluginToBootstrap("TestPluginSecond"); <add> <add> $expected = "Plugin::load('TestPlugin', ['autoload' => true, 'bootstrap' => false, 'routes' => false]);"; <add> $this->assertContains($expected, $bootstrap->read()); <add> <add> $action = $this->Task->main('TestPlugin'); <add> <add> $this->assertTrue($action); <add> $expected = "Plugin::load('TestPlugin', ['autoload' => true, 'bootstrap' => false, 'routes' => false]);"; <add> $this->assertNotContains($expected, $bootstrap->read()); <add> $expected = "Plugin::load('TestPluginSecond', ['autoload' => true, 'bootstrap' => false, 'routes' => false]);"; <add> $this->assertContains($expected, $bootstrap->read()); <add> } <add> <add> /** <add> * testRegularExpressions <add> * <add> * This method will tests multiple notations of plugin loading. 
<add> */ <add> public function testRegularExpressions() <add> { <add> $bootstrap = new File($this->bootstrap, false); <add> <add> // Plugin::load('TestPlugin', [ <add> // 'boostrap' => false <add> // ]); <add> $bootstrap->append("\nPlugin::load('TestPlugin', [\n\t'boostrap' => false\n]);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin', [\n\t'boostrap' => false\n]);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load( <add> // 'TestPlugin', <add> // [ 'boostrap' => false] <add> // ); <add> $bootstrap->append("\nPlugin::load(\n\t'TestPlugin',\n\t[ 'boostrap' => false]\n);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load(\n\t'TestPlugin',\n\t[ 'boostrap' => false]\n);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load( <add> // 'Foo', <add> // [ <add> // 'boostrap' => false <add> // ] <add> // ); <add> $bootstrap->append("\nPlugin::load(\n\t'TestPlugin',\n\t[\n\t\t'boostrap' => false\n\t]\n);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load(\n\t'TestPlugin',\n\t[\n\t\t'boostrap' => false\n\t]\n);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load('Test', [ <add> // 'autoload' => false, <add> // 'bootstrap' => true, <add> // 'routes' => true <add> // ]); <add> $bootstrap->append("\nPlugin::load('TestPlugin', [\n\t'autoload' => false,\n\t'bootstrap' => true,\n\t'routes' => true\n]);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin', [\n\t'autoload' => false,\n\t'bootstrap' => true,\n\t'routes' => true\n]);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load('Test', <add> // [ <add> // 'bootstrap' => true, <add> // 'routes' => true <add> // ] <add> // ); <add> $bootstrap->append("\nPlugin::load('TestPlugin',\n\t[\n\t\t'bootstrap' => true,\n\t\t'routes' => 
true\n\t]\n);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin',\n\t[\n\t\t'bootstrap' => true,\n\t\t'routes' => true\n\t]\n);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load('Test', <add> // [ <add> // <add> // ] <add> // ); <add> $bootstrap->append("\nPlugin::load('TestPlugin',\n\t[\n\t\n\t]\n);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin',\n\t[\n\t\n\t]\n);", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load('Test'); <add> $bootstrap->append("\nPlugin::load('TestPlugin');\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin');", $bootstrap->read()); <add> $this->_clearBootstrap(); <add> <add> // Plugin::load('Test', ['bootstrap' => true, 'route' => false]); <add> $bootstrap->append("\nPlugin::load('TestPlugin', ['bootstrap' => true, 'route' => false]);\n"); <add> $this->Task->main('TestPlugin'); <add> $this->assertNotContains("Plugin::load('TestPlugin', ['bootstrap' => true, 'route' => false]);", $bootstrap->read()); <add> } <add> <add> /** <add> * _addPluginToBootstrap <add> * <add> * Quick method to add a plugin to the bootstrap file. <add> * This is useful for the tests <add> * <add> * @param string $name <add> */ <add> protected function _addPluginToBootstrap($name) <add> { <add> $bootstrap = new File($this->bootstrap, false); <add> $bootstrap->append("\n\nPlugin::load('$name', ['autoload' => true, 'bootstrap' => false, 'routes' => false]);\n"); <add> } <add> <add> /** <add> * clearBootstrap <add> * <add> * Helper to clear the bootstrap file. <add> * <add> * @return void <add> */ <add> protected function _clearBootstrap() <add> { <add> $bootstrap = new File($this->bootstrap, false); <add> <add> $bootstrap->write($this->originalBootstrapContent); <add> } <add>}
5
Java
Java
add replyto annotation
55dae74f156c1c8326e182e5f1eb7a162cba56a8
<add><path>spring-messaging/src/main/java/org/springframework/messaging/handler/annotation/ReplyTo.java <del><path>spring-messaging/src/main/java/org/springframework/messaging/simp/MessageHolder.java <ide> * limitations under the License. <ide> */ <ide> <del>package org.springframework.messaging.simp; <add>package org.springframework.messaging.handler.annotation; <ide> <del>import org.springframework.core.NamedThreadLocal; <del>import org.springframework.messaging.Message; <add>import java.lang.annotation.Documented; <add>import java.lang.annotation.ElementType; <add>import java.lang.annotation.Retention; <add>import java.lang.annotation.RetentionPolicy; <add>import java.lang.annotation.Target; <ide> <ide> <del>// TODO: remove? <del> <ide> /** <ide> * @author Rossen Stoyanchev <ide> * @since 4.0 <ide> */ <del>public class MessageHolder { <del> <del> private static final NamedThreadLocal<Message<?>> messageHolder = <del> new NamedThreadLocal<Message<?>>("Current message"); <del> <del> <del> public static void setMessage(Message<?> message) { <del> messageHolder.set(message); <del> } <add>@Target(ElementType.METHOD) <add>@Retention(RetentionPolicy.RUNTIME) <add>@Documented <add>public @interface ReplyTo { <ide> <del> public static Message<?> getMessage() { <del> return messageHolder.get(); <del> } <ide> <del> public static void reset() { <del> messageHolder.remove(); <del> } <add> /** <add> * The destination value for the reply. 
<add> */ <add> String value(); <ide> <ide> } <add><path>spring-messaging/src/main/java/org/springframework/messaging/handler/method/MissingSessionUserException.java <del><path>spring-messaging/src/main/java/org/springframework/messaging/handler/method/InvalidMessageMethodParameterException.java <ide> <ide> package org.springframework.messaging.handler.method; <ide> <del>import org.springframework.core.MethodParameter; <ide> import org.springframework.messaging.Message; <ide> import org.springframework.messaging.MessagingException; <ide> <ide> * @author Rossen Stoyanchev <ide> * @since 4.0 <ide> */ <del>public class InvalidMessageMethodParameterException extends MessagingException { <add>public class MissingSessionUserException extends MessagingException { <ide> <ide> private static final long serialVersionUID = -6905878930083523161L; <ide> <del> private final MethodParameter parameter; <ide> <del> <del> public InvalidMessageMethodParameterException(Message<?> message, String description, <del> MethodParameter parameter, Throwable cause) { <del> super(message, description, cause); <del> this.parameter = parameter; <del> } <del> <del> public InvalidMessageMethodParameterException(Message<?> message, String description, <del> MethodParameter parameter) { <del> <del> super(message, description); <del> this.parameter = parameter; <del> } <del> <del> <del> public MethodParameter getParameter() { <del> return this.parameter; <add> public MissingSessionUserException(Message<?> message) { <add> super(message, "No \"user\" header in message"); <ide> } <ide> <ide> } <add><path>spring-messaging/src/main/java/org/springframework/messaging/simp/annotation/support/DefaultMessageReturnValueHandler.java <del><path>spring-messaging/src/main/java/org/springframework/messaging/simp/annotation/support/MessageSendingReturnValueHandler.java <ide> <ide> package org.springframework.messaging.simp.annotation.support; <ide> <add>import java.security.Principal; <add> <ide> import 
org.springframework.core.MethodParameter; <ide> import org.springframework.messaging.Message; <ide> import org.springframework.messaging.MessageChannel; <add>import org.springframework.messaging.handler.annotation.ReplyTo; <ide> import org.springframework.messaging.handler.method.MessageReturnValueHandler; <add>import org.springframework.messaging.handler.method.MissingSessionUserException; <ide> import org.springframework.messaging.simp.SimpMessageHeaderAccessor; <ide> import org.springframework.messaging.support.MessageBuilder; <ide> import org.springframework.messaging.support.converter.MessageConverter; <ide> import org.springframework.util.Assert; <ide> <ide> <ide> /** <add> * Expects return values to be either a {@link Message} or the payload of a message to be <add> * converted and sent on a {@link MessageChannel}. <add> * <add> * <p>This {@link MessageReturnValueHandler} should be ordered last as it supports all <add> * return value types. <add> * <ide> * @author Rossen Stoyanchev <ide> * @since 4.0 <ide> */ <del>public class MessageSendingReturnValueHandler implements MessageReturnValueHandler { <add>public class DefaultMessageReturnValueHandler implements MessageReturnValueHandler { <add> <add> private MessageChannel inboundChannel; <ide> <ide> private MessageChannel outboundChannel; <ide> <ide> private final MessageConverter converter; <ide> <ide> <del> public MessageSendingReturnValueHandler(MessageChannel outboundChannel, MessageConverter<?> converter) { <add> public DefaultMessageReturnValueHandler(MessageChannel inboundChannel, MessageChannel outboundChannel, <add> MessageConverter<?> converter) { <add> <add> Assert.notNull(inboundChannel, "inboundChannel is required"); <ide> Assert.notNull(outboundChannel, "outboundChannel is required"); <ide> Assert.notNull(converter, "converter is required"); <add> <add> this.inboundChannel = inboundChannel; <ide> this.outboundChannel = outboundChannel; <ide> this.converter = converter; <ide> } <ide> public void 
handleReturnValue(Object returnValue, MethodParameter returnType, Me <ide> } <ide> <ide> SimpMessageHeaderAccessor inputHeaders = SimpMessageHeaderAccessor.wrap(message); <add> <ide> Message<?> returnMessage = (returnValue instanceof Message) ? (Message<?>) returnValue : null; <ide> Object returnPayload = (returnMessage != null) ? returnMessage.getPayload() : returnValue; <ide> <ide> public void handleReturnValue(Object returnValue, MethodParameter returnType, Me <ide> <ide> returnHeaders.setSessionId(inputHeaders.getSessionId()); <ide> returnHeaders.setSubscriptionId(inputHeaders.getSubscriptionId()); <del> if (returnHeaders.getDestination() == null) { <del> returnHeaders.setDestination(inputHeaders.getDestination()); <del> } <add> <add> String destination = getDestination(message, returnType, inputHeaders, returnHeaders); <add> returnHeaders.setDestination(destination); <ide> <ide> returnMessage = this.converter.toMessage(returnPayload); <ide> returnMessage = MessageBuilder.fromMessage(returnMessage).copyHeaders(returnHeaders.toMap()).build(); <ide> <del> this.outboundChannel.send(returnMessage); <add> if (destination.startsWith("/user/")) { <add> this.inboundChannel.send(returnMessage); <add> } <add> else { <add> this.outboundChannel.send(returnMessage); <add> } <ide> } <ide> <add> protected String getDestination(Message<?> inputMessage, MethodParameter returnType, <add> SimpMessageHeaderAccessor inputHeaders, SimpMessageHeaderAccessor returnHeaders) { <add> <add> ReplyTo annot = returnType.getMethodAnnotation(ReplyTo.class); <add> <add> if (returnHeaders.getDestination() != null) { <add> return returnHeaders.getDestination(); <add> } <add> else if (annot != null) { <add> Principal user = inputHeaders.getUser(); <add> if (user == null) { <add> throw new MissingSessionUserException(inputMessage); <add> } <add> return "/user/" + user.getName() + annot.value(); <add> } <add> else if (inputHeaders.getDestination() != null) { <add> return 
inputHeaders.getDestination(); <add> } <add> else { <add> return null; <add> } <add> <add> } <add> <ide> } <ide><path>spring-messaging/src/main/java/org/springframework/messaging/simp/annotation/support/PrincipalMessageArgumentResolver.java <ide> <ide> import org.springframework.core.MethodParameter; <ide> import org.springframework.messaging.Message; <del>import org.springframework.messaging.handler.method.InvalidMessageMethodParameterException; <ide> import org.springframework.messaging.handler.method.MessageArgumentResolver; <add>import org.springframework.messaging.handler.method.MissingSessionUserException; <ide> import org.springframework.messaging.simp.SimpMessageHeaderAccessor; <ide> <ide> <ide> public Object resolveArgument(MethodParameter parameter, Message<?> message) thr <ide> SimpMessageHeaderAccessor headers = SimpMessageHeaderAccessor.wrap(message); <ide> Principal user = headers.getUser(); <ide> if (user == null) { <del> throw new InvalidMessageMethodParameterException(message, "User not available", parameter); <add> throw new MissingSessionUserException(message); <ide> } <ide> return user; <ide> } <ide><path>spring-messaging/src/main/java/org/springframework/messaging/simp/handler/AnnotationMethodMessageHandler.java <ide> import org.springframework.messaging.handler.method.InvocableMessageHandlerMethod; <ide> import org.springframework.messaging.handler.method.MessageArgumentResolverComposite; <ide> import org.springframework.messaging.handler.method.MessageReturnValueHandlerComposite; <del>import org.springframework.messaging.simp.MessageHolder; <ide> import org.springframework.messaging.simp.SimpMessageHeaderAccessor; <ide> import org.springframework.messaging.simp.SimpMessageType; <ide> import org.springframework.messaging.simp.annotation.SubscribeEvent; <ide> import org.springframework.messaging.simp.annotation.UnsubscribeEvent; <del>import org.springframework.messaging.simp.annotation.support.MessageSendingReturnValueHandler; <add>import 
org.springframework.messaging.simp.annotation.support.DefaultMessageReturnValueHandler; <ide> import org.springframework.messaging.simp.annotation.support.PrincipalMessageArgumentResolver; <ide> import org.springframework.messaging.support.converter.MessageConverter; <ide> import org.springframework.stereotype.Controller; <ide> public class AnnotationMethodMessageHandler implements MessageHandler, Applicati <ide> <ide> private static final Log logger = LogFactory.getLog(AnnotationMethodMessageHandler.class); <ide> <add> private final MessageChannel inboundChannel; <add> <ide> private final MessageChannel outboundChannel; <ide> <ide> private MessageConverter<?> messageConverter; <ide> public class AnnotationMethodMessageHandler implements MessageHandler, Applicati <ide> * @param inboundChannel a channel for processing incoming messages from clients <ide> * @param outboundChannel a channel for messages going out to clients <ide> */ <del> public AnnotationMethodMessageHandler(MessageChannel outboundChannel) { <add> public AnnotationMethodMessageHandler(MessageChannel inboundChannel, MessageChannel outboundChannel) { <add> Assert.notNull(inboundChannel, "inboundChannel is required"); <ide> Assert.notNull(outboundChannel, "outboundChannel is required"); <add> this.inboundChannel = inboundChannel; <ide> this.outboundChannel = outboundChannel; <ide> } <ide> <ide> public void afterPropertiesSet() { <ide> this.argumentResolvers.addResolver(new PrincipalMessageArgumentResolver()); <ide> this.argumentResolvers.addResolver(new MessageBodyArgumentResolver(this.messageConverter)); <ide> <del> this.returnValueHandlers.addHandler( <del> new MessageSendingReturnValueHandler(this.outboundChannel, this.messageConverter)); <add> this.returnValueHandlers.addHandler(new DefaultMessageReturnValueHandler( <add> this.inboundChannel, this.outboundChannel, this.messageConverter)); <ide> } <ide> <ide> protected void initHandlerMethods() { <ide> private void handleMessageInternal(final 
Message<?> message, Map<MappingInfo, Ha <ide> invocableHandlerMethod.setMessageMethodArgumentResolvers(this.argumentResolvers); <ide> <ide> try { <del> MessageHolder.setMessage(message); <del> <del> Object value = invocableHandlerMethod.invoke(message); <add> Object returnValue = invocableHandlerMethod.invoke(message); <ide> <ide> MethodParameter returnType = handlerMethod.getReturnType(); <ide> if (void.class.equals(returnType.getParameterType())) { <ide> return; <ide> } <del> <del> this.returnValueHandlers.handleReturnValue(value, returnType, message); <add> this.returnValueHandlers.handleReturnValue(returnValue, returnType, message); <ide> } <ide> catch (Exception ex) { <ide> invokeExceptionHandler(message, handlerMethod, ex); <ide> private void handleMessageInternal(final Message<?> message, Map<MappingInfo, Ha <ide> // TODO <ide> ex.printStackTrace(); <ide> } <del> finally { <del> MessageHolder.reset(); <del> } <ide> } <ide> <ide> private void invokeExceptionHandler(Message<?> message, HandlerMethod handlerMethod, Exception ex) { <ide> <del> InvocableMessageHandlerMethod invocableHandlerMethod; <add> InvocableMessageHandlerMethod exceptionHandlerMethod; <ide> Class<?> beanType = handlerMethod.getBeanType(); <ide> MessageExceptionHandlerMethodResolver resolver = this.exceptionHandlerCache.get(beanType); <ide> if (resolver == null) { <ide> private void invokeExceptionHandler(Message<?> message, HandlerMethod handlerMet <ide> return; <ide> } <ide> <del> invocableHandlerMethod = new InvocableMessageHandlerMethod(handlerMethod.getBean(), method); <del> invocableHandlerMethod.setMessageMethodArgumentResolvers(this.argumentResolvers); <add> exceptionHandlerMethod = new InvocableMessageHandlerMethod(handlerMethod.getBean(), method); <add> exceptionHandlerMethod.setMessageMethodArgumentResolvers(this.argumentResolvers); <ide> <ide> try { <del> invocableHandlerMethod.invoke(message, ex); <add> Object returnValue = exceptionHandlerMethod.invoke(message, ex); <add> <add> 
MethodParameter returnType = exceptionHandlerMethod.getReturnType(); <add> if (void.class.equals(returnType.getParameterType())) { <add> return; <add> } <add> this.returnValueHandlers.handleReturnValue(returnValue, returnType, message); <ide> } <ide> catch (Throwable t) { <ide> logger.error("Error while handling exception", t); <add><path>spring-messaging/src/main/java/org/springframework/messaging/simp/handler/SimpleUserSessionResolver.java <del><path>spring-messaging/src/main/java/org/springframework/messaging/simp/handler/InMemoryUserSessionResolver.java <ide> * @author Rossen Stoyanchev <ide> * @since 4.0 <ide> */ <del>public class InMemoryUserSessionResolver implements UserSessionResolver, UserSessionStore { <add>public class SimpleUserSessionResolver implements UserSessionResolver, UserSessionStore { <ide> <ide> // userId -> sessionId's <ide> private final Map<String, Set<String>> userSessionIds = new ConcurrentHashMap<String, Set<String>>(); <ide><path>spring-messaging/src/main/java/org/springframework/messaging/simp/handler/UserDestinationMessageHandler.java <ide> public class UserDestinationMessageHandler implements MessageHandler { <ide> <ide> private String prefix = "/user/"; <ide> <del> private UserSessionResolver userSessionResolver = new InMemoryUserSessionResolver(); <add> private UserSessionResolver userSessionResolver = new SimpleUserSessionResolver(); <ide> <ide> <ide> public UserDestinationMessageHandler(MessageSendingOperations<String> messagingTemplate) {
7
PHP
PHP
fix several autoloads in tests
5fe2b0be1e06ef608ebc158fcad1956ba90b7f8c
<ide><path>tests/Auth/AuthAccessGateTest.php <ide> use Illuminate\Auth\Access\Response; <ide> use Illuminate\Auth\Access\HandlesAuthorization; <ide> <del>class GateTest extends TestCase <add>class AuthAccessGateTest extends TestCase <ide> { <ide> /** <ide> * @expectedException \InvalidArgumentException <ide><path>tests/Foundation/Testing/Concerns/MakesHttpRequestsTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Foundation\Http\Middleware; <add>namespace Illuminate\Tests\Foundation\Testing\Concerns; <ide> <ide> use Orchestra\Testbench\TestCase; <ide> <ide><path>tests/Foundation/Testing/Constraints/SeeInOrderTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Foundation\Constraints; <add>namespace Illuminate\Tests\Foundation\Testing\Constraints; <ide> <ide> use PHPUnit\Framework\TestCase; <ide> use Illuminate\Tests\Foundation\FoundationTestResponseTest; <ide><path>tests/Integration/Foundation/FoundationHelpersTest.php <ide> /** <ide> * @group integration <ide> */ <del>class HelpersTest extends TestCase <add>class FoundationHelpersTest extends TestCase <ide> { <ide> public function test_rescue() <ide> { <ide><path>tests/Integration/Mail/SendingMailWithLocaleTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Integration\Mail\SendingMailWithLocale; <add>namespace Illuminate\Tests\Integration\Mail; <ide> <ide> use Mockery; <ide> use Illuminate\Mail\Mailable; <ide><path>tests/Integration/Routing/FluentRoutingTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Integration\Routing\FluentRoutingTest; <add>namespace Illuminate\Tests\Integration\Routing; <ide> <ide> use Orchestra\Testbench\TestCase; <ide> use Illuminate\Support\Facades\Route; <ide><path>tests/Support/SupportTestingMailFakeTest.php <ide> use PHPUnit\Framework\ExpectationFailedException; <ide> use PHPUnit\Framework\Constraint\ExceptionMessage; <ide> <del>class MailFakeTest extends TestCase <add>class SupportTestingMailFakeTest extends TestCase <ide> { <ide> protected function 
setUp() <ide> { <ide><path>tests/Support/SupportTestingNotificationFakeTest.php <ide> use PHPUnit\Framework\Constraint\ExceptionMessage; <ide> use Illuminate\Support\Testing\Fakes\NotificationFake; <ide> <del>class NotificationFakeTest extends TestCase <add>class SupportTestingNotificationFakeTest extends TestCase <ide> { <ide> protected function setUp() <ide> { <ide><path>tests/Support/SupportTestingQueueFakeTest.php <ide> use PHPUnit\Framework\ExpectationFailedException; <ide> use PHPUnit\Framework\Constraint\ExceptionMessage; <ide> <del>class QueueFakeTest extends TestCase <add>class SupportTestingQueueFakeTest extends TestCase <ide> { <ide> protected function setUp() <ide> { <ide><path>tests/View/Blade/BladeElseAuthStatementsTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Blade; <add>namespace Illuminate\Tests\View\Blade; <ide> <ide> use Mockery as m; <ide> use PHPUnit\Framework\TestCase; <ide><path>tests/View/Blade/BladeElseGuestStatementsTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Blade; <add>namespace Illuminate\Tests\View\Blade; <ide> <ide> use Mockery as m; <ide> use PHPUnit\Framework\TestCase; <ide><path>tests/View/Blade/BladeIfAuthStatementsTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Blade; <add>namespace Illuminate\Tests\View\Blade; <ide> <ide> use Mockery as m; <ide> use PHPUnit\Framework\TestCase; <ide><path>tests/View/Blade/BladeIfGuestStatementsTest.php <ide> <?php <ide> <del>namespace Illuminate\Tests\Blade; <add>namespace Illuminate\Tests\View\Blade; <ide> <ide> use Mockery as m; <ide> use PHPUnit\Framework\TestCase;
13
Ruby
Ruby
display detailed information in inline reporting
4f8c36ab707b7a262cd9b37d4a71e6234d9f8f3c
<ide><path>railties/lib/rails/test_unit/reporter.rb <ide> def record(result) <ide> if output_inline? && result.failure && (!result.skipped? || options[:verbose]) <ide> io.puts <ide> io.puts <del> io.puts result.failures.map(&:message) <add> io.puts format_failures(result) <ide> io.puts <ide> io.puts format_rerun_snippet(result) <ide> io.puts <ide> def fail_fast? <ide> options[:fail_fast] <ide> end <ide> <add> def format_failures(result) <add> result.failures.map do |failure| <add> "#{failure.result_label}:\n#{result.class}##{result.name}:\n#{failure.message}\n" <add> end <add> end <add> <ide> def format_rerun_snippet(result) <ide> # Try to extract path to assertion from backtrace. <ide> if result.location =~ /\[(.*)\]\z/ <ide><path>railties/test/application/test_runner_test.rb <ide> def test_output_inline_by_default <ide> create_test_file :models, 'post', pass: false <ide> <ide> output = run_test_command('test/models/post_test.rb') <del> assert_match %r{Running:\n\nPostTest\nF\n\nwups!\n\nbin/rails test test/models/post_test.rb:6}, output <add> expect = %r{Running:\n\nPostTest\nF\n\nFailure:\nPostTest#test_truth:\nwups!\n\nbin/rails test test/models/post_test.rb:6\n\n\n\n} <add> assert_match expect, output <ide> end <ide> <ide> def test_only_inline_failure_output <ide><path>railties/test/generators/plugin_test_runner_test.rb <ide> def test_output_inline_by_default <ide> create_test_file 'post', pass: false <ide> <ide> output = run_test_command('test/post_test.rb') <del> assert_match %r{Running:\n\nPostTest\nF\n\nwups!\n\nbin/test (/private)?#{plugin_path}/test/post_test.rb:6}, output <add> expect = %r{Running:\n\nPostTest\nF\n\nFailure:\nPostTest#test_truth:\nwups!\n\nbin/test (/private)?#{plugin_path}/test/post_test.rb:6} <add> assert_match expect, output <ide> end <ide> <ide> def test_only_inline_failure_output <ide><path>railties/test/generators/test_runner_in_engine_test.rb <ide> def test_rerun_snippet_is_relative_path <ide> create_test_file 'post', pass: false 
<ide> <ide> output = run_test_command('test/post_test.rb') <del> assert_match %r{Running:\n\nPostTest\nF\n\nwups!\n\nbin/rails test test/post_test.rb:6}, output <add> expect = %r{Running:\n\nPostTest\nF\n\nFailure:\nPostTest#test_truth:\nwups!\n\nbin/rails test test/post_test.rb:6} <add> assert_match expect, output <ide> end <ide> <ide> private <ide><path>railties/test/test_unit/reporter_test.rb <ide> def woot; end <ide> @reporter.record(failed_test) <ide> @reporter.report <ide> <del> assert_match %r{\A\n\nboo\n\nbin/rails test .*test/test_unit/reporter_test.rb:\d+\n\n\z}, @output.string <add> expect = %r{\A\n\nFailure:\nTestUnitReporterTest::ExampleTest#woot:\nboo\n\nbin/rails test test/test_unit/reporter_test.rb:\d+\n\n\z} <add> assert_match expect, @output.string <ide> end <ide> <ide> test "outputs errors inline" do <ide> @reporter.record(errored_test) <ide> @reporter.report <ide> <del> assert_match %r{\A\n\nArgumentError: wups\n No backtrace\n\nbin/rails test .*test/test_unit/reporter_test.rb:6\n\n\z}, @output.string <add> expect = %r{\A\n\nError:\nTestUnitReporterTest::ExampleTest#woot:\nArgumentError: wups\n No backtrace\n\nbin/rails test .*test/test_unit/reporter_test.rb:6\n\n\z} <add> assert_match expect, @output.string <ide> end <ide> <ide> test "outputs skipped tests inline if verbose" do <ide> verbose = Rails::TestUnitReporter.new @output, verbose: true, output_inline: true <ide> verbose.record(skipped_test) <ide> verbose.report <ide> <del> assert_match %r{\A\n\nskipchurches, misstemples\n\nbin/rails test .*test/test_unit/reporter_test.rb:\d+\n\n\z}, @output.string <add> expect = %r{\A\n\nSkipped:\nTestUnitReporterTest::ExampleTest#woot:\nskipchurches, misstemples\n\nbin/rails test test/test_unit/reporter_test.rb:\d+\n\n\z} <add> assert_match expect, @output.string <ide> end <ide> <ide> test "does not output rerun snippets after run" do
5
Python
Python
fix zerigo tests
d679c372f098e94b79c8f1e167c3c17a29d07d4e
<ide><path>libcloud/test/dns/test_zerigo.py <ide> def _api_1_1_zones_xml_INVALID_CREDS(self, method, url, body, headers): <ide> <ide> def _api_1_1_zones_xml(self, method, url, body, headers): <ide> body = self.fixtures.load('list_zones.xml') <del> return (httplib.OK, body, {'x-query-count': 1}, <add> return (httplib.OK, body, {'x-query-count': '1'}, <ide> httplib.responses[httplib.OK]) <ide> <ide> def _api_1_1_zones_xml_NO_RESULTS(self, method, url, body, headers): <ide> def _api_1_1_zones_xml_NO_RESULTS(self, method, url, body, headers): <ide> <ide> def _api_1_1_zones_12345678_hosts_xml(self, method, url, body, headers): <ide> body = self.fixtures.load('list_records.xml') <del> return (httplib.OK, body, {'x-query-count': 1}, <add> return (httplib.OK, body, {'x-query-count': '1'}, <ide> httplib.responses[httplib.OK]) <ide> <ide> def _api_1_1_zones_12345678_hosts_xml_NO_RESULTS(self, method, url, body, <ide> headers): <ide> body = self.fixtures.load('list_records_no_results.xml') <del> return (httplib.OK, body, {'x-query-count': 0}, <add> return (httplib.OK, body, {'x-query-count': '0'}, <ide> httplib.responses[httplib.OK]) <ide> <ide> def _api_1_1_zones_12345678_hosts_xml_ZONE_DOES_NOT_EXIST(self, method,
1
Ruby
Ruby
remove another unused require
bba4d2dd8ce6a0616a0b5d6458a528abd06caa98
<ide><path>activestorage/lib/active_storage/service/gcs_service.rb <ide> # frozen_string_literal: true <ide> <ide> gem "google-cloud-storage", "~> 1.11" <del> <ide> require "google/cloud/storage" <del>require "net/http" <ide> <ide> module ActiveStorage <ide> # Wraps the Google Cloud Storage as an Active Storage service. See ActiveStorage::Service for the generic API
1
Ruby
Ruby
add a couple more version tests
b7066b6e74c5ee2d6d0029e4f988134cdb1e7a77
<ide><path>Library/Homebrew/test/test_versions.rb <ide> def test_wwwoffle_version <ide> def test_synergy_version <ide> assert_version_detected '1.3.6p2', 'http://synergy.googlecode.com/files/synergy-1.3.6p2-MacOSX-Universal.zip' <ide> end <add> <add> def test_fontforge_version <add> assert_version_detected '20120731', 'http://downloads.sourceforge.net/project/fontforge/fontforge-source/fontforge_full-20120731-b.tar.bz2' <add> end <add> <add> def test_ezlupdate_version <add> assert_version_detected '2011.10', 'https://github.com/downloads/ezsystems/ezpublish-legacy/ezpublish_community_project-2011.10-with_ezc.tar.bz2' <add> end <ide> end
1
Text
Text
add f3n67u to triagers
440d95a878a1a19bf72a2685fc8fc0f47100b510
<ide><path>README.md <ide> maintaining the Node.js project. <ide> <ide> * [Ayase-252](https://github.com/Ayase-252) - <ide> **Qingyu Deng** <<i@ayase-lab.com>> <add>* [F3n67u](https://github.com/F3n67u) - <add> **Feng Yu** <<F3n67u@outlook.com>> (he/him) <ide> * [himadriganguly](https://github.com/himadriganguly) - <ide> **Himadri Ganguly** <<himadri.tech@gmail.com>> (he/him) <ide> * [iam-frankqiu](https://github.com/iam-frankqiu) -
1
Text
Text
fix toc title
4952cf90600ae810c1ed11599e02c59da9aa72e8
<ide><path>threejs/lessons/threejs-backgrounds.md <ide> Title: Three.js Backgrounds and Skyboxes <ide> Description: How to add a background in THREE.js <ide> Category: solutions <del>TOC: Load a .GLTF file <add>TOC: Add a Background or Skybox <ide> <ide> Most of the articles here use a solid color for a background. <ide>
1
Ruby
Ruby
add test for prepare/prepack removal
027419effe57b424eb76ca2151e7effa26e33c8c
<ide><path>Library/Homebrew/test/language/node_spec.rb <ide> end <ide> end <ide> <add> describe "#std_pack_for_installation" do <add> npm_pack_cmd = "npm pack --ignore-scripts" <add> <add> it "removes prepare and prepack scripts" do <add> path = Pathname("package.json") <add> path.atomic_write("{\"scripts\":{\"prepare\": \"ls\", \"prepack\": \"ls\", \"test\": \"ls\"}}") <add> allow(Utils).to receive(:popen_read).with(npm_pack_cmd).and_return(`echo pack.tgz`) <add> subject.pack_for_installation <add> expect(path.read).not_to include("prepare") <add> expect(path.read).not_to include("prepack") <add> expect(path.read).to include("test") <add> end <add> end <add> <ide> describe "#std_npm_install_args" do <ide> npm_install_arg = Pathname("libexec") <ide> npm_pack_cmd = "npm pack --ignore-scripts"
1
PHP
PHP
fix some namespacing bugs
94d7982f78fa1c1d8581117feae9992ea62dc21a
<ide><path>src/Illuminate/Console/GeneratorCommand.php <ide> protected function getNamespaceWithSuffix($type, $name) <ide> { <ide> $suffix = $this->getNamespaceSuffix($name); <ide> <del> return trim($this->laravel['config']['namespaces.'.$type].'\\'.$suffix, '\\'); <add> return trim($this->laravel['config']['namespaces.'.$type].$suffix, '\\'); <ide> } <ide> <ide> /** <ide> protected function getNamespaceSuffix($name) <ide> */ <ide> protected function replaceClass($stub, $name) <ide> { <add> $name = str_replace($this->getNamespaceSuffix($name).'\\', '', $name); <add> <ide> return str_replace('{{class}}', $name, $stub); <ide> } <ide>
1
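The PHP fix above stops double-appending the namespace suffix and strips it from the class name before filling the stub. A minimal standalone sketch of that suffix-stripping step in Python (names here are illustrative, not Laravel's API):

```python
def replace_class(stub, name, suffix):
    """Strip a leading namespace suffix from name, then substitute into the stub."""
    if suffix:
        # Drop e.g. "Commands\" from "Commands\MakeUser" so only the class remains.
        name = name.replace(suffix + "\\", "")
    return stub.replace("{{class}}", name)

print(replace_class("class {{class}} {}", "Commands\\MakeUser", "Commands"))
```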
Javascript
Javascript
improve tests that use discrete events
9c32622cf04e70dd40463ece25e83b7cc311b0ed
<ide><path>packages/react-dom/src/__tests__/ReactDOMHooks-test.js <ide> let React; <ide> let ReactDOM; <ide> let Scheduler; <add>let act; <ide> <ide> describe('ReactDOMHooks', () => { <ide> let container; <ide> describe('ReactDOMHooks', () => { <ide> React = require('react'); <ide> ReactDOM = require('react-dom'); <ide> Scheduler = require('scheduler'); <add> act = require('react-dom/test-utils').unstable_concurrentAct; <ide> <ide> container = document.createElement('div'); <ide> document.body.appendChild(container); <ide> describe('ReactDOMHooks', () => { <ide> }); <ide> <ide> // @gate experimental <del> it('should not bail out when an update is scheduled from within an event handler in Concurrent Mode', () => { <add> it('should not bail out when an update is scheduled from within an event handler in Concurrent Mode', async () => { <ide> const {createRef, useCallback, useState} = React; <ide> <ide> const Example = ({inputRef, labelRef}) => { <ide> describe('ReactDOMHooks', () => { <ide> Scheduler.unstable_flushAll(); <ide> <ide> inputRef.current.value = 'abc'; <del> inputRef.current.dispatchEvent( <del> new Event('input', {bubbles: true, cancelable: true}), <del> ); <del> <del> Scheduler.unstable_flushAll(); <add> await act(async () => { <add> inputRef.current.dispatchEvent( <add> new Event('input', { <add> bubbles: true, <add> cancelable: true, <add> }), <add> ); <add> }); <ide> <ide> expect(labelRef.current.innerHTML).toBe('abc'); <ide> }); <ide><path>packages/react-dom/src/__tests__/ReactTestUtilsAct-test.js <ide> function runActTests(label, render, unmount, rerender) { <ide> expect(Scheduler).toHaveYielded([100]); <ide> }); <ide> <del> it('flushes effects on every call', () => { <add> it('flushes effects on every call', async () => { <ide> function App() { <ide> const [ctr, setCtr] = React.useState(0); <ide> React.useEffect(() => { <ide> function runActTests(label, render, unmount, rerender) { <ide> button.dispatchEvent(new MouseEvent('click', {bubbles: 
true})); <ide> } <ide> <del> act(() => { <add> await act(async () => { <ide> click(); <ide> click(); <ide> click(); <ide> }); <ide> // it consolidates the 3 updates, then fires the effect <ide> expect(Scheduler).toHaveYielded([3]); <del> act(click); <add> await act(async () => click()); <ide> expect(Scheduler).toHaveYielded([4]); <del> act(click); <add> await act(async () => click()); <ide> expect(Scheduler).toHaveYielded([5]); <ide> expect(button.innerHTML).toBe('5'); <ide> }); <ide><path>packages/react-dom/src/events/plugins/__tests__/ChangeEventPlugin-test.js <ide> describe('ChangeEventPlugin', () => { <ide> }); <ide> <ide> // @gate experimental <del> it('is async for non-input events', () => { <add> it('is async for non-input events', async () => { <ide> const root = ReactDOM.unstable_createRoot(container); <ide> let input; <ide> <ide><path>packages/react-dom/src/events/plugins/__tests__/SimpleEventPlugin-test.js <ide> describe('SimpleEventPlugin', function() { <ide> let React; <ide> let ReactDOM; <ide> let Scheduler; <add> let TestUtils; <ide> <ide> let onClick; <ide> let container; <ide> describe('SimpleEventPlugin', function() { <ide> React = require('react'); <ide> ReactDOM = require('react-dom'); <ide> Scheduler = require('scheduler'); <add> TestUtils = require('react-dom/test-utils'); <ide> <ide> onClick = jest.fn(); <ide> }); <ide> describe('SimpleEventPlugin', function() { <ide> }); <ide> <ide> // @gate experimental <del> it('end result of many interactive updates is deterministic', () => { <add> it('end result of many interactive updates is deterministic', async () => { <ide> container = document.createElement('div'); <ide> const root = ReactDOM.unstable_createRoot(container); <ide> document.body.appendChild(container); <ide> describe('SimpleEventPlugin', function() { <ide> expect(button.textContent).toEqual('Count: 0'); <ide> <ide> // Click the button many more times <del> click(); <del> click(); <del> click(); <del> click(); <del> click(); <del> 
click(); <add> await TestUtils.act(async () => { <add> click(); <add> click(); <add> click(); <add> click(); <add> click(); <add> click(); <add> }); <ide> <ide> // Flush the remaining work <ide> Scheduler.unstable_flushAll(); <ide> describe('SimpleEventPlugin', function() { <ide> }); <ide> <ide> // @gate experimental <del> it('flushes discrete updates in order', () => { <add> it('flushes discrete updates in order', async () => { <ide> container = document.createElement('div'); <ide> document.body.appendChild(container); <ide> <ide><path>packages/react-reconciler/src/__tests__/ReactIncrementalErrorHandling-test.internal.js <ide> describe('ReactIncrementalErrorHandling', () => { <ide> expect(ReactNoop.getChildren()).toEqual([span('Caught an error: oops!')]); <ide> }); <ide> <del> it("retries at a lower priority if there's additional pending work", () => { <add> // @gate experimental <add> it("retries at a lower priority if there's additional pending work", async () => { <ide> function App(props) { <ide> if (props.isBroken) { <ide> Scheduler.unstable_yieldValue('error'); <ide> describe('ReactIncrementalErrorHandling', () => { <ide> }); <ide> } <ide> <del> ReactNoop.discreteUpdates(() => { <del> ReactNoop.render(<App isBroken={true} />, onCommit); <del> }); <add> ReactNoop.render(<App isBroken={true} />, onCommit); <ide> expect(Scheduler).toFlushAndYieldThrough(['error']); <ide> interrupt(); <ide> <del> // This update is in a separate batch <del> ReactNoop.render(<App isBroken={false} />, onCommit); <add> React.unstable_startTransition(() => { <add> // This update is in a separate batch <add> ReactNoop.render(<App isBroken={false} />, onCommit); <add> }); <ide> <ide> expect(Scheduler).toFlushAndYieldThrough([ <ide> // The first render fails. 
But because there's a lower priority pending <ide> describe('ReactIncrementalErrorHandling', () => { <ide> }); <ide> } <ide> <del> ReactNoop.discreteUpdates(() => { <del> ReactNoop.render(<App isBroken={true} />, onCommit); <del> }); <add> ReactNoop.render(<App isBroken={true} />, onCommit); <ide> expect(Scheduler).toFlushAndYieldThrough(['error']); <ide> interrupt(); <ide> <ide> expect(ReactNoop).toMatchRenderedOutput(null); <ide> <del> // This update is in a separate batch <del> ReactNoop.render(<App isBroken={false} />, onCommit); <add> React.unstable_startTransition(() => { <add> // This update is in a separate batch <add> ReactNoop.render(<App isBroken={false} />, onCommit); <add> }); <ide> <ide> expect(Scheduler).toFlushAndYieldThrough([ <ide> // The first render fails. But because there's a lower priority pending <ide> describe('ReactIncrementalErrorHandling', () => { <ide> }); <ide> } <ide> <add> // @gate experimental <ide> it('uncaught errors should be discarded if the render is aborted', async () => { <ide> const root = ReactNoop.createRoot(); <ide> <ide> describe('ReactIncrementalErrorHandling', () => { <ide> } <ide> <ide> await ReactNoop.act(async () => { <del> ReactNoop.discreteUpdates(() => { <del> root.render(<Oops />); <del> }); <add> root.render(<Oops />); <add> <ide> // Render past the component that throws, then yield. <ide> expect(Scheduler).toFlushAndYieldThrough(['Oops']); <ide> expect(root).toMatchRenderedOutput(null); <ide> // Interleaved update. When the root completes, instead of throwing the <ide> // error, it should try rendering again. This update will cause it to <ide> // recover gracefully. <del> root.render('Everything is fine.'); <add> React.unstable_startTransition(() => { <add> root.render('Everything is fine.'); <add> }); <ide> }); <ide> <ide> // Should finish without throwing. 
<ide> expect(root).toMatchRenderedOutput('Everything is fine.'); <ide> }); <ide> <add> // @gate experimental <ide> it('uncaught errors are discarded if the render is aborted, case 2', async () => { <ide> const {useState} = React; <ide> const root = ReactNoop.createRoot(); <ide> describe('ReactIncrementalErrorHandling', () => { <ide> }); <ide> <ide> await ReactNoop.act(async () => { <del> // Schedule a high pri and a low pri update on the root. <del> ReactNoop.discreteUpdates(() => { <del> root.render(<Oops />); <add> // Schedule a default pri and a low pri update on the root. <add> root.render(<Oops />); <add> React.unstable_startTransition(() => { <add> root.render(<AllGood />); <ide> }); <del> root.render(<AllGood />); <del> // Render through just the high pri update. The low pri update remains on <add> <add> // Render through just the default pri update. The low pri update remains on <ide> // the queue. <ide> expect(Scheduler).toFlushAndYieldThrough(['Everything is fine.']); <ide> <del> // Schedule a high pri update on a child that triggers an error. <add> // Schedule a default pri update on a child that triggers an error. <ide> // The root should capture this error. But since there's still a pending <ide> // update on the root, the error should be suppressed. <del> ReactNoop.discreteUpdates(() => { <del> setShouldThrow(true); <del> }); <add> setShouldThrow(true); <ide> }); <ide> // Should render the final state without throwing the error. <ide> expect(Scheduler).toHaveYielded(['Everything is fine.']); <ide><path>packages/react-reconciler/src/__tests__/ReactSuspenseWithNoopRenderer-test.js <ide> describe('ReactSuspenseWithNoopRenderer', () => { <ide> ); <ide> } <ide> <del> // Schedule a high pri update and a low pri update, without rendering in <del> // between. <del> ReactNoop.discreteUpdates(() => { <del> // High pri <del> ReactNoop.render(<App />); <del> }); <add> // Schedule a default pri update and a low pri update, without rendering in between. 
<add> // Default pri <add> ReactNoop.render(<App />); <ide> // Low pri <del> ReactNoop.render(<App hide={true} />); <add> React.unstable_startTransition(() => { <add> ReactNoop.render(<App hide={true} />); <add> }); <ide> <ide> expect(Scheduler).toFlushAndYield([ <ide> // The first update suspends <ide> describe('ReactSuspenseWithNoopRenderer', () => { <ide> ReactNoop.render(<Foo />); <ide> expect(Scheduler).toFlushAndYield(['Foo']); <ide> <del> ReactNoop.discreteUpdates(() => <del> ReactNoop.render(<Foo renderContent={true} />), <del> ); <add> ReactNoop.render(<Foo renderContent={true} />); <ide> expect(Scheduler).toFlushAndYieldThrough(['Foo']); <ide> <ide> // Advance some time. <ide> describe('ReactSuspenseWithNoopRenderer', () => { <ide> // Schedule an update inside the Suspense boundary that suspends. <ide> setAppText('B'); <ide> expect(Scheduler).toFlushAndYield(['Suspend! [B]', 'Loading...']); <add> }); <ide> <del> // Commit the placeholder <del> await advanceTimers(250); <del> expect(root).toMatchRenderedOutput( <del> <> <del> <span hidden={true} prop="A" /> <del> <span prop="Loading..." /> <del> </>, <del> ); <add> expect(root).toMatchRenderedOutput( <add> <> <add> <span hidden={true} prop="A" /> <add> <span prop="Loading..." /> <add> </>, <add> ); <ide> <del> // Schedule a high pri update on the boundary, and a lower pri update <del> // on the fallback. We're testing to make sure the fallback can still <del> // update even though the primary tree is suspended.{ <del> ReactNoop.discreteUpdates(() => { <del> setAppText('C'); <add> // Schedule a default pri update on the boundary, and a lower pri update <add> // on the fallback. We're testing to make sure the fallback can still <add> // update even though the primary tree is suspended. 
<add> await ReactNoop.act(async () => { <add> setAppText('C'); <add> React.unstable_startTransition(() => { <add> setFallbackText('Still loading...'); <ide> }); <del> setFallbackText('Still loading...'); <add> }); <ide> <del> expect(Scheduler).toFlushAndYield([ <del> // First try to render the high pri update. Still suspended. <del> 'Suspend! [C]', <del> 'Loading...', <add> expect(Scheduler).toHaveYielded([ <add> // First try to render the high pri update. Still suspended. <add> 'Suspend! [C]', <add> 'Loading...', <ide> <del> // In the expiration times model, once the high pri update suspends, <del> // we can't be sure if there's additional work at a lower priority <del> // that might unblock the tree. We do know that there's a lower <del> // priority update *somehwere* in the entire root, though (the update <del> // to the fallback). So we try rendering one more time, just in case. <del> // TODO: We shouldn't need to do this with lanes, because we always <del> // know exactly which lanes have pending work in each tree. <del> 'Suspend! [C]', <del> <del> // Then complete the update to the fallback. <del> 'Still loading...', <del> ]); <del> expect(root).toMatchRenderedOutput( <del> <> <del> <span hidden={true} prop="A" /> <del> <span prop="Still loading..." /> <del> </>, <del> ); <del> }); <add> // In the expiration times model, once the high pri update suspends, <add> // we can't be sure if there's additional work at a lower priority <add> // that might unblock the tree. We do know that there's a lower <add> // priority update *somehwere* in the entire root, though (the update <add> // to the fallback). So we try rendering one more time, just in case. <add> // TODO: We shouldn't need to do this with lanes, because we always <add> // know exactly which lanes have pending work in each tree. <add> 'Suspend! [C]', <add> <add> // Then complete the update to the fallback. 
<add> 'Still loading...', <add> ]); <add> expect(root).toMatchRenderedOutput( <add> <> <add> <span hidden={true} prop="A" /> <add> <span prop="Still loading..." /> <add> </>, <add> ); <ide> }, <ide> ); <ide> <ide><path>packages/react-reconciler/src/__tests__/useMutableSource-test.internal.js <ide> describe('useMutableSource', () => { <ide> // Now mutate A. Both hooks should update. <ide> // This is at high priority so that it doesn't get batched with default <ide> // priority updates that might fire during the passive effect <del> ReactNoop.discreteUpdates(() => { <del> mutateA('a1'); <add> await ReactNoop.act(async () => { <add> ReactNoop.discreteUpdates(() => { <add> mutateA('a1'); <add> }); <ide> }); <del> expect(Scheduler).toFlushUntilNextPaint([]); <ide> <del> expect(root.getChildrenAsJSX()).toEqual('first: a1, second: a1'); <add> expect(root).toMatchRenderedOutput('first: a1, second: a1'); <ide> }); <ide> <ide> expect(root.getChildrenAsJSX()).toEqual('first: a1, second: a1'); <ide><path>packages/react-refresh/src/__tests__/ReactFresh-test.js <ide> describe('ReactFresh', () => { <ide> } <ide> }); <ide> <del> it('can hot reload offscreen components', () => { <add> it('can hot reload offscreen components', async () => { <ide> if (__DEV__ && __EXPERIMENTAL__) { <ide> const AppV1 = prepare(() => { <ide> function Hello() { <ide> describe('ReactFresh', () => { <ide> expect(el.firstChild.textContent).toBe('0'); <ide> expect(el.firstChild.style.color).toBe('red'); <ide> <del> el.firstChild.dispatchEvent(new MouseEvent('click', {bubbles: true})); <del> expect(el.firstChild.textContent).toBe('0'); <del> expect(el.firstChild.style.color).toBe('red'); <del> expect(Scheduler).toFlushAndYieldThrough(['Hello#layout']); <add> await act(async () => { <add> el.firstChild.dispatchEvent( <add> new MouseEvent('click', { <add> bubbles: true, <add> }), <add> ); <add> }); <add> <add> expect(Scheduler).toHaveYielded(['Hello#layout']); <ide> expect(el.firstChild.textContent).toBe('1'); 
<ide> expect(el.firstChild.style.color).toBe('red'); <ide>
8
Python
Python
start a paver file to help for release purpose
874a4055ec84ea9886abc4252fe994c9cf5ac168
<ide><path>pavement.py <add>import os <add>import subprocess <add>try: <add> from hash import md5 <add>except ImportError: <add> import md5 <add> <add>import sphinx <add> <add>import distutils <add>import numpy.distutils <add> <add>try: <add> from paver.tasks import VERSION as _PVER <add> if not _PVER >= '1.0': <add> raise RuntimeError("paver version >= 1.0 required (was %s)" % _PVER) <add>except ImportError, e: <add> raise RuntimeError("paver version >= 1.0 required") <add> <add>import paver <add>import paver.doctools <add>import paver.path <add>from paver.easy import options, Bunch, task, needs, dry, sh, call_task <add>from paver.setuputils import setup <add> <add># NOTES/Changelog stuff <add>RELEASE = 'doc/release/1.3.0-notes.rst' <add>LOG_START = 'tags/1.2.0' <add>LOG_END = 'master' <add> <add>def compute_md5(): <add> released = paver.path.path('installers').listdir() <add> checksums = [] <add> for f in released: <add> m = md5.md5(open(f, 'r').read()) <add> checksums.append('%s %s' % (m.hexdigest(), f)) <add> <add> return checksums <add> <add>def write_release_task(filename='NOTES.txt'): <add> source = paver.path.path(RELEASE) <add> target = paver.path.path(filename) <add> if target.exists(): <add> target.remove() <add> source.copy(target) <add> ftarget = open(str(target), 'a') <add> ftarget.writelines(""" <add>Checksums <add>========= <add> <add>""") <add> ftarget.writelines(['%s\n' % c for c in compute_md5()]) <add> <add>def write_log_task(filename='Changelog'): <add> st = subprocess.Popen( <add> ['git', 'svn', 'log', '%s..%s' % (LOG_START, LOG_END)], <add> stdout=subprocess.PIPE) <add> <add> out = st.communicate()[0] <add> a = open(filename, 'w') <add> a.writelines(out) <add> a.close() <add> <add>@task <add>def write_release(): <add> write_release_task() <add> <add>@task <add>def write_log(): <add> write_log_task()
1
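The paver helper above walks released installers and appends one "digest filename" line per file. A self-contained sketch of that checksum step, using `hashlib` (the modern replacement for the legacy `md5` module the script falls back to); the file contents are illustrative:

```python
import hashlib

def compute_md5_lines(files):
    """Return 'hexdigest name' lines for a mapping of filename -> bytes."""
    lines = []
    for name, data in sorted(files.items()):
        digest = hashlib.md5(data).hexdigest()
        lines.append(f"{digest} {name}")
    return lines

for line in compute_md5_lines({"numpy-1.3.0.zip": b"payload"}):
    print(line)
```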
Python
Python
fix task id deduplication in @task_group
f881e1887c9126408098919ecad61f94e7a9661c
<ide><path>airflow/decorators/base.py <ide> Collection, <ide> Dict, <ide> Generic, <add> Iterator, <ide> Mapping, <ide> Optional, <ide> Sequence, <ide> from airflow.utils.task_group import TaskGroup, TaskGroupContext <ide> <ide> <del>def validate_python_callable(python_callable): <add>def validate_python_callable(python_callable: Any) -> None: <ide> """ <ide> Validate that python callable can be wrapped by operator. <ide> Raises exception if invalid. <ide> def validate_python_callable(python_callable): <ide> <ide> <ide> def get_unique_task_id( <del> task_id: str, dag: Optional[DAG] = None, task_group: Optional[TaskGroup] = None <add> task_id: str, <add> dag: Optional[DAG] = None, <add> task_group: Optional[TaskGroup] = None, <ide> ) -> str: <ide> """ <ide> Generate unique task id given a DAG (or if run in a DAG context) <ide> def get_unique_task_id( <ide> <ide> if tg_task_id not in dag.task_ids: <ide> return task_id <del> core = re.split(r'__\d+$', task_id)[0] <del> suffixes = sorted( <del> int(re.split(r'^.+__', task_id)[1]) <del> for task_id in dag.task_ids <del> if re.match(rf'^{core}__\d+$', task_id) <del> ) <del> if not suffixes: <del> return f'{core}__1' <del> return f'{core}__{suffixes[-1] + 1}' <add> <add> def _find_id_suffixes(dag: DAG) -> Iterator[int]: <add> prefix = re.split(r"__\d+$", tg_task_id)[0] <add> for task_id in dag.task_ids: <add> match = re.match(rf"^{prefix}__(\d+)$", task_id) <add> if match is None: <add> continue <add> yield int(match.group(1)) <add> yield 0 # Default if there's no matching task ID. 
<add> <add> core = re.split(r"__\d+$", task_id)[0] <add> return f"{core}__{max(_find_id_suffixes(dag)) + 1}" <ide> <ide> <ide> class DecoratedOperator(BaseOperator): <ide> def __init__( <ide> kwargs_to_upstream: Optional[Dict[str, Any]] = None, <ide> **kwargs, <ide> ) -> None: <del> kwargs['task_id'] = get_unique_task_id(task_id, kwargs.get('dag'), kwargs.get('task_group')) <add> task_id = get_unique_task_id(task_id, kwargs.get('dag'), kwargs.get('task_group')) <ide> self.python_callable = python_callable <ide> kwargs_to_upstream = kwargs_to_upstream or {} <ide> op_args = op_args or [] <ide> def __init__( <ide> self.multiple_outputs = multiple_outputs <ide> self.op_args = op_args <ide> self.op_kwargs = op_kwargs <del> super().__init__(**kwargs_to_upstream, **kwargs) <add> super().__init__(task_id=task_id, **kwargs_to_upstream, **kwargs) <ide> <ide> def execute(self, context: Context): <ide> return_value = super().execute(context) <ide><path>tests/utils/test_task_group.py <ide> def tg(): <ide> ... <ide> <ide> <add>def test_decorator_multiple_use_task(): <add> from airflow.decorators import task <add> <add> @dag("test-dag", start_date=DEFAULT_DATE) <add> def _test_dag(): <add> @task <add> def t(): <add> pass <add> <add> @task_group_decorator <add> def tg(): <add> for _ in range(3): <add> t() <add> <add> t() >> tg() >> t() <add> <add> test_dag = _test_dag() <add> assert test_dag.task_ids == [ <add> "t", # Start end. <add> "tg.t", <add> "tg.t__1", <add> "tg.t__2", <add> "t__1", # End node. <add> ] <add> <add> <ide> def test_decorator_partial_unmapped(): <ide> @task_group_decorator <ide> def tg():
2
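The Airflow fix above derives the next `__N` suffix by scanning existing task IDs for the same prefix and taking the maximum. A standalone sketch of that dedup logic (function name and signature are illustrative, and unlike the patch this sketch escapes the prefix so dots in group-qualified IDs are matched literally):

```python
import re

def unique_task_id(task_id, existing_ids):
    """Append __N to task_id if it already exists, picking max existing suffix + 1."""
    if task_id not in existing_ids:
        return task_id
    prefix = re.split(r"__\d+$", task_id)[0]
    suffixes = [0]  # default when no __N variants exist yet
    for existing in existing_ids:
        match = re.match(rf"^{re.escape(prefix)}__(\d+)$", existing)
        if match:
            suffixes.append(int(match.group(1)))
    return f"{prefix}__{max(suffixes) + 1}"
```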
Ruby
Ruby
fix direct uploads to local service
2d20a7696a761b1840bc2fbe09a2fd4bff2a779f
<ide><path>activestorage/app/controllers/active_storage/disk_controller.rb <ide> # Always go through the BlobsController, or your own authenticated controller, rather than directly <ide> # to the service url. <ide> class ActiveStorage::DiskController < ActionController::Base <add> skip_forgery_protection <add> <ide> def show <ide> if key = decode_verified_key <ide> send_data disk_service.download(key), <ide><path>activestorage/test/dummy/config/environments/test.rb <ide> # Print deprecation notices to the stderr. <ide> config.active_support.deprecation = :stderr <ide> <add> # Disable request forgery protection in test environment. <add> config.action_controller.allow_forgery_protection = false <add> <ide> # Raises error for missing translations <ide> # config.action_view.raise_on_missing_translations = true <ide> end <ide><path>activestorage/test/test_helper.rb <ide> # frozen_string_literal: true <ide> <add>ENV["RAILS_ENV"] ||= "test" <ide> require_relative "dummy/config/environment.rb" <ide> <ide> require "bundler/setup"
3
Java
Java
fix checkstyle violation
2b6117c0c2e6ec841116b5a21f4b98bc8a08ad47
<ide><path>spring-core/src/main/java/org/springframework/util/FileCopyUtils.java <ide> public static String copyToString(@Nullable Reader in) throws IOException { <ide> private static void close(Closeable closeable) { <ide> try { <ide> closeable.close(); <del> } catch (IOException ex) { <add> } <add> catch (IOException ex) { <ide> // ignore <ide> } <ide> }
1
PHP
PHP
improve error when insert query cannot be compiled
0e182ca620a4d0fcd5ca428d62ac9bec27974d16
<ide><path>src/Database/QueryCompiler.php <ide> namespace Cake\Database; <ide> <ide> use Cake\Database\Expression\QueryExpression; <add>use Cake\Database\Exception as DatabaseException; <ide> use Closure; <ide> use Countable; <ide> <ide> protected function _buildUnionPart(array $parts, Query $query, ValueBinder $gene <ide> */ <ide> protected function _buildInsertPart(array $parts, Query $query, ValueBinder $generator): string <ide> { <add> if (!isset($parts[0])) { <add> throw new DatabaseException( <add> 'Could not compile insert query. No table was specified. ' . <add> 'Use `into()` to define a table.' <add> ); <add> } <ide> $table = $parts[0]; <ide> $columns = $this->_stringifyExpressions($parts[1], $generator); <ide> $modifiers = $this->_buildModifierPart($query->clause('modifier'), $query, $generator); <ide><path>tests/TestCase/Database/QueryTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Database; <ide> <add>use Cake\Database\Exception as DatabaseException; <ide> use Cake\Database\Expression\IdentifierExpression; <ide> use Cake\Database\Expression\QueryExpression; <ide> use Cake\Database\ExpressionInterface; <ide> use DateTimeImmutable; <ide> use InvalidArgumentException; <ide> use ReflectionProperty; <add>use RuntimeException; <ide> use stdClass; <ide> use TestApp\Database\Type\BarType; <ide> <ide> public function testSelectWhereArrayType() <ide> */ <ide> public function testSelectWhereArrayTypeEmpty() <ide> { <del> $this->expectException(\Cake\Database\Exception::class); <add> $this->expectException(DatabaseException::class); <ide> $this->expectExceptionMessage('Impossible to generate condition with empty list of values for field'); <ide> $this->loadFixtures('Comments'); <ide> $query = new Query($this->connection); <ide> public function testSelectWhereArrayTypeEmpty() <ide> */ <ide> public function testSelectWhereArrayTypeEmptyWithExpression() <ide> { <del> $this->expectException(\Cake\Database\Exception::class); <add> 
$this->expectException(DatabaseException::class); <ide> $this->expectExceptionMessage('with empty list of values for field (SELECT 1)'); <ide> $this->loadFixtures('Comments'); <ide> $query = new Query($this->connection); <ide> public function testUpdateRemovingAliasesCanBreakJoins() <ide> */ <ide> public function testInsertValuesBeforeInsertFailure() <ide> { <del> $this->expectException(\Cake\Database\Exception::class); <add> $this->expectException(DatabaseException::class); <ide> $query = new Query($this->connection); <ide> $query->select('*')->values([ <ide> 'id' => 1, <ide> public function testInsertValuesBeforeInsertFailure() <ide> */ <ide> public function testInsertNothing() <ide> { <del> $this->expectException(\RuntimeException::class); <add> $this->expectException(RuntimeException::class); <ide> $this->expectExceptionMessage('At least 1 column is required to perform an insert.'); <ide> $query = new Query($this->connection); <ide> $query->insert([]); <ide> } <ide> <add> /** <add> * Test insert() with no into() <add> * <add> * @return void <add> */ <add> public function testInsertNoInto() <add> { <add> $this->expectException(DatabaseException::class); <add> $this->expectExceptionMessage('Could not compile insert query. 
No table was specified'); <add> $query = new Query($this->connection); <add> $query->insert(['title', 'body'])->sql(); <add> } <add> <ide> /** <ide> * Test insert overwrites values <ide> * <ide> public function testInsertFromSelect() <ide> */ <ide> public function testInsertFailureMixingTypesArrayFirst() <ide> { <del> $this->expectException(\Cake\Database\Exception::class); <add> $this->expectException(DatabaseException::class); <ide> $this->loadFixtures('Articles'); <ide> $query = new Query($this->connection); <ide> $query->insert(['name']) <ide> public function testInsertFailureMixingTypesArrayFirst() <ide> */ <ide> public function testInsertFailureMixingTypesQueryFirst() <ide> { <del> $this->expectException(\Cake\Database\Exception::class); <add> $this->expectException(DatabaseException::class); <ide> $this->loadFixtures('Articles'); <ide> $query = new Query($this->connection); <ide> $query->insert(['name'])
2
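The guard added to `_buildInsertPart` fails fast, with an actionable message, when an insert is compiled before `into()` set a table. The same defensive pattern, sketched as a tiny hypothetical query builder in Python (not CakePHP's actual API):

```python
class InsertQuery:
    """Toy builder that refuses to compile an INSERT with no target table."""

    def __init__(self):
        self._table = None
        self._columns = []

    def insert(self, columns):
        self._columns = list(columns)
        return self

    def into(self, table):
        self._table = table
        return self

    def sql(self):
        if self._table is None:
            raise ValueError(
                "Could not compile insert query. No table was specified. "
                "Use into() to define a table."
            )
        return f"INSERT INTO {self._table} ({', '.join(self._columns)})"

print(InsertQuery().insert(["title", "body"]).into("articles").sql())
```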
Python
Python
fix test.py --shell
ede1a7ceb4e5ef6ae28cfc14bd8a3a26f7521515
<ide><path>tools/test.py <ide> def GetTestStatus(self, context, sections, defs): <ide> <ide> class Context(object): <ide> <del> def __init__(self, workspace, buildspace, verbose, vm, args, expect_fail, <add> def __init__(self, workspace, verbose, vm, args, expect_fail, <ide> timeout, processor, suppress_dialogs, <ide> store_unexpected_output, repeat, abort_on_timeout): <ide> self.workspace = workspace <del> self.buildspace = buildspace <ide> self.verbose = verbose <add> self.vm = vm <ide> self.node_args = args <ide> self.expect_fail = expect_fail <ide> self.timeout = timeout <ide> def __init__(self, workspace, buildspace, verbose, vm, args, expect_fail, <ide> self.node_has_crypto = True <ide> <ide> def GetVm(self, arch, mode): <add> if self.vm is not None: <add> return self.vm <ide> if arch == 'none': <ide> name = 'out/Debug/node' if mode == 'debug' else 'out/Release/node' <ide> else: <ide> def BuildOptions(): <ide> dest="suppress_dialogs", default=True, action="store_true") <ide> result.add_option("--no-suppress-dialogs", help="Display Windows dialogs for crashing tests", <ide> dest="suppress_dialogs", action="store_false") <del> result.add_option("--shell", help="Path to V8 shell", default="shell") <add> result.add_option("--shell", help="Path to node executable", default=None) <ide> result.add_option("--store-unexpected-output", <ide> help="Store the temporary JS files from tests that fails", <ide> dest="store_unexpected_output", default=True, action="store_true") <ide> def Main(): <ide> run_worker = join(workspace, "tools", "run-worker.js") <ide> options.node_args.append(run_worker) <ide> <del> shell = abspath(options.shell) <del> buildspace = dirname(shell) <del> <ide> processor = GetSpecialCommandProcessor(options.special_command) <add> <ide> context = Context(workspace, <del> buildspace, <ide> VERBOSE, <del> shell, <add> options.shell, <ide> options.node_args, <ide> options.expect_fail, <ide> options.timeout,
1
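After the patch, `GetVm` prefers an explicitly supplied `--shell` path and only falls back to the per-mode build output. A minimal sketch of that fallback (the arch-specific build layouts the real script handles are omitted):

```python
def get_vm(shell, mode):
    """Return the explicit node path if given, else the default build output."""
    if shell is not None:
        return shell
    return "out/Debug/node" if mode == "debug" else "out/Release/node"

print(get_vm(None, "release"))
```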
Python
Python
remove users of `numpy.compat.strchar`
d5c841a7e55311e77a539f43f3dab58a68a1c9c1
<ide><path>numpy/core/tests/test_multiarray.py <ide> from decimal import Decimal <ide> <ide> import numpy as np <del>from numpy.compat import strchar <ide> import numpy.core._multiarray_tests as _multiarray_tests <ide> from numpy.testing import ( <ide> assert_, assert_raises, assert_warns, assert_equal, assert_almost_equal, <ide> def test_sort_order(self): <ide> strtype = '>i2' <ide> else: <ide> strtype = '<i2' <del> mydtype = [('name', strchar + '5'), ('col2', strtype)] <add> mydtype = [('name', 'U5'), ('col2', strtype)] <ide> r = np.array([('a', 1), ('b', 255), ('c', 3), ('d', 258)], <ide> dtype=mydtype) <ide> r.sort(order='col2')
1
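The test change swaps the removed `numpy.compat.strchar` for a literal `'U5'` dtype string. What the affected test exercises, as a standalone illustration: sorting a structured array in place by a named field (requires NumPy):

```python
import numpy as np

# Structured dtype: 5-char unicode name plus little-endian int16 column.
dtype = [("name", "U5"), ("col2", "<i2")]
r = np.array([("a", 1), ("b", 255), ("c", 3), ("d", 258)], dtype=dtype)
r.sort(order="col2")  # sort rows by the col2 field
print([str(n) for n in r["name"]])
```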
Text
Text
translate 02.2-jsx-spread.md to japanese
829ce68cd7c658a438c7053192d585d66dc6243d
<ide><path>docs/docs/02.2-jsx-spread.ja-JP.md <add>--- <add>id: jsx-spread <add>title: JSXの拡張属性 <add>permalink: jsx-spread-ja-JP.html <add>prev: jsx-in-depth-ja-JP.html <add>next: jsx-gotchas-ja-JP.html <add>--- <add> <add>以下のように、コンポーネントにどのようなプロパティを配置したいか前もって全て分かっている場合は、JSXを使うことは簡単です。 <add> <add>```javascript <add> var component = <Component foo={x} bar={y} />; <add>``` <add> <add>## Propsを変更してはいけない <add> <add>セットしたいプロパティが分からない場合は、以下のように後からオブジェクトに追加したいと思うでしょう。 <add> <add>```javascript <add> var component = <Component />; <add> component.props.foo = x; // だめ <add> component.props.bar = y; // 同様にだめ <add>``` <add> <add>これはアンチパターンです。なぜなら、後々まで正しいpropTypesであるかどうかチェックすることを助けることができないことを意味するからです。これは、propTypesのエラーが隠されたスタックトレースに出力されて終わってしまうことを意味します。 <add> <add>propsは変更不可と考えられるべきです。propsのオブジェクトをどこかで変更することは予期せぬ結果を発生させる可能性があるので、理想的には、この時点ではpropsは固定のオブジェクトであるべきです。 <add> <add>## 拡張属性 <add> <add>以下のように、拡張属性というJSXの新しい特徴を使うことができます。 <add> <add>```javascript <add> var props = {}; <add> props.foo = x; <add> props.bar = y; <add> var component = <Component {...props} />; <add>``` <add> <add>あなたが渡したオブジェクトのプロパティはコンポーネントのpropsにコピーされます。 <add> <add>これを複数回使用したり、他の属性と組み合わせたりすることもできます。仕様書では、順序が重要となっています。後の属性は前の属性をオーバーライドします。 <add> <add>```javascript <add> var props = { foo: 'default' }; <add> var component = <Component {...props} foo={'override'} />; <add> console.log(component.props.foo); // 'override' <add>``` <add> <add>## 奇妙な `...` という表現は何でしょうか? <add> <add>`...` 操作(拡張操作)は既に[ES6のarrays](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator)でサポートされています。[Object Rest と Spread Properties](https://github.com/sebmarkbage/ecmascript-rest-spread)のES7のプロポーザルもあります。JSXのきれいなシンタックスを供給するために、それらのサポートや開発中の標準使用を利用しています。
1
Ruby
Ruby
add test to column alias in `exists?` sql
6cc5e9aa81870a8a4ef1582b34abfb4deff7d115
<ide><path>activerecord/lib/active_record/relation/finder_methods.rb <ide> def first! <ide> # Person.last # returns the last object fetched by SELECT * FROM people <ide> # Person.where(["user_name = ?", user_name]).last <ide> # Person.order("created_on DESC").offset(5).last <del> # Person.last(3) # returns the last three objects fetched by SELECT * FROM people. <add> # Person.last(3) # returns the last three objects fetched by SELECT * FROM people. <ide> # <ide> # Take note that in that last case, the results are sorted in ascending order: <ide> # <ide><path>activerecord/test/cases/finder_test.rb <ide> def test_exists <ide> assert_raise(NoMethodError) { Topic.exists?([1,2]) } <ide> end <ide> <add> def test_exists_does_not_select_columns_without_alias <add> assert_sql(/SELECT\W+1 AS _one FROM ["`]topics["`]\W+LIMIT 1/) do <add> Topic.exists? <add> end <add> end <add> <ide> def test_exists_returns_true_with_one_record_and_no_args <ide> assert Topic.exists? <ide> end
2
Javascript
Javascript
add the live key, fix setup breaks
8276d477a6c3b71ddab58578e71d52af4d5845f7
<ide><path>client/src/components/Donation/Donation.js <del>/* global STRIPE_PUBLIC_KEY */ <ide> import React, { Component } from 'react'; <ide> import PropTypes from 'prop-types'; <ide> import { bindActionCreators } from 'redux'; <ide> const propTypes = { <ide> show: PropTypes.bool <ide> }; <ide> <del>const stripeKey = STRIPE_PUBLIC_KEY; <add>const stripeKey = 'pk_live_E6Z6xPM8pEsJziHW905zpAvF'; <ide> <ide> class DonationModal extends Component { <ide> constructor(...props) {
1
Javascript
Javascript
simplify the import path of axioserror
7c60c6282af765b9060bdebd72ebf997e5e667b2
<ide><path>lib/axios.js <ide> import CancelToken from'./cancel/CancelToken.js'; <ide> import isCancel from'./cancel/isCancel.js'; <ide> import {VERSION} from './env/data.js'; <ide> import toFormData from './helpers/toFormData.js'; <del>import AxiosError from '../lib/core/AxiosError.js'; <add>import AxiosError from './core/AxiosError.js'; <ide> import spread from './helpers/spread.js'; <ide> import isAxiosError from './helpers/isAxiosError.js'; <ide>
1
Text
Text
add teletype highlights from last week
e4382b1b0f99c2fd411c29dbb610a271b618048c
<ide><path>docs/focus/2018-03-12.md <ide> - Sanitize stderr from git in error notifications [#1331](https://github.com/atom/github/pull/1331) <ide> - Upgrade MacOS build to CircleCI 2.0 [#1334](https://github.com/atom/github/pull/1334) <ide> - Begin packaging bundled GPG binaries akin to the way we handle git [atom/squeegpg-native](https://github.com/atom/squeegpg-native) <add>- Teletype <add> - Released [Teletype 0.10.0](https://github.com/atom/teletype/releases/tag/v0.10.0), introducing a streamlined view of your collaborators' avatars inside the editor ([atom/teletype#332](https://github.com/atom/teletype/issues/332)) <ide> - Tree-sitter <ide> - Xray <ide> - Engineering Improvements
1
PHP
PHP
remove unused imports of configure class

82db65bdd439eb891c6f87db7bb402610245e2e1
<ide><path>tests/TestCase/Cache/CacheTest.php <ide> use Cake\Cache\Cache; <ide> use Cake\Cache\CacheRegistry; <ide> use Cake\Cache\Engine\FileEngine; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\TestSuite\TestCase; <ide> <ide><path>tests/TestCase/Console/ConsoleIoTest.php <ide> namespace Cake\Test\TestCase\Console; <ide> <ide> use Cake\Console\ConsoleIo; <del>use Cake\Core\Configure; <ide> use Cake\Log\Log; <ide> use Cake\TestSuite\TestCase; <ide> <ide><path>tests/TestCase/Console/HelperRegistryTest.php <ide> namespace Cake\Test\TestCase\Console; <ide> <ide> use Cake\Console\HelperRegistry; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\TestSuite\TestCase; <ide> <ide><path>tests/TestCase/Console/ShellDispatcherTest.php <ide> <ide> use Cake\Console\Shell; <ide> use Cake\Console\ShellDispatcher; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\TestSuite\TestCase; <ide> <ide><path>tests/TestCase/Console/ShellTest.php <ide> use Cake\Console\ConsoleIo; <ide> use Cake\Console\ConsoleOptionParser; <ide> use Cake\Console\Shell; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Filesystem\Folder; <ide> use Cake\TestSuite\TestCase; <ide><path>tests/TestCase/Controller/Component/FlashComponentTest.php <ide> use Cake\Controller\ComponentRegistry; <ide> use Cake\Controller\Component\FlashComponent; <ide> use Cake\Controller\Controller; <del>use Cake\Core\Configure; <ide> use Cake\Http\ServerRequest; <ide> use Cake\Network\Session; <ide> use Cake\TestSuite\TestCase; <ide><path>tests/TestCase/Controller/Component/PaginatorComponentTest.php <ide> use Cake\Controller\ComponentRegistry; <ide> use Cake\Controller\Component\PaginatorComponent; <ide> use Cake\Controller\Controller; <del>use Cake\Core\Configure; <ide> use Cake\Datasource\ConnectionManager; <ide> use Cake\Datasource\EntityInterface; <ide> use Cake\Http\ServerRequest; 
<ide><path>tests/TestCase/Controller/Component/RequestHandlerComponentTest.php <ide> <ide> use Cake\Controller\ComponentRegistry; <ide> use Cake\Controller\Component\RequestHandlerComponent; <del>use Cake\Core\Configure; <ide> use Cake\Event\Event; <ide> use Cake\Http\ServerRequest; <ide> use Cake\Routing\DispatcherFactory; <ide><path>tests/TestCase/Controller/ComponentTest.php <ide> use Cake\Controller\ComponentRegistry; <ide> use Cake\Controller\Component\CookieComponent; <ide> use Cake\Controller\Controller; <del>use Cake\Core\Configure; <ide> use Cake\Event\EventManager; <ide> use Cake\TestSuite\TestCase; <ide> use TestApp\Controller\ComponentTestController; <ide><path>tests/TestCase/Database/ConnectionTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Database; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Database\Connection; <ide> use Cake\Database\Driver\Mysql; <ide> use Cake\Database\Exception\NestedTransactionRollbackException; <ide><path>tests/TestCase/Error/Middleware/ErrorHandlerMiddlewareTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Error\Middleware; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Error\Middleware\ErrorHandlerMiddleware; <ide> use Cake\Http\Response; <ide> use Cake\Http\ServerRequestFactory; <ide><path>tests/TestCase/Http/ActionDispatcherTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Http; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Event\Event; <ide> use Cake\Http\ActionDispatcher; <ide> use Cake\Http\Response; <ide><path>tests/TestCase/Http/BaseApplicationTest.php <ide> <?php <ide> namespace Cake\Test\TestCase; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Http\Response; <ide> use Cake\Http\ServerRequestFactory; <ide> use Cake\TestSuite\TestCase; <ide><path>tests/TestCase/Http/ClientTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Http; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Http\Client; <ide> use Cake\Http\Client\Request; <ide> use Cake\Http\Client\Response; 
<ide><path>tests/TestCase/Http/ControllerFactoryTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Http; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Http\ControllerFactory; <ide> use Cake\Http\Response; <ide> use Cake\Http\ServerRequest; <ide><path>tests/TestCase/Log/LogTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Log; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Log\Engine\FileLog; <ide> use Cake\Log\Log; <ide><path>tests/TestCase/Network/Session/DatabaseSessionTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Network\Session; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Datasource\ConnectionManager; <ide> use Cake\Network\Session; <ide> use Cake\Network\Session\DatabaseSession; <ide><path>tests/TestCase/Network/SessionTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Network; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Network\Session; <ide> use Cake\Network\Session\CacheSession; <ide> use Cake\Network\Session\DatabaseSession; <ide><path>tests/TestCase/ORM/BehaviorRegistryTest.php <ide> */ <ide> namespace Cake\Test\TestCase\ORM; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\ORM\BehaviorRegistry; <ide> use Cake\ORM\Table; <ide><path>tests/TestCase/ORM/Locator/TableLocatorTest.php <ide> */ <ide> namespace Cake\Test\TestCase\ORM\Locator; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Datasource\ConnectionManager; <ide> use Cake\ORM\Locator\TableLocator; <ide><path>tests/TestCase/ORM/TableTest.php <ide> <ide> use ArrayObject; <ide> use Cake\Collection\Collection; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Database\Exception; <ide> use Cake\Database\Expression\QueryExpression; <ide><path>tests/TestCase/ORM/TableUuidTest.php <ide> */ <ide> namespace Cake\Test\TestCase\ORM; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Datasource\ConnectionManager; <ide> use Cake\ORM\Entity; <ide> use 
Cake\ORM\TableRegistry; <ide><path>tests/TestCase/Routing/DispatcherFactoryTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Routing; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Http\ServerRequest; <ide> use Cake\Routing\DispatcherFactory; <ide> use Cake\TestSuite\TestCase; <ide><path>tests/TestCase/Routing/Filter/ControllerFactoryFilterTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Routing\Filter; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Event\Event; <ide> use Cake\Http\Response; <ide> use Cake\Http\ServerRequest; <ide><path>tests/TestCase/Shell/Task/ExtractTaskTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Shell\Task; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Filesystem\Folder; <ide> use Cake\TestSuite\TestCase; <ide><path>tests/TestCase/TestSuite/CookieEncryptedUsingControllerTest.php <ide> */ <ide> namespace Cake\Test\TestCase\Controller; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Routing\DispatcherFactory; <ide> use Cake\Routing\Router; <ide> use Cake\TestSuite\IntegrationTestCase; <ide><path>tests/TestCase/TestSuite/IntegrationTestCaseTest.php <ide> */ <ide> namespace Cake\Test\TestCase\TestSuite; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Event\EventManager; <ide> use Cake\Http\Response; <ide> use Cake\Routing\DispatcherFactory; <ide><path>tests/TestCase/TestSuite/TestCaseTest.php <ide> */ <ide> namespace Cake\Test\TestCase\TestSuite; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\Datasource\ConnectionManager; <ide> use Cake\Event\Event; <ide><path>tests/TestCase/View/CellTest.php <ide> namespace Cake\Test\TestCase\View; <ide> <ide> use Cake\Cache\Cache; <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> use Cake\TestSuite\TestCase; <ide> use Cake\View\View; <ide><path>tests/TestCase/View/HelperRegistryTest.php <ide> */ <ide> namespace Cake\Test\TestCase\View; <ide> <del>use Cake\Core\Configure; <ide> use Cake\Core\Plugin; <ide> 
use Cake\TestSuite\TestCase; <ide> use Cake\View\Helper; <ide><path>tests/test_app/TestApp/Error/TestAppsExceptionRenderer.php <ide> namespace TestApp\Error; <ide> <ide> use Cake\Controller\Controller; <del>use Cake\Core\Configure; <ide> use Cake\Error\ExceptionRenderer; <ide> use Cake\Http\Response; <ide> use Cake\Http\ServerRequest;
31
Go
Go
fix isvalidparent logic
1cf4b2b8bd92562ffe8d7cce8a705efb3ef32ba7
<ide><path>image/cache/cache.go <ide> func isValidParent(img, parent *image.Image) bool { <ide> if len(parent.History) >= len(img.History) { <ide> return false <ide> } <del> if len(parent.RootFS.DiffIDs) >= len(img.RootFS.DiffIDs) { <add> if len(parent.RootFS.DiffIDs) > len(img.RootFS.DiffIDs) { <ide> return false <ide> } <ide> <ide><path>integration-cli/docker_cli_build_test.go <ide> func (s *DockerSuite) TestBuildWithFailure(c *check.C) { <ide> c.Assert(result.Stdout(), checker.Not(checker.Contains), "Step 2/2 : RUN nobody") <ide> } <ide> <add>func (s *DockerSuite) TestBuildCacheFromEqualDiffIDsLength(c *check.C) { <add> dockerfile := ` <add> FROM busybox <add> RUN echo "test" <add> ENTRYPOINT ["sh"]` <add> ctx := fakeContext(c, dockerfile, map[string]string{ <add> "Dockerfile": dockerfile, <add> }) <add> defer ctx.Close() <add> <add> buildImageSuccessfully(c, "build1", withExternalBuildContext(ctx)) <add> id1 := getIDByName(c, "build1") <add> <add> // rebuild with cache-from <add> result := buildImage("build2", withBuildFlags("--cache-from=build1"), withExternalBuildContext(ctx)) <add> result.Assert(c, icmd.Success) <add> id2 := getIDByName(c, "build2") <add> c.Assert(id1, checker.Equals, id2) <add> c.Assert(strings.Count(result.Combined(), "Using cache"), checker.Equals, 2) <add>} <add> <ide> func (s *DockerSuite) TestBuildCacheFrom(c *check.C) { <ide> testRequires(c, DaemonIsLinux) // All tests that do save are skipped in windows <ide> dockerfile := `
2
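The Go fix above relaxes the DiffID comparison from `>=` to `>`, so a cache parent whose RootFS has the *same* number of DiffIDs as the child (e.g. a child step that adds no filesystem layer) is no longer rejected. A hypothetical Python sketch of the corrected predicate, with lengths standing in for the history and DiffID slices:

```python
def is_valid_parent(img_history_len, img_diffids_len,
                    parent_history_len, parent_diffids_len):
    """Sketch of the corrected check: the parent's history must be
    strictly shorter, but its DiffID count may be *equal* to the
    child's (the case the old `>=` comparison wrongly rejected)."""
    if parent_history_len >= img_history_len:
        return False
    if parent_diffids_len > img_diffids_len:  # was >=, an off-by-one
        return False
    return True

# Equal DiffID counts are now accepted as a valid parent:
print(is_valid_parent(3, 2, 2, 2))  # True
```

This is exactly the scenario the accompanying integration test exercises: a `RUN`/`ENTRYPOINT` pair where the final step produces no new diff layer, so parent and child have equal DiffID counts.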
Ruby
Ruby
fix code style
82535696fad17f62c6f82fc4d7f43b9f02225da2
<ide><path>Library/Homebrew/livecheck/strategy/sparkle.rb <ide> class Sparkle <ide> def self.match?(url) <ide> return false unless url.match?(%r{^https?://}) <ide> <del> xml = url.end_with?('.xml') <add> xml = url.end_with?(".xml") <ide> xml ||= begin <ide> headers = Strategy.page_headers(url) <del> content_type = headers["content-type"]&.split(';', 2)&.first <add> content_type = headers["content-type"]&.split(";", 2)&.first <ide> ["application/xml", "text/xml"].include?(content_type) <ide> end <ide> return false unless xml
1
Javascript
Javascript
remove an unnecessary `undefined` in wpt
0525a147b2fb5bad45a9e91a4f1f0b0effcf97e7
<ide><path>test/common/wpt.js <ide> class ResourceLoader { <ide> } <ide> <ide> class StatusRule { <del> constructor(key, value, pattern = undefined) { <add> constructor(key, value, pattern) { <ide> this.key = key; <ide> this.requires = value.requires || []; <ide> this.fail = value.fail;
1
Text
Text
add v3.15.0-beta.1 to changelog
1a1c1e703f65454c5626dfe0902129fdd6ff7646
<ide><path>CHANGELOG.md <ide> # Ember Changelog <ide> <add>### v3.15.0-beta.1 (October 31, 2019) <add> <add>- [#17948](https://github.com/emberjs/ember.js/pull/17948) [DEPRECATION] Deprecate `Component#isVisible` per [RFC #324](https://github.com/emberjs/rfcs/blob/master/text/0324-deprecate-component-isvisible.md). <add>- [#18491](https://github.com/emberjs/ember.js/pull/18491) [DEPRECATION] Deprecate `{{partial}}` per [RFC #449](https://github.com/emberjs/rfcs/blob/master/text/0449-deprecate-partials.md). <add>- [#18441](https://github.com/emberjs/ember.js/pull/18441) [DEPRECATION] Deprecate window.ENV <add> <ide> ### v3.14.1 (October 30, 2019) <ide> <ide> - [#18244](https://github.com/emberjs/ember.js/pull/18244) [BUGFIX] Fix query param assertion when using the router services `transitionTo` to redirect _during_ an existing transition.
1
Text
Text
add link to practice website
cd852622565a516400bf625734465f752c02a07b
<ide><path>guide/english/git/git-branch/index.md <ide> man git-branch <ide> - The `git commit` command: <a href='https://guide.freecodecamp.org/git/git-commit/' target='_blank' rel='nofollow'>fCC Guide</a> <ide> - The `git stash` command: <a href='https://guide.freecodecamp.org/git/git-stash/' target='_blank' rel='nofollow'>fCC Guide</a> <ide> - Git documentation: <a href='https://git-scm.com/docs/git-branch' target='_blank' rel='nofollow'>branch</a> <add>- An in-depth game and visual aide: <a href='https://learngitbranching.js.org/' target='_blank' rel='nofollow'>LearnGitBranching.js</a>
1
Python
Python
fix trainer callback
1d5ea34f6abd0b62ed89a8d5fa4cc14ed970cf67
<ide><path>src/transformers/trainer_callback.py <ide> def on_prediction_step(self, args, state, control, eval_dataloader=None, **kwarg <ide> <ide> def on_evaluate(self, args, state, control, **kwargs): <ide> if state.is_local_process_zero: <del> self.prediction_bar.close() <add> if self.prediction_bar is not None: <add> self.prediction_bar.close() <ide> self.prediction_bar = None <ide> <ide> def on_log(self, args, state, control, logs=None, **kwargs):
1
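The one-line fix above is the standard "guard before close" pattern: `on_evaluate` can run without any prediction step ever having created the bar, so closing it unconditionally raised on `None`. A hypothetical standalone sketch (the dict stands in for a tqdm bar):

```python
class ProgressReporter:
    """Toy stand-in for the trainer callback's prediction-bar handling."""
    def __init__(self):
        self.prediction_bar = None  # may never be created

    def on_prediction_step(self):
        if self.prediction_bar is None:
            self.prediction_bar = {"closed": False}  # stands in for tqdm

    def on_evaluate(self):
        # The bug: closing unconditionally raised AttributeError when
        # no prediction step had ever created the bar.
        if self.prediction_bar is not None:
            self.prediction_bar["closed"] = True
        self.prediction_bar = None

r = ProgressReporter()
r.on_evaluate()  # safe even though no prediction step ran
print(r.prediction_bar)  # None
```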
PHP
PHP
fix typo in comment
8640c80dc92d57caf1ab3b4d6d87e6ef749893b5
<ide><path>tests/TestCase/ORM/Behavior/TranslateBehaviorShadowTableTest.php <ide> public function testFindWithBTMAssociations() <ide> * <ide> * The parent test expects description translations in only some of the records <ide> * that's incompatible with the shadow-translate behavior, since the schema <del> * dictates what fields to expect to be translated and doesnt permit any EAV <add> * dictates what fields to expect to be translated and doesn't permit any EAV <ide> * style translations <ide> * <ide> * @return void
1
Text
Text
remove reference to "backspace" regex
4a7571492d1a7e4436e9131d46c30cfc31d631fe
<ide><path>curriculum/challenges/english/02-javascript-algorithms-and-data-structures/basic-javascript/escape-sequences-in-strings.english.md <ide> videoUrl: 'https://scrimba.com/c/cvmqRh6' <ide> <ide> ## Description <ide> <section id='description'> <del>Quotes are not the only characters that can be <dfn>escaped</dfn> inside a string. There are two reasons to use escaping characters: First is to allow you to use characters you might not otherwise be able to type out, such as a backspace. Second is to allow you to represent multiple quotes in a string without JavaScript misinterpreting what you mean. We learned this in the previous challenge. <del><table class="table table-striped"><thead><tr><th>Code</th><th>Output</th></tr></thead><tbody><tr><td><code>\'</code></td><td>single quote</td></tr><tr><td><code>\"</code></td><td>double quote</td></tr><tr><td><code>\\</code></td><td>backslash</td></tr><tr><td><code>\n</code></td><td>newline</td></tr><tr><td><code>\r</code></td><td>carriage return</td></tr><tr><td><code>\t</code></td><td>tab</td></tr><tr><td><code>\b</code></td><td>backspace</td></tr><tr><td><code>\f</code></td><td>form feed</td></tr></tbody></table> <add>Quotes are not the only characters that can be <dfn>escaped</dfn> inside a string. There are two reasons to use escaping characters:<ol><li>To allow you to use characters you may not otherwise be able to type out, such as a carriage returns.</li><li>To allow you to represent multiple quotes in a string without JavaScript misinterpreting what you mean.</li></ol>We learned this in the previous challenge. 
<add><table class="table table-striped"><thead><tr><th>Code</th><th>Output</th></tr></thead><tbody><tr><td><code>\'</code></td><td>single quote</td></tr><tr><td><code>\"</code></td><td>double quote</td></tr><tr><td><code>\\</code></td><td>backslash</td></tr><tr><td><code>\n</code></td><td>newline</td></tr><tr><td><code>\r</code></td><td>carriage return</td></tr><tr><td><code>\t</code></td><td>tab</td></tr><tr><td><code>\b</code></td><td>word boundary</td></tr><tr><td><code>\f</code></td><td>form feed</td></tr></tbody></table> <ide> <em>Note that the backslash itself must be escaped in order to display as a backslash.</em> <ide> </section> <ide>
1
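The escape-sequence table in the patch above is about JavaScript string literals, but most of the codes behave identically in Python, which makes them easy to poke at interactively. A small Python sketch (Python-specific semantics noted in the comments):

```python
# Most escape codes from the table above work the same in Python
# string literals. In a Python *string literal*, \b is the backspace
# control character (0x08); the word-boundary meaning applies only
# inside regular expression patterns.
s = "a\tb\nc"
print(len(s))       # 5: 'a', tab, 'b', newline, 'c'
print("\\")         # a single backslash must itself be escaped
print(repr("\b"))   # '\x08'
```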
PHP
PHP
refactor the request class
3d684136b8c7d5d9f598e090627029cd792b2afc
<ide><path>system/request.php <ide> public static function uri() <ide> { <ide> if ( ! is_null(static::$uri)) return static::$uri; <ide> <add> $uri = static::raw_uri(); <add> <add> if (strpos($uri, $base = parse_url(Config::get('application.url'), PHP_URL_PATH)) === 0) <add> { <add> $uri = substr($uri, strlen($base)); <add> } <add> <add> if (strpos($uri, $index = '/index.php') === 0) <add> { <add> $uri = substr($uri, strlen($index)); <add> } <add> <add> return static::$uri = (($uri = trim($uri, '/')) == '') ? '/' : $uri; <add> } <add> <add> /** <add> * Get the raw request URI from the $_SERVER array. <add> * <add> * @return string <add> */ <add> private static function raw_uri() <add> { <ide> if (isset($_SERVER['PATH_INFO'])) <ide> { <ide> $uri = $_SERVER['PATH_INFO']; <ide> public static function uri() <ide> throw new \Exception("Malformed request URI. Request terminated."); <ide> } <ide> <del> if (strpos($uri, $base = parse_url(Config::get('application.url'), PHP_URL_PATH)) === 0) <del> { <del> $uri = substr($uri, strlen($base)); <del> } <del> <del> if (strpos($uri, $index = '/index.php') === 0) <del> { <del> $uri = substr($uri, strlen($index)); <del> } <del> <del> return static::$uri = (($uri = trim($uri, '/')) == '') ? '/' : $uri; <add> return $uri; <ide> } <ide> <ide> /** <ide> public static function route_is($name) <ide> */ <ide> public static function __callStatic($method, $parameters) <ide> { <del> // Dynamically determine if a given route is handling the request. <ide> if (strpos($method, 'route_is_') === 0) <ide> { <ide> return static::route_is(substr($method, 9));
1
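The refactor above splits raw-URI detection out of `Request::uri()`, leaving the normalization steps (strip the application base path, strip the `/index.php` front-controller segment, trim slashes, default to `/`) in the public method. A hypothetical Python sketch of just that normalization pipeline:

```python
def clean_uri(raw_uri, base_path):
    """Sketch of the Request::uri() normalization above (names
    hypothetical): strip the configured base path and the
    front-controller segment, then trim slashes."""
    uri = raw_uri
    if base_path and uri.startswith(base_path):
        uri = uri[len(base_path):]
    index = "/index.php"
    if uri.startswith(index):
        uri = uri[len(index):]
    uri = uri.strip("/")
    return uri if uri else "/"

print(clean_uri("/myapp/index.php/user/profile/", "/myapp"))  # user/profile
print(clean_uri("/myapp/index.php", "/myapp"))                # /
```

The order matters: the base path must be removed before looking for `/index.php`, since the front controller sits under the application's base URL.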
Python
Python
update json imports
d740bae95a30d473e9de0019bc31475900caa435
<ide><path>rest_framework/fields.py <ide> import datetime <ide> import decimal <ide> import inspect <del>import json <ide> import re <ide> import uuid <ide> from collections import OrderedDict <ide> ) <ide> from rest_framework.exceptions import ErrorDetail, ValidationError <ide> from rest_framework.settings import api_settings <del>from rest_framework.utils import html, humanize_datetime, representation <add>from rest_framework.utils import html, humanize_datetime, json, representation <ide> <ide> <ide> class empty: <ide><path>rest_framework/parsers.py <ide> from __future__ import unicode_literals <ide> <ide> import codecs <del>import json <ide> <ide> from django.conf import settings <ide> from django.core.files.uploadhandler import StopFutureHandlers <ide> <ide> from rest_framework import renderers <ide> from rest_framework.exceptions import ParseError <add>from rest_framework.utils import json <ide> <ide> <ide> class DataAndFiles(object): <ide><path>rest_framework/renderers.py <ide> from __future__ import unicode_literals <ide> <ide> import base64 <del>import json <ide> from collections import OrderedDict <ide> <ide> from django import forms <ide> from rest_framework.exceptions import ParseError <ide> from rest_framework.request import is_form_media_type, override_method <ide> from rest_framework.settings import api_settings <del>from rest_framework.utils import encoders <add>from rest_framework.utils import encoders, json <ide> from rest_framework.utils.breadcrumbs import get_breadcrumbs <ide> from rest_framework.utils.field_mapping import ClassLookupDict <ide> <ide><path>rest_framework/utils/serializer_helpers.py <ide> from __future__ import unicode_literals <ide> <ide> import collections <del>import json <ide> from collections import OrderedDict <ide> <ide> from django.utils.encoding import force_text <ide> <ide> from rest_framework.compat import unicode_to_repr <add>from rest_framework.utils import json <ide> <ide> <ide> class ReturnDict(OrderedDict): 
<ide><path>tests/test_renderers.py <ide> # -*- coding: utf-8 -*- <ide> from __future__ import unicode_literals <ide> <del>import json <ide> import re <ide> from collections import MutableMapping, OrderedDict <ide> <ide> from rest_framework.response import Response <ide> from rest_framework.settings import api_settings <ide> from rest_framework.test import APIRequestFactory <add>from rest_framework.utils import json <ide> from rest_framework.views import APIView <ide> <ide> DUMMYSTATUS = status.HTTP_200_OK <ide><path>tests/test_routers.py <ide> from __future__ import unicode_literals <ide> <del>import json <ide> from collections import namedtuple <ide> <ide> import pytest <ide> from rest_framework.response import Response <ide> from rest_framework.routers import DefaultRouter, SimpleRouter <ide> from rest_framework.test import APIRequestFactory <add>from rest_framework.utils import json <ide> <ide> factory = APIRequestFactory() <ide>
6
Python
Python
move view to admin and fix print statement
a7aa4c767c055c765e9e8ab97a70ac8d4b2a9df6
<ide><path>airflow/models.py <ide> class Variable(Base): <ide> val = Column(Text) <ide> <ide> def __repr__(self): <del> return '"{}" : {}'.format(self.key, self.key) <add> return '{} : {}'.format(self.key, self.val) <ide><path>airflow/www/app.py <ide> class VariableView(LoginMixin, ModelView): <ide> pass <ide> <ide> mv = VariableView( <del> models.Variable, Session, name="Variables", category="Browse") <add> models.Variable, Session, name="Variables", category="Admin") <ide> admin.add_view(mv)
2
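The `__repr__` fix above is a classic copy-paste bug: the format string interpolated `self.key` twice, so the variable's value never appeared. A minimal stand-in class demonstrating the corrected behavior:

```python
class Variable:
    """Minimal stand-in for the Airflow model above."""
    def __init__(self, key, val):
        self.key = key
        self.val = val

    def __repr__(self):
        # Before the fix this formatted self.key twice, so the value
        # never appeared in the output.
        return '{} : {}'.format(self.key, self.val)

print(repr(Variable("env", "prod")))  # env : prod
```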