| content_type stringclasses 8 values | main_lang stringclasses 7 values | message stringlengths 1 50 | sha stringlengths 40 40 | patch stringlengths 52 962k | file_count int64 1 300 |
|---|---|---|---|---|---|
Ruby | Ruby | remove the assumption of schema in database_url | 7429bc58a0dffa94636b21cc0cba1d19a5ae7a84 | <ide><path>activerecord/lib/active_record/connection_adapters/connection_specification.rb
<ide> class ConnectionUrlResolver # :nodoc:
<ide> def initialize(url)
<ide> raise "Database URL cannot be empty" if url.blank?
<ide> @uri = uri_parser.parse(url)
<del> @adapter = @uri.scheme.tr('-', '_')
<add> @adapter = @uri.scheme && @uri.scheme.tr('-', '_')
<ide> @adapter = "postgresql" if @adapter == "postgres"
<ide>
<ide> if @uri.opaque | 1 |
Ruby | Ruby | remove dead code. @klass isn't used anymore | 15adf778af4e1e23d68bfe7684ab2337e5091931 | <ide><path>actionpack/lib/action_dispatch/routing/route_set.rb
<ide> class OptimizedUrlHelper < UrlHelper # :nodoc:
<ide>
<ide> def initialize(route, options)
<ide> super
<del> @klass = Journey::Router::Utils
<ide> @required_parts = @route.required_parts
<ide> @arg_size = @required_parts.size
<ide> end | 1 |
Text | Text | fix linter issue | 1ad09593fd077ecc4b42e8f73740349fec9d469a | <ide><path>doc/api/assert.md
<ide> Legacy assertion mode uses the [Abstract Equality Comparison][] in:
<ide>
<ide> To use legacy assertion mode:
<ide>
<del>```cjs
<add>```mjs
<ide> import assert from 'assert';
<ide> ```
<ide> | 1 |
Javascript | Javascript | remove obsolete lint comment | bda34bde56f688d38a908cc851e892402bef6f23 | <ide><path>test/parallel/test-repl.js
<ide> // OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
<ide> // USE OR OTHER DEALINGS IN THE SOFTWARE.
<ide>
<del>/* eslint-disable max-len, strict */
<ide> 'use strict';
<ide> const common = require('../common');
<ide> const assert = require('assert'); | 1 |
Python | Python | fix flaky test | 8b42fff90e14503d5612106d8d17aebc58a9c061 | <ide><path>tests/keras/test_optimizers.py
<ide> def test_adagrad():
<ide>
<ide>
<ide> def test_adadelta():
<del> _test_optimizer(Adadelta())
<del> _test_optimizer(Adadelta(decay=1e-3))
<add> _test_optimizer(Adadelta(), target=0.83)
<add> _test_optimizer(Adadelta(decay=1e-3), target=0.83)
<ide>
<ide>
<ide> def test_adam(): | 1 |
Text | Text | add markdown identifier to code language | dd3c0e08939e4333d53acc9170b906958e42c58e | <ide><path>guide/english/algorithms/sorting-algorithms/radix-sort/index.md
<ide> Finally , we sort according to the hundred's digit (most significant digit):
<ide> The array becomes : 10, 11, 17, 21, 34, 44, 123, 654 which is sorted. This is how our algorithm works.
<ide>
<ide> An implementation in C:
<del>```
<add>```c
<ide> void countsort(int arr[],int n,int place){
<del>
<del> int i,freq[range]={0}; //range for integers is 10 as digits range from 0-9
<del> int output[n];
<del>
<del> for(i=0;i<n;i++)
<del> freq[(arr[i]/place)%range]++;
<del>
<del> for(i=1;i<range;i++)
<del> freq[i]+=freq[i-1];
<del>
<del> for(i=n-1;i>=0;i--){
<del> output[freq[(arr[i]/place)%range]-1]=arr[i];
<del> freq[(arr[i]/place)%range]--;
<del> }
<del>
<del> for(i=0;i<n;i++)
<del> arr[i]=output[i];
<add> int i,freq[range]={0}; //range for integers is 10 as digits range from 0-9
<add> int output[n];
<add>
<add> for(i=0;i<n;i++)
<add> freq[(arr[i]/place)%range]++;
<add>
<add> for(i=1;i<range;i++)
<add> freq[i]+=freq[i-1];
<add>
<add> for(i=n-1;i>=0;i--){
<add> output[freq[(arr[i]/place)%range]-1]=arr[i];
<add> freq[(arr[i]/place)%range]--;
<add> }
<add>
<add> for(i=0;i<n;i++)
<add> arr[i]=output[i];
<ide> }
<del>
<add>
<ide> void radixsort(ll arr[],int n,int maxx){ //maxx is the maximum element in the array
<del>
<del> int mul=1;
<del> while(maxx){
<del> countsort(arr,n,mul);
<del> mul*=10;
<del> maxx/=10;
<del> }
<add> int mul=1;
<add> while(maxx){
<add> countsort(arr,n,mul);
<add> mul*=10;
<add> maxx/=10;
<add> }
<ide> }
<ide> ```
<ide> An implementation in python :
<ide>
<del>```
<add>```py
<ide> def counting_sort(arr, max_value, get_index):
<ide> counts = [0] * max_value
<ide> | 1 |
Ruby | Ruby | switch bottle provider over to bintray | 77d47de3b482b78782dec7fdd1392a7dee436a38 | <ide><path>Library/Homebrew/software_spec.rb
<ide> def build_url(root_url, filename)
<ide> class BottleSpecification
<ide> DEFAULT_PREFIX = "/usr/local".freeze
<ide> DEFAULT_CELLAR = "/usr/local/Cellar".freeze
<del> if ENV["HOMEBREW_BINTRAY_TESTING"]
<add> if ENV["HOMEBREW_SOURCEFORGE_TESTING"]
<add> DEFAULT_ROOT_URL = "https://downloads.sf.net/project/machomebrew/Bottles".freeze
<add> else
<ide> DEFAULT_DOMAIN = "https://homebrew.bintray.com".freeze
<ide> DEFAULT_ROOT_URL = "#{DEFAULT_DOMAIN}/bottles".freeze
<del> else
<del> DEFAULT_ROOT_URL = "https://downloads.sf.net/project/machomebrew/Bottles".freeze
<ide> end
<ide>
<ide> attr_rw :root_url, :prefix, :cellar, :revision | 1 |
Javascript | Javascript | fix flaky test on firefox 54+ and safari 9 | 7c876285cbfebf69a2ea64a216f903cf8d3803ee | <ide><path>test/ng/directive/ngOptionsSpec.js
<ide> describe('ngOptions', function() {
<ide> });
<ide>
<ide>
<del> it('should not re-set the `selected` property if it already has the correct value', function() {
<del> scope.values = [{name: 'A'}, {name: 'B'}];
<del> createMultiSelect();
<add> // Support: Safari 9
<add> // This test relies defining a getter/setter `selected` property on either `<option>` elements
<add> // or their prototype. Some browsers (including Safari 9) are very flakey when the
<add> // getter/setter is not defined on the prototype (probably due to some bug). On Safari 9, the
<add> // getter/setter that is already defined on the `<option>` element's prototype is not
<add> // configurable, so we can't overwrite it with our spy.
<add> if (!/\b9(?:\.\d+)+ safari/i.test(window.navigator.userAgent)) {
<add> it('should not re-set the `selected` property if it already has the correct value', function() {
<add> scope.values = [{name: 'A'}, {name: 'B'}];
<add> createMultiSelect();
<ide>
<del> var options = element.find('option');
<del> var optionsSetSelected = [];
<del> var _selected = [];
<del>
<del> // Set up spies
<del> forEach(options, function(option, i) {
<del> optionsSetSelected[i] = jasmine.createSpy('optionSetSelected' + i);
<del> _selected[i] = option.selected;
<del> Object.defineProperty(option, 'selected', {
<del> get: function() { return _selected[i]; },
<del> set: optionsSetSelected[i].and.callFake(function(value) { _selected[i] = value; })
<add> var options = element.find('option');
<add> var optionsSetSelected = [];
<add> var _selected = [];
<add>
<add> // Set up spies
<add> var optionProto = Object.getPrototypeOf(options[0]);
<add> var originalSelectedDescriptor = isFunction(Object.getOwnPropertyDescriptor) &&
<add> Object.getOwnPropertyDescriptor(optionProto, 'selected');
<add> var addSpiesOnProto = originalSelectedDescriptor && originalSelectedDescriptor.configurable;
<add>
<add> forEach(options, function(option, i) {
<add> var setSelected = function(value) { _selected[i] = value; };
<add> optionsSetSelected[i] = jasmine.createSpy('optionSetSelected' + i).and.callFake(setSelected);
<add> setSelected(option.selected);
<ide> });
<del> });
<ide>
<del> // Select `optionA`
<del> scope.$apply('selected = [values[0]]');
<del>
<del> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(true);
<del> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<del> expect(options[0].selected).toBe(true);
<del> expect(options[1].selected).toBe(false);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del>
<del> // Select `optionB` (`optionA` remains selected)
<del> scope.$apply('selected.push(values[1])');
<del>
<del> expect(optionsSetSelected[0]).not.toHaveBeenCalled();
<del> expect(optionsSetSelected[1]).toHaveBeenCalledOnceWith(true);
<del> expect(options[0].selected).toBe(true);
<del> expect(options[1].selected).toBe(true);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del>
<del> // Unselect `optionA` (`optionB` remains selected)
<del> scope.$apply('selected.shift()');
<del>
<del> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(false);
<del> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<del> expect(options[0].selected).toBe(false);
<del> expect(options[1].selected).toBe(true);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del>
<del> // Reselect `optionA` (`optionB` remains selected)
<del> scope.$apply('selected.push(values[0])');
<del>
<del> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(true);
<del> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<del> expect(options[0].selected).toBe(true);
<del> expect(options[1].selected).toBe(true);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del>
<del> // Unselect `optionB` (`optionA` remains selected)
<del> scope.$apply('selected.shift()');
<del>
<del> expect(optionsSetSelected[0]).not.toHaveBeenCalled();
<del> expect(optionsSetSelected[1]).toHaveBeenCalledOnceWith(false);
<del> expect(options[0].selected).toBe(true);
<del> expect(options[1].selected).toBe(false);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del>
<del> // Unselect `optionA`
<del> scope.$apply('selected.length = 0');
<del>
<del> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(false);
<del> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<del> expect(options[0].selected).toBe(false);
<del> expect(options[1].selected).toBe(false);
<del> optionsSetSelected[0].calls.reset();
<del> optionsSetSelected[1].calls.reset();
<del> });
<add> if (!addSpiesOnProto) {
<add> forEach(options, function(option, i) {
<add> Object.defineProperty(option, 'selected', {
<add> get: function() { return _selected[i]; },
<add> set: optionsSetSelected[i]
<add> });
<add> });
<add> } else {
<add> // Support: Firefox 54+
<add> // We cannot use the above (simpler) method on all browsers because of Firefox 54+, which
<add> // is very flaky when the getter/setter property is defined on the element itself and not
<add> // the prototype. (Possibly the result of some (buggy?) optimization.)
<add> var getSelected = function(index) { return _selected[index]; };
<add> var setSelected = function(index, value) { optionsSetSelected[index](value); };
<add> var getSelectedOriginal = function(option) {
<add> return originalSelectedDescriptor.get.call(option);
<add> };
<add> var setSelectedOriginal = function(option, value) {
<add> originalSelectedDescriptor.set.call(option, value);
<add> };
<add> var getIndexAndCall = function(option, foundFn, notFoundFn, value) {
<add> for (var i = 0, ii = options.length; i < ii; ++i) {
<add> if (options[i] === option) return foundFn(i, value);
<add> }
<add> return notFoundFn(option, value);
<add> };
<add>
<add> Object.defineProperty(optionProto, 'selected', {
<add> get: function() {
<add> return getIndexAndCall(this, getSelected, getSelectedOriginal);
<add> },
<add> set: function(value) {
<add> return getIndexAndCall(this, setSelected, setSelectedOriginal, value);
<add> }
<add> });
<add> }
<add>
<add> // Select `optionA`
<add> scope.$apply('selected = [values[0]]');
<add>
<add> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(true);
<add> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<add> expect(options[0].selected).toBe(true);
<add> expect(options[1].selected).toBe(false);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Select `optionB` (`optionA` remains selected)
<add> scope.$apply('selected.push(values[1])');
<add>
<add> expect(optionsSetSelected[0]).not.toHaveBeenCalled();
<add> expect(optionsSetSelected[1]).toHaveBeenCalledOnceWith(true);
<add> expect(options[0].selected).toBe(true);
<add> expect(options[1].selected).toBe(true);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Unselect `optionA` (`optionB` remains selected)
<add> scope.$apply('selected.shift()');
<add>
<add> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(false);
<add> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<add> expect(options[0].selected).toBe(false);
<add> expect(options[1].selected).toBe(true);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Reselect `optionA` (`optionB` remains selected)
<add> scope.$apply('selected.push(values[0])');
<add>
<add> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(true);
<add> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<add> expect(options[0].selected).toBe(true);
<add> expect(options[1].selected).toBe(true);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Unselect `optionB` (`optionA` remains selected)
<add> scope.$apply('selected.shift()');
<add>
<add> expect(optionsSetSelected[0]).not.toHaveBeenCalled();
<add> expect(optionsSetSelected[1]).toHaveBeenCalledOnceWith(false);
<add> expect(options[0].selected).toBe(true);
<add> expect(options[1].selected).toBe(false);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Unselect `optionA`
<add> scope.$apply('selected.length = 0');
<add>
<add> expect(optionsSetSelected[0]).toHaveBeenCalledOnceWith(false);
<add> expect(optionsSetSelected[1]).not.toHaveBeenCalled();
<add> expect(options[0].selected).toBe(false);
<add> expect(options[1].selected).toBe(false);
<add> optionsSetSelected[0].calls.reset();
<add> optionsSetSelected[1].calls.reset();
<add>
<add> // Support: Firefox 54+
<add> // Restore `originalSelectedDescriptor`
<add> if (addSpiesOnProto) {
<add> Object.defineProperty(optionProto, 'selected', originalSelectedDescriptor);
<add> }
<add> });
<add> }
<ide>
<ide> if (window.MutationObserver) {
<ide> //IE9 and IE10 do not support MutationObserver | 1 |
Javascript | Javascript | keep compatibility with `.[ext]` | 8788add9126014473e74d643e4f3511e7890979a | <ide><path>lib/NormalModule.js
<ide> class NormalModule extends Module {
<ide> /** @type {string} */
<ide> this.rawRequest = rawRequest;
<ide> /** @type {boolean} */
<del> this.binary = type.startsWith("webassembly");
<add> this.binary = /^(url|webassembly)\b/.test(type);
<ide> this.parser = parser;
<ide> this.generator = generator;
<ide> this.resource = resource;
<ide><path>lib/Template.js
<ide> const MATCH_PADDED_HYPHENS_REPLACE_REGEX = /^-|-$/g;
<ide> * @property {string} hash
<ide> * @property {string} fullHash
<ide> * @property {TODO} outputOptions
<del> * @property {{javascript: ModuleTemplate, webassembly: ModuleTemplate}} moduleTemplates
<add> * @property {{url: ModuleTemplate, javascript: ModuleTemplate, webassembly: ModuleTemplate}} moduleTemplates
<ide> * @property {DependencyTemplates} dependencyTemplates
<ide> * @property {RuntimeTemplate} runtimeTemplate
<ide> * @property {ModuleGraph} moduleGraph
<ide><path>lib/TemplatedPathPlugin.js
<ide> const replacer = (value, allowEmpty) => {
<ide>
<ide> return "";
<ide> } else {
<add> // [ext] has `.` but file-loader is specified `[hash].[ext]` as default
<add> if (match === "[ext]" && args.length > 2) {
<add> if (args[2].includes(".[ext]")) {
<add> return value.slice(1);
<add> }
<add> }
<add>
<ide> return `${value}`;
<ide> }
<ide> }; | 3 |
Javascript | Javascript | fix tests on firefox v93+ | 6a52c4f90cc661c0605cd98e6cb04455ba913f58 | <ide><path>test/ng/directive/inputSpec.js
<ide> describe('input', function() {
<ide> var inputElm = helper.compileInput('<input type="datetime-local" ng-model="breakMe"/>');
<ide>
<ide> $rootScope.$apply(function() {
<del> $rootScope.breakMe = new Date(2009, 0, 6, 16, 25, 0);
<add> $rootScope.breakMe = new Date(2009, 0, 6, 16, 25, 1, 337);
<ide> });
<ide>
<del> expect(inputElm.val()).toBe('2009-01-06T16:25:00.000');
<add> expect(inputElm.val()).toBe('2009-01-06T16:25:01.337');
<ide>
<ide> //set to text for browsers with datetime-local validation.
<ide> inputElm[0].setAttribute('type', 'text');
<ide> describe('input', function() {
<ide> it('should use UTC if specified in the options', function() {
<ide> var inputElm = helper.compileInput('<input type="datetime-local" ng-model="value" ng-model-options="{timezone: \'UTC\'}" />');
<ide>
<del> helper.changeInputValueTo('2000-01-01T01:02');
<del> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 0));
<add> helper.changeInputValueTo('2000-01-01T01:02:03.456');
<add> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 3, 456));
<ide>
<ide> $rootScope.$apply(function() {
<del> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 0));
<add> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 3, 456));
<ide> });
<del> expect(inputElm.val()).toBe('2001-01-01T01:02:00.000');
<add> expect(inputElm.val()).toBe('2001-01-01T01:02:03.456');
<ide> });
<ide>
<ide>
<ide> it('should be possible to override the timezone', function() {
<ide> var inputElm = helper.compileInput('<input type="datetime-local" ng-model="value" ng-model-options="{timezone: \'UTC\'}" />');
<ide>
<del> helper.changeInputValueTo('2000-01-01T01:02');
<del> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 0));
<add> helper.changeInputValueTo('2000-01-01T01:02:03.456');
<add> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 3, 456));
<ide>
<ide> inputElm.controller('ngModel').$overrideModelOptions({timezone: '+0500'});
<ide> $rootScope.$apply(function() {
<del> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 0));
<add> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 3, 456));
<ide> });
<del> expect(inputElm.val()).toBe('2001-01-01T06:02:00.000');
<add> expect(inputElm.val()).toBe('2001-01-01T06:02:03.456');
<ide>
<ide> inputElm.controller('ngModel').$overrideModelOptions({timezone: 'UTC'});
<ide>
<del> helper.changeInputValueTo('2000-01-01T01:02');
<del> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 0));
<add> helper.changeInputValueTo('2000-01-01T01:02:03.456');
<add> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 3, 456));
<ide> });
<ide>
<ide>
<ide> describe('input', function() {
<ide> var inputElm = helper.compileInput(
<ide> '<input type="datetime-local" ng-model="value" ng-model-options="' + ngModelOptions + '" />');
<ide>
<del> helper.changeInputValueTo('2000-01-01T06:02');
<del> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 0));
<add> helper.changeInputValueTo('2000-01-01T06:02:03.456');
<add> expect(+$rootScope.value).toBe(Date.UTC(2000, 0, 1, 1, 2, 3, 456));
<ide>
<ide> $rootScope.$apply(function() {
<del> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 0));
<add> $rootScope.value = new Date(Date.UTC(2001, 0, 1, 1, 2, 3, 456));
<ide> });
<del> expect(inputElm.val()).toBe('2001-01-01T06:02:00.000');
<add> expect(inputElm.val()).toBe('2001-01-01T06:02:03.456');
<ide> }
<ide> );
<ide>
<ide> describe('input', function() {
<ide> it('should allow to specify the seconds', function() {
<ide> var inputElm = helper.compileInput('<input type="datetime-local" ng-model="value"" />');
<ide>
<del> helper.changeInputValueTo('2000-01-01T01:02:03');
<del> expect(+$rootScope.value).toBe(+new Date(2000, 0, 1, 1, 2, 3));
<add> helper.changeInputValueTo('2000-01-01T01:02:03.456');
<add> expect(+$rootScope.value).toBe(+new Date(2000, 0, 1, 1, 2, 3, 456));
<ide>
<ide> $rootScope.$apply(function() {
<del> $rootScope.value = new Date(2001, 0, 1, 1, 2, 3);
<add> $rootScope.value = new Date(2001, 0, 1, 1, 2, 3, 456);
<ide> });
<del> expect(inputElm.val()).toBe('2001-01-01T01:02:03.000');
<add> expect(inputElm.val()).toBe('2001-01-01T01:02:03.456');
<ide> });
<ide>
<ide>
<ide> describe('input', function() {
<ide> it('should allow four or more digits in year', function() {
<ide> var inputElm = helper.compileInput('<input type="datetime-local" ng-model="value" />');
<ide>
<del> helper.changeInputValueTo('10123-01-01T01:02');
<del> expect(+$rootScope.value).toBe(+new Date(10123, 0, 1, 1, 2, 0));
<add> helper.changeInputValueTo('10123-01-01T01:02:03.456');
<add> expect(+$rootScope.value).toBe(+new Date(10123, 0, 1, 1, 2, 3, 456));
<ide>
<ide> $rootScope.$apply(function() {
<del> $rootScope.value = new Date(20456, 1, 1, 1, 2, 0);
<add> $rootScope.value = new Date(20456, 1, 1, 1, 2, 3, 456);
<ide> });
<del> expect(inputElm.val()).toBe('20456-02-01T01:02:00.000');
<add> expect(inputElm.val()).toBe('20456-02-01T01:02:03.456');
<ide> }
<ide> );
<ide> } | 1 |
Python | Python | revert "teach gyp to write an 'all deps' rule" | fa9f31a4fda5a3782c652e56e394465805ebb50f | <ide><path>tools/gyp/pylib/gyp/generator/make.py
<ide> def CalculateMakefilePath(build_file, base_name):
<ide> for target in gyp.common.AllTargets(target_list, target_dicts, build_file):
<ide> needed_targets.add(target)
<ide>
<del> all_deps = set()
<ide> build_files = set()
<ide> include_list = set()
<ide> for qualified_target in target_list:
<ide> def CalculateMakefilePath(build_file, base_name):
<ide> os.path.dirname(makefile_path))
<ide> include_list.add(mkfile_rel_path)
<ide>
<del> if 'actions' in spec:
<del> for action in spec['actions']:
<del> all_deps.update(map(writer.Absolutify, action['inputs']))
<del> if 'sources' in spec:
<del> all_deps.update(map(writer.Absolutify, spec['sources']))
<del>
<ide> # Write out per-gyp (sub-project) Makefiles.
<ide> depth_rel_path = gyp.common.RelativePath(options.depth, os.getcwd())
<ide> for build_file in build_files:
<ide> def CalculateMakefilePath(build_file, base_name):
<ide> root_makefile.write(SHARED_FOOTER)
<ide>
<ide> root_makefile.close()
<del>
<del> # Hack to get rid of $(obj)/path/to/foo.o deps that node.gyp adds manually.
<del> all_deps = [s for s in all_deps if not '$' in s]
<del> all_deps_path = os.path.join(options.toplevel_dir, '.deps')
<del> with open(all_deps_path, 'w') as f:
<del> f.write('ALL_DEPS := \\\n\t')
<del> f.write(' \\\n\t'.join(sorted(all_deps))) | 1 |
Javascript | Javascript | add uuid field to animationclip.tojson | 4da3cc11407434a6ccc75d8789b2622cb02a57cb | <ide><path>src/animation/AnimationClip.js
<ide> Object.assign( AnimationClip, {
<ide>
<ide> 'name': clip.name,
<ide> 'duration': clip.duration,
<del> 'tracks': tracks
<add> 'tracks': tracks,
<add> 'uuid': clip.uuid
<ide>
<ide> };
<ide> | 1 |
Text | Text | explain volume_name in post /container binds | 2bd2893b9239d0bf0b584612a21ca205c0059170 | <ide><path>docs/reference/api/docker_remote_api_v1.21.md
<ide> Json Parameters:
<ide> + `container_path` to create a new volume for the container
<ide> + `host_path:container_path` to bind-mount a host path into the container
<ide> + `host_path:container_path:ro` to make the bind-mount read-only inside the container.
<add> + `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
<add> + `volume_name:container_path:ro` to make the bind mount read-only inside the container.
<ide> - **Links** - A list of links for the container. Each link entry should be
<ide> in the form of `container_name:alias`.
<ide> - **LxcConf** - LXC specific configurations. These configurations only | 1 |
Text | Text | use https for links in `readme.md` | 6d85c7e41ae2978664e4840c75d59f3a2d58cafa | <ide><path>README.md
<ide> To get a local copy of the current code, clone it using git:
<ide> $ git clone https://github.com/mozilla/pdf.js.git
<ide> $ cd pdf.js
<ide>
<del>Next, install Node.js via the [official package](http://nodejs.org) or via
<add>Next, install Node.js via the [official package](https://nodejs.org) or via
<ide> [nvm](https://github.com/creationix/nvm). You need to install the gulp package
<ide> globally (see also [gulp's getting started](https://github.com/gulpjs/gulp/blob/master/docs/getting-started.md#getting-started)):
<ide>
<ide> PDF.js is hosted on several free CDNs:
<ide>
<ide> You can play with the PDF.js API directly from your browser using the live demos below:
<ide>
<del>+ [Interactive examples](http://mozilla.github.io/pdf.js/examples/index.html#interactive-examples)
<add>+ [Interactive examples](https://mozilla.github.io/pdf.js/examples/index.html#interactive-examples)
<ide>
<ide> More examples can be found in the [examples folder](https://github.com/mozilla/pdf.js/tree/master/examples/). Some of them are using the pdfjs-dist package, which can be built and installed in this repo directory via `gulp dist-install` command.
<ide>
<ide> For an introduction to the PDF.js code, check out the presentation by our
<ide> contributor Julian Viereck:
<ide>
<del>+ http://www.youtube.com/watch?v=Iv15UY-4Fg8
<add>+ https://www.youtube.com/watch?v=Iv15UY-4Fg8
<ide>
<ide> More learning resources can be found at:
<ide>
<ide> File an issue:
<ide>
<ide> Follow us on twitter: @pdfjs
<ide>
<del>+ http://twitter.com/#!/pdfjs
<add>+ https://twitter.com/pdfjs | 1 |
PHP | PHP | remove unnecessary import | 75cdbedd505441de42427ab5bb2c7e03e5fb0777 | <ide><path>src/Illuminate/Validation/ValidationException.php
<ide> use Exception;
<ide> use Illuminate\Support\Arr;
<ide> use Illuminate\Support\Facades\Validator as ValidatorFacade;
<del>use Illuminate\Support\Str;
<ide>
<ide> class ValidationException extends Exception
<ide> { | 1 |
Go | Go | fix restore container by nspid | 9f03fd76b578f2d9d00b0a1bd76b776e20a7d681 | <ide><path>execdriver/namespaces/driver.go
<ide> func (d *driver) Kill(p *execdriver.Command, sig int) error {
<ide> }
<ide>
<ide> func (d *driver) Restore(c *execdriver.Command) error {
<del> return ErrNotSupported
<add> var (
<add> nspid int
<add> p = filepath.Join(d.root, "containers", c.ID, "root", ".nspid")
<add> )
<add> f, err := os.Open(p)
<add> if err != nil {
<add> return err
<add> }
<add> defer f.Close()
<add> if _, err := fmt.Fscanf(f, "%d", &nspid); err != nil {
<add> return err
<add> }
<add> proc, err := os.FindProcess(nspid)
<add> if err != nil {
<add> return err
<add> }
<add> _, err = proc.Wait()
<add> return err
<ide> }
<ide>
<ide> func (d *driver) Info(id string) execdriver.Info { | 1 |
Javascript | Javascript | add values to error message | 15fa9fca0c319525766ee45ff05c36bbc8d0247a | <ide><path>test/parallel/test-require-extensions-main.js
<ide> require('../common');
<ide> const assert = require('assert');
<ide> const fixtures = require('../common/fixtures');
<ide>
<del>const fixturesRequire =
<del> require(fixtures.path('require-bin', 'bin', 'req.js'));
<add>const fixturesRequire = require(fixtures.path('require-bin', 'bin', 'req.js'));
<ide>
<ide> assert.strictEqual(
<ide> fixturesRequire,
<ide> '',
<del> 'test-require-extensions-main failed to import fixture requirements'
<add> 'test-require-extensions-main failed to import fixture requirements: ' +
<add> fixturesRequire
<ide> ); | 1 |
Python | Python | log the loss value in summary exporter | 17263f301b648d461329af6070664f37e77b6abe | <ide><path>research/object_detection/model_lib_v2.py
<ide> def _dist_train_step(data_iterator):
<ide> 'steps_per_sec': np.mean(steps_per_sec_list),
<ide> 'steps_per_sec_p50': np.median(steps_per_sec_list),
<ide> 'steps_per_sec_max': max(steps_per_sec_list),
<add> 'last_batch_loss': loss
<ide> }
<ide> mixed_precision = 'bf16' if kwargs['use_bfloat16'] else 'fp32'
<ide> performance_summary_exporter(metrics, mixed_precision) | 1 |
Javascript | Javascript | add polyfills to jest setup scripts | 1ae7a77934465a33538dd73226359fda141db981 | <ide><path>packager/react-packager/src/BundlesLayout/__tests__/BundlesLayoutIntegration-test.js
<ide> describe('BundlesLayout', () => {
<ide> 'polyfills/error-guard.js',
<ide> 'polyfills/String.prototype.es6.js',
<ide> 'polyfills/Array.prototype.es6.js',
<add> 'polyfills/Array.es6.js',
<ide> ];
<ide> const baseFs = getBaseFs();
<ide>
<ide><path>packager/react-packager/src/Resolver/__tests__/Resolver-test.js
<ide> describe('Resolver', function() {
<ide> 'polyfills/String.prototype.es6.js',
<ide> ],
<ide> },
<add> { id: 'polyfills/Array.es6.js',
<add> isPolyfill: true,
<add> path: 'polyfills/Array.es6.js',
<add> dependencies: [
<add> 'polyfills/prelude.js',
<add> 'polyfills/require.js',
<add> 'polyfills/polyfills.js',
<add> 'polyfills/console.js',
<add> 'polyfills/error-guard.js',
<add> 'polyfills/String.prototype.es6.js',
<add> 'polyfills/Array.prototype.es6.js',
<add> ],
<add> }
<ide> ]);
<ide> });
<ide> });
<ide> describe('Resolver', function() {
<ide> 'polyfills/console.js',
<ide> 'polyfills/error-guard.js',
<ide> 'polyfills/String.prototype.es6.js',
<del> 'polyfills/Array.prototype.es6.js'
<add> 'polyfills/Array.prototype.es6.js',
<add> 'polyfills/Array.es6.js',
<ide> ]
<ide> },
<ide> ]);
<ide><path>packager/react-packager/src/Resolver/index.js
<ide> class Resolver {
<ide> path.join(__dirname, 'polyfills/error-guard.js'),
<ide> path.join(__dirname, 'polyfills/String.prototype.es6.js'),
<ide> path.join(__dirname, 'polyfills/Array.prototype.es6.js'),
<add> path.join(__dirname, 'polyfills/Array.es6.js'),
<ide> ].concat(this._polyfillModuleNames);
<ide>
<ide> return polyfillModuleNames.map(
<ide><path>packager/react-packager/src/Resolver/polyfills/Array.es6.js
<add>/**
<add> * Copyright 2013-2014 Facebook, Inc.
<add> * @provides Array.es6
<add> * @polyfill
<add> */
<add>
<add>/*eslint-disable */
<add>
<add>/**
<add> * Creates an array from array like objects.
<add> *
<add> * https://people.mozilla.org/~jorendorff/es6-draft.html#sec-array.from
<add> */
<add>if (!Array.from) {
<add> Array.from = function(arrayLike /*, mapFn, thisArg */) {
<add> if (arrayLike == null) {
<add> throw new TypeError('Object is null or undefined');
<add> }
<add>
<add> // Optional args.
<add> var mapFn = arguments[1];
<add> var thisArg = arguments[2];
<add>
<add> var C = this;
<add> var items = Object(arrayLike);
<add> var symbolIterator = typeof Symbol === 'function'
<add> ? Symbol.iterator
<add> : '@@iterator';
<add> var mapping = typeof mapFn === 'function';
<add> var usingIterator = typeof items[symbolIterator] === 'function';
<add> var key = 0;
<add> var ret;
<add> var value;
<add>
<add> if (usingIterator) {
<add> ret = typeof C === 'function'
<add> ? new C()
<add> : [];
<add> var it = items[symbolIterator]();
<add> var next;
<add>
<add> while (!(next = it.next()).done) {
<add> value = next.value;
<add>
<add> if (mapping) {
<add> value = mapFn.call(thisArg, value, key);
<add> }
<add>
<add> ret[key] = value;
<add> key += 1;
<add> }
<add>
<add> ret.length = key;
<add> return ret;
<add> }
<add>
<add> var len = items.length;
<add> if (isNaN(len) || len < 0) {
<add> len = 0;
<add> }
<add>
<add> ret = typeof C === 'function'
<add> ? new C(len)
<add> : new Array(len);
<add>
<add> while (key < len) {
<add> value = items[key];
<add>
<add> if (mapping) {
<add> value = mapFn.call(thisArg, value, key);
<add> }
<add>
<add> ret[key] = value;
<add>
<add> key += 1;
<add> }
<add>
<add> ret.length = key;
<add> return ret;
<add> };
<add>}
<ide><path>packager/react-packager/src/Resolver/polyfills/Array.prototype.es6.js
<ide> }
<ide> });
<ide> }
<del>
<del> /**
<del> * Creates an array from array like objects.
<del> *
<del> * https://people.mozilla.org/~jorendorff/es6-draft.html#sec-array.from
<del> */
<del> if (!Array.from) {
<del> Array.from = function(arrayLike /*, mapFn, thisArg */) {
<del> if (arrayLike == null) {
<del> throw new TypeError('Object is null or undefined');
<del> }
<del>
<del> // Optional args.
<del> var mapFn = arguments[1];
<del> var thisArg = arguments[2];
<del>
<del> var C = this;
<del> var items = Object(arrayLike);
<del> var symbolIterator = typeof Symbol === 'function'
<del> ? Symbol.iterator
<del> : '@@iterator';
<del> var mapping = typeof mapFn === 'function';
<del> var usingIterator = typeof items[symbolIterator] === 'function';
<del> var key = 0;
<del> var ret;
<del> var value;
<del>
<del> if (usingIterator) {
<del> ret = typeof C === 'function'
<del> ? new C()
<del> : [];
<del> var it = items[symbolIterator]();
<del> var next;
<del>
<del> while (!(next = it.next()).done) {
<del> value = next.value;
<del>
<del> if (mapping) {
<del> value = mapFn.call(thisArg, value, key);
<del> }
<del>
<del> ret[key] = value;
<del> key += 1;
<del> }
<del>
<del> ret.length = key;
<del> return ret;
<del> }
<del>
<del> var len = items.length;
<del> if (isNaN(len) || len < 0) {
<del> len = 0;
<del> }
<del>
<del> ret = typeof C === 'function'
<del> ? new C(len)
<del> : new Array(len);
<del>
<del> while (key < len) {
<del> value = items[key];
<del>
<del> if (mapping) {
<del> value = mapFn.call(thisArg, value, key);
<del> }
<del>
<del> ret[key] = value;
<del>
<del> key += 1;
<del> }
<del>
<del> ret.length = key;
<del> return ret;
<del> };
<del> }
<ide> })(); | 5 |
Ruby | Ruby | remove all journey constant from public api | 3b50fb6b2f413b4bfe638b3c9839fe7db5077f73 | <ide><path>actionpack/lib/action_dispatch/journey/formatter.rb
<ide> require "action_controller/metal/exceptions"
<ide>
<ide> module ActionDispatch
<add> # :stopdoc:
<ide> module Journey
<ide> # The Formatter class is used for formatting URLs. For example, parameters
<ide> # passed to +url_for+ in Rails will eventually call Formatter#generate.
<del> class Formatter # :nodoc:
<add> class Formatter
<ide> attr_reader :routes
<ide>
<ide> def initialize(routes)
<ide> def cache
<ide> end
<ide> end
<ide> end
<add> # :startdoc:
<ide> end
<ide><path>actionpack/lib/action_dispatch/journey/parser.rb
<ide>
<ide> require "action_dispatch/journey/parser_extras"
<ide> module ActionDispatch
<add> # :stopdoc:
<ide> module Journey
<ide> class Parser < Racc::Parser
<ide> ##### State transition tables begin ###
<ide> def _reduce_none(val, _values)
<ide> end
<ide> end # class Parser
<ide> end # module Journey
<add> # :startdoc:
<ide> end # module ActionDispatch
<ide><path>actionpack/lib/action_dispatch/journey/parser_extras.rb
<ide> require "action_dispatch/journey/nodes/node"
<ide>
<ide> module ActionDispatch
<del> module Journey # :nodoc:
<del> class Parser < Racc::Parser # :nodoc:
<add> # :stopdoc:
<add> module Journey
<add> class Parser < Racc::Parser
<ide> include Journey::Nodes
<ide>
<ide> def self.parse(string)
<ide> def next_token
<ide> end
<ide> end
<ide> end
<add> # :startdoc:
<ide> end
<ide><path>actionpack/lib/action_dispatch/journey/route.rb
<ide> module ActionDispatch
<del> module Journey # :nodoc:
<del> class Route # :nodoc:
<add> # :stopdoc:
<add> module Journey
<add> class Route
<ide> attr_reader :app, :path, :defaults, :name, :precedence
<ide>
<ide> attr_reader :constraints, :internal
<ide> def ast
<ide> end
<ide> end
<ide>
<del> def requirements # :nodoc:
<add> def requirements
<ide> # needed for rails `rails routes`
<ide> @defaults.merge(path.requirements).delete_if { |_,v|
<ide> /.+?/ == v
<ide> def match_verb(request)
<ide> end
<ide> end
<ide> end
<add> # :startdoc:
<ide> end
<ide><path>actionpack/lib/action_dispatch/journey/visitors.rb
<ide> module ActionDispatch
<del> module Journey # :nodoc:
<add> # :stopdoc:
<add> module Journey
<ide> class Format
<ide> ESCAPE_PATH = ->(value) { Router::Utils.escape_path(value) }
<ide> ESCAPE_SEGMENT = ->(value) { Router::Utils.escape_segment(value) }
<ide> def terminal(node, seed)
<ide> end
<ide> end
<ide> end
<add> # :startdoc:
<ide> end | 5 |
Java | Java | implement touch intercepting in rctview | d0de0767e3c2a034aca4d4cf5330b0937ea0dd9f | <ide><path>ReactAndroid/src/main/java/com/facebook/react/uimanager/TouchTargetHelper.java
<ide> private static boolean isTransformedTouchPointInView(
<ide> // This view can't be the target, but its children might
<ide> if (view instanceof ViewGroup) {
<ide> View targetView = findTouchTargetView(eventCoords, (ViewGroup) view);
<del> return targetView != view ? targetView : null;
<add> if (targetView != view) {
<add> return targetView;
<add> }
<add>
<add> // PointerEvents.BOX_NONE means that this react element cannot receive pointer events.
<add> // However, there might be virtual children that can receive pointer events, in which case
<add> // we still want to return this View and dispatch a pointer event to the virtual element.
<add> // Note that this currently only applies to Nodes/FlatViewGroup as it's the only class that
<add> // is both a ViewGroup and ReactCompoundView (ReactTextView is a ReactCompoundView but not a
<add> // ViewGroup).
<add> if (view instanceof ReactCompoundView) {
<add> int reactTag = ((ReactCompoundView)view).reactTagForTouch(eventCoords[0], eventCoords[1]);
<add> if (reactTag != view.getId()) {
<add> // make sure we exclude the View itself because of the PointerEvents.BOX_NONE
<add> return view;
<add> }
<add> }
<ide> }
<ide> return null;
<ide> | 1 |
Text | Text | change the title so it is more visible. | 02185e18dfa2f8ed3b784bb743482622165a82db | <ide><path>guide/english/python/anaconda/index.md
<ide> ---
<ide> title: Anaconda
<ide> ---
<add>
<add>## Anaconda
<ide> **Anaconda** is a package manager, environment manager and Python distribution with a collection of numerous packages. Anaconda is platform-agnostic, so you can use it whether you are on Windows, macOS or Linux.
<ide> Anaconda easily creates, saves, loads and switches between environments on your local computer. It was created for Python programs, but it can package and distribute software for any language.
<ide> Anaconda as a package manager helps you find and install packages. If you need a package that requires a different version of Python, you do not need to switch to a different environment manager, because Anaconda is also an environment manager. With just a few commands, you can set up a totally separate environment to run that different version of Python, while continuing to run your usual version of Python in your normal environment. | 1 |
Javascript | Javascript | update doc example to current router | 3d87f631e2b3924b584a6b4ceb2a1a02b4142908 | <ide><path>packages/ember-application/lib/system/application.js
<ide> var Application = Ember.Application = Ember.Namespace.extend(Ember.DeferredMixin
<ide> This allows application developers to do:
<ide>
<ide> ```javascript
<del> App = Ember.Application.create();
<add> var App = Ember.Application.create();
<ide>
<del> App.Router.map(function(match) {
<del> match("/").to("index");
<add> App.Router.map(function() {
<add> this.resource('posts');
<ide> });
<ide> ```
<ide> | 1 |
PHP | PHP | update doc block | 75d239b354ea8c12a22bfb180e809e0c171b6bfb | <ide><path>src/Illuminate/Support/Facades/Validator.php
<ide> namespace Illuminate\Support\Facades;
<ide>
<ide> /**
<del> * @method static \Illuminate\Contracts\Validation\Validator make(array $data, array $rules, array $messages = [], array $customAttributes = [])
<add> * @method static \Illuminate\Validation\Validator make(array $data, array $rules, array $messages = [], array $customAttributes = [])
<ide> * @method static void extend(string $rule, \Closure | string $extension, string $message = null)
<ide> * @method static void extendImplicit(string $rule, \Closure | string $extension, string $message = null)
<ide> * @method static void replacer(string $rule, \Closure | string $replacer) | 1 |
Ruby | Ruby | handle non-core kegs without receipts | d274d37263e99193567606ac2b7929bc64dba091 | <ide><path>Library/Homebrew/tab.rb
<ide> def self.for_keg keg
<ide> if path.exist?
<ide> self.from_file path
<ide> else
<del> self.dummy_tab Formula.factory(keg.parent.basename)
<add> begin
<add> self.dummy_tab Formula.factory(keg.parent.basename)
<add> rescue FormulaUnavailableError
<add> Tab.new :used_options => [], :unused_options => []
<add> end
<ide> end
<ide> end
<ide> | 1 |
Ruby | Ruby | allow resource fetching | 809fc87da02749435da3ffd843691b38f2169811 | <ide><path>Library/Homebrew/cmd/fetch.rb
<ide> def fetch
<ide> end
<ide>
<ide> puts "Fetching: #{bucket * ', '}" if bucket.size > 1
<del> bucket.each { |f| fetch_formula(f) }
<add> bucket.each do |f|
<add> fetch_formula(f)
<add> f.resources.each do |r|
<add> fetch_resource(r)
<add> end
<add> end
<ide> end
<ide>
<ide> def already_fetched? f
<ide> f.cached_download.exist?
<ide> end
<ide>
<add> def fetch_resource r
<add> puts "Resource: #{r.name}"
<add> fetch_fetchable r
<add> rescue ChecksumMismatchError => e
<add> Homebrew.failed = true
<add> opoo "Resource #{r.name} reports different #{e.hash_type}: #{e.expected}"
<add> end
<add>
<ide> def fetch_formula f
<add> fetch_fetchable f
<add> rescue ChecksumMismatchError => e
<add> Homebrew.failed = true
<add> opoo "Formula reports different #{e.hash_type}: #{e.expected}"
<add> end
<add>
<add> private
<add>
<add> def fetch_fetchable f
<ide> f.cached_download.rmtree if already_fetched?(f) && ARGV.force?
<ide> download = f.fetch
<ide>
<ide> def fetch_formula f
<ide> puts Checksum::TYPES.map { |t| "#{t.to_s.upcase}: #{download.send(t)}" }
<ide>
<ide> f.verify_download_integrity(download)
<del> rescue ChecksumMismatchError => e
<del> Homebrew.failed = true
<del> opoo "Formula reports different #{e.hash_type}: #{e.expected}"
<ide> end
<ide> end | 1 |
Javascript | Javascript | make the docs of time and date picker in order | 54da3926d9bac84a9880b062510ab5892bb2b2be | <ide><path>website/server/extractDocs.js
<ide> var apis = [
<ide> '../Libraries/Storage/AsyncStorage.js',
<ide> '../Libraries/Utilities/BackAndroid.android.js',
<ide> '../Libraries/CameraRoll/CameraRoll.js',
<add> '../Libraries/Components/DatePickerAndroid/DatePickerAndroid.android.js',
<ide> '../Libraries/Utilities/Dimensions.js',
<ide> '../Libraries/Components/Intent/IntentAndroid.android.js',
<ide> '../Libraries/Interaction/InteractionManager.js',
<ide> var apis = [
<ide> '../Libraries/PushNotificationIOS/PushNotificationIOS.js',
<ide> '../Libraries/Components/StatusBar/StatusBarIOS.ios.js',
<ide> '../Libraries/StyleSheet/StyleSheet.js',
<add> '../Libraries/Components/TimePickerAndroid/TimePickerAndroid.android.js',
<ide> '../Libraries/Components/ToastAndroid/ToastAndroid.android.js',
<ide> '../Libraries/Vibration/VibrationIOS.ios.js',
<del> '../Libraries/Components/TimePickerAndroid/TimePickerAndroid.android.js',
<del> '../Libraries/Components/DatePickerAndroid/DatePickerAndroid.android.js',
<ide> ];
<ide>
<ide> var stylesWithPermalink = [ | 1 |
Python | Python | move pad_sequences to /utils | d022b8c987e470c33f8aac86606f6f7369f64feb | <ide><path>keras/distribute/keras_correctness_test_base.py
<ide> from keras.distribute.strategy_combinations import multi_worker_mirrored_strategies
<ide> from keras.distribute.strategy_combinations import strategies_minus_tpu
<ide> from keras.mixed_precision import policy
<del>from keras.preprocessing import sequence
<add>from keras.utils import data_utils
<ide>
<ide> _RANDOM_SEED = 1337
<ide> _EVAL_STEPS = 20
<ide> def get_data(self,
<ide> labels.append(label)
<ide> features.append(word_ids)
<ide>
<del> features = sequence.pad_sequences(
<add> features = data_utils.pad_sequences(
<ide> features, maxlen=max_words)
<ide> x_train = np.asarray(features, dtype=np.float32)
<ide> y_train = np.asarray(labels, dtype=np.int32).reshape((count, 1))
<ide><path>keras/preprocessing/sequence.py
<ide> def to_json(self, **kwargs):
<ide> return json.dumps(timeseries_generator_config, **kwargs)
<ide>
<ide>
<del>@keras_export('keras.utils.pad_sequences',
<del> 'keras.preprocessing.sequence.pad_sequences')
<del>def pad_sequences(sequences, maxlen=None, dtype='int32',
<del> padding='pre', truncating='pre', value=0.):
<del> """Pads sequences to the same length.
<del>
<del> This function transforms a list (of length `num_samples`)
<del> of sequences (lists of integers)
<del> into a 2D Numpy array of shape `(num_samples, num_timesteps)`.
<del> `num_timesteps` is either the `maxlen` argument if provided,
<del> or the length of the longest sequence in the list.
<del>
<del> Sequences that are shorter than `num_timesteps`
<del> are padded with `value` until they are `num_timesteps` long.
<del>
<del> Sequences longer than `num_timesteps` are truncated
<del> so that they fit the desired length.
<del>
<del> The position where padding or truncation happens is determined by
<del> the arguments `padding` and `truncating`, respectively.
<del> Pre-padding or removing values from the beginning of the sequence is the
<del> default.
<del>
<del> >>> sequence = [[1], [2, 3], [4, 5, 6]]
<del> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence)
<del> array([[0, 0, 1],
<del> [0, 2, 3],
<del> [4, 5, 6]], dtype=int32)
<del>
<del> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1)
<del> array([[-1, -1, 1],
<del> [-1, 2, 3],
<del> [ 4, 5, 6]], dtype=int32)
<del>
<del> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post')
<del> array([[1, 0, 0],
<del> [2, 3, 0],
<del> [4, 5, 6]], dtype=int32)
<del>
<del> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2)
<del> array([[0, 1],
<del> [2, 3],
<del> [5, 6]], dtype=int32)
<del>
<del> Args:
<del> sequences: List of sequences (each sequence is a list of integers).
<del> maxlen: Optional Int, maximum length of all sequences. If not provided,
<del> sequences will be padded to the length of the longest individual
<del> sequence.
<del> dtype: (Optional, defaults to int32). Type of the output sequences.
<del> To pad sequences with variable length strings, you can use `object`.
<del> padding: String, 'pre' or 'post' (optional, defaults to 'pre'):
<del> pad either before or after each sequence.
<del> truncating: String, 'pre' or 'post' (optional, defaults to 'pre'):
<del> remove values from sequences larger than
<del> `maxlen`, either at the beginning or at the end of the sequences.
<del> value: Float or String, padding value. (Optional, defaults to 0.)
<del>
<del> Returns:
<del> Numpy array with shape `(len(sequences), maxlen)`
<del>
<del> Raises:
<del> ValueError: In case of invalid values for `truncating` or `padding`,
<del> or in case of invalid shape for a `sequences` entry.
<del> """
<del> if not hasattr(sequences, '__len__'):
<del> raise ValueError('`sequences` must be iterable.')
<del> num_samples = len(sequences)
<del>
<del> lengths = []
<del> sample_shape = ()
<del> flag = True
<del>
<del> # take the sample shape from the first non empty sequence
<del> # checking for consistency in the main loop below.
<del>
<del> for x in sequences:
<del> try:
<del> lengths.append(len(x))
<del> if flag and len(x):
<del> sample_shape = np.asarray(x).shape[1:]
<del> flag = False
<del> except TypeError as e:
<del> raise ValueError('`sequences` must be a list of iterables. '
<del> 'Found non-iterable: ' + str(x)) from e
<del>
<del> if maxlen is None:
<del> maxlen = np.max(lengths)
<del>
<del> is_dtype_str = np.issubdtype(dtype, np.str_) or np.issubdtype(
<del> dtype, np.unicode_)
<del> if isinstance(value, str) and dtype != object and not is_dtype_str:
<del> raise ValueError(
<del> "`dtype` {} is not compatible with `value`'s type: {}\n"
<del> 'You should set `dtype=object` for variable length strings.'.format(
<del> dtype, type(value)))
<del>
<del> x = np.full((num_samples, maxlen) + sample_shape, value, dtype=dtype)
<del> for idx, s in enumerate(sequences):
<del> if not len(s): # pylint: disable=g-explicit-length-test
<del> continue # empty list/array was found
<del> if truncating == 'pre':
<del> trunc = s[-maxlen:] # pylint: disable=invalid-unary-operand-type
<del> elif truncating == 'post':
<del> trunc = s[:maxlen]
<del> else:
<del> raise ValueError('Truncating type "%s" ' 'not understood' % truncating)
<del>
<del> # check `trunc` has expected shape
<del> trunc = np.asarray(trunc, dtype=dtype)
<del> if trunc.shape[1:] != sample_shape:
<del> raise ValueError('Shape of sample %s of sequence at position %s '
<del> 'is different from expected shape %s' %
<del> (trunc.shape[1:], idx, sample_shape))
<del>
<del> if padding == 'post':
<del> x[idx, :len(trunc)] = trunc
<del> elif padding == 'pre':
<del> x[idx, -len(trunc):] = trunc
<del> else:
<del> raise ValueError('Padding type "%s" not understood' % padding)
<del> return x
<del>
<del>
<ide> @keras_export('keras.preprocessing.sequence.make_sampling_table')
<ide> def make_sampling_table(size, sampling_factor=1e-5):
<ide> """Generates a word rank-based probabilistic sampling table.
<ide><path>keras/preprocessing/sequence_test.py
<ide>
<ide> class TestSequence(tf.test.TestCase):
<ide>
<del> def test_pad_sequences(self):
<del> a = [[1], [1, 2], [1, 2, 3]]
<del>
<del> # test padding
<del> b = sequence.pad_sequences(a, maxlen=3, padding='pre')
<del> self.assertAllClose(b, [[0, 0, 1], [0, 1, 2], [1, 2, 3]])
<del> b = sequence.pad_sequences(a, maxlen=3, padding='post')
<del> self.assertAllClose(b, [[1, 0, 0], [1, 2, 0], [1, 2, 3]])
<del>
<del> # test truncating
<del> b = sequence.pad_sequences(a, maxlen=2, truncating='pre')
<del> self.assertAllClose(b, [[0, 1], [1, 2], [2, 3]])
<del> b = sequence.pad_sequences(a, maxlen=2, truncating='post')
<del> self.assertAllClose(b, [[0, 1], [1, 2], [1, 2]])
<del>
<del> # test value
<del> b = sequence.pad_sequences(a, maxlen=3, value=1)
<del> self.assertAllClose(b, [[1, 1, 1], [1, 1, 2], [1, 2, 3]])
<del>
<del> def test_pad_sequences_str(self):
<del> a = [['1'], ['1', '2'], ['1', '2', '3']]
<del>
<del> # test padding
<del> b = sequence.pad_sequences(
<del> a, maxlen=3, padding='pre', value='pad', dtype=object)
<del> self.assertAllEqual(
<del> b, [['pad', 'pad', '1'], ['pad', '1', '2'], ['1', '2', '3']])
<del> b = sequence.pad_sequences(
<del> a, maxlen=3, padding='post', value='pad', dtype='<U3')
<del> self.assertAllEqual(
<del> b, [['1', 'pad', 'pad'], ['1', '2', 'pad'], ['1', '2', '3']])
<del>
<del> # test truncating
<del> b = sequence.pad_sequences(
<del> a, maxlen=2, truncating='pre', value='pad', dtype=object)
<del> self.assertAllEqual(b, [['pad', '1'], ['1', '2'], ['2', '3']])
<del> b = sequence.pad_sequences(
<del> a, maxlen=2, truncating='post', value='pad', dtype='<U3')
<del> self.assertAllEqual(b, [['pad', '1'], ['1', '2'], ['1', '2']])
<del>
<del> with self.assertRaisesRegex(ValueError,
<del> '`dtype` int32 is not compatible with '):
<del> sequence.pad_sequences(a, maxlen=2, truncating='post', value='pad')
<del>
<del> def test_pad_sequences_vector(self):
<del> a = [[[1, 1]], [[2, 1], [2, 2]], [[3, 1], [3, 2], [3, 3]]]
<del>
<del> # test padding
<del> b = sequence.pad_sequences(a, maxlen=3, padding='pre')
<del> self.assertAllClose(b, [[[0, 0], [0, 0], [1, 1]], [[0, 0], [2, 1], [2, 2]],
<del> [[3, 1], [3, 2], [3, 3]]])
<del> b = sequence.pad_sequences(a, maxlen=3, padding='post')
<del> self.assertAllClose(b, [[[1, 1], [0, 0], [0, 0]], [[2, 1], [2, 2], [0, 0]],
<del> [[3, 1], [3, 2], [3, 3]]])
<del>
<del> # test truncating
<del> b = sequence.pad_sequences(a, maxlen=2, truncating='pre')
<del> self.assertAllClose(b,
<del> [[[0, 0], [1, 1]], [[2, 1], [2, 2]], [[3, 2], [3, 3]]])
<del>
<del> b = sequence.pad_sequences(a, maxlen=2, truncating='post')
<del> self.assertAllClose(b,
<del> [[[0, 0], [1, 1]], [[2, 1], [2, 2]], [[3, 1], [3, 2]]])
<del>
<del> # test value
<del> b = sequence.pad_sequences(a, maxlen=3, value=1)
<del> self.assertAllClose(b, [[[1, 1], [1, 1], [1, 1]], [[1, 1], [2, 1], [2, 2]],
<del> [[3, 1], [3, 2], [3, 3]]])
<del>
<ide> def test_make_sampling_table(self):
<ide> a = sequence.make_sampling_table(3)
<ide> self.assertAllClose(
<ide><path>keras/utils/__init__.py
<ide> from keras.utils.data_utils import GeneratorEnqueuer
<ide> from keras.utils.data_utils import OrderedEnqueuer
<ide> from keras.utils.data_utils import SequenceEnqueuer
<add>from keras.utils.data_utils import pad_sequences
<ide>
<ide> # Serialization related
<ide> from keras.utils.generic_utils import custom_object_scope
<ide><path>keras/utils/data_utils.py
<ide> def get(self):
<ide> 'Keras requires a thread-safe generator when '
<ide> '`use_multiprocessing=False, workers > 1`. ')
<ide> raise e
<add>
<add>
<add>@keras_export('keras.utils.pad_sequences',
<add> 'keras.preprocessing.sequence.pad_sequences')
<add>def pad_sequences(sequences, maxlen=None, dtype='int32',
<add> padding='pre', truncating='pre', value=0.):
<add> """Pads sequences to the same length.
<add>
<add> This function transforms a list (of length `num_samples`)
<add> of sequences (lists of integers)
<add> into a 2D Numpy array of shape `(num_samples, num_timesteps)`.
<add> `num_timesteps` is either the `maxlen` argument if provided,
<add> or the length of the longest sequence in the list.
<add>
<add> Sequences that are shorter than `num_timesteps`
<add> are padded with `value` until they are `num_timesteps` long.
<add>
<add> Sequences longer than `num_timesteps` are truncated
<add> so that they fit the desired length.
<add>
<add> The position where padding or truncation happens is determined by
<add> the arguments `padding` and `truncating`, respectively.
<add> Pre-padding or removing values from the beginning of the sequence is the
<add> default.
<add>
<add> >>> sequence = [[1], [2, 3], [4, 5, 6]]
<add> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence)
<add> array([[0, 0, 1],
<add> [0, 2, 3],
<add> [4, 5, 6]], dtype=int32)
<add>
<add> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, value=-1)
<add> array([[-1, -1, 1],
<add> [-1, 2, 3],
<add> [ 4, 5, 6]], dtype=int32)
<add>
<add> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, padding='post')
<add> array([[1, 0, 0],
<add> [2, 3, 0],
<add> [4, 5, 6]], dtype=int32)
<add>
<add> >>> tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=2)
<add> array([[0, 1],
<add> [2, 3],
<add> [5, 6]], dtype=int32)
<add>
<add> Args:
<add> sequences: List of sequences (each sequence is a list of integers).
<add> maxlen: Optional Int, maximum length of all sequences. If not provided,
<add> sequences will be padded to the length of the longest individual
<add> sequence.
<add> dtype: (Optional, defaults to `"int32"`). Type of the output sequences.
<add> To pad sequences with variable length strings, you can use `object`.
<add> padding: String, "pre" or "post" (optional, defaults to `"pre"`):
<add> pad either before or after each sequence.
<add> truncating: String, "pre" or "post" (optional, defaults to `"pre"`):
<add> remove values from sequences larger than
<add> `maxlen`, either at the beginning or at the end of the sequences.
<add> value: Float or String, padding value. (Optional, defaults to 0.)
<add>
<add> Returns:
<add> Numpy array with shape `(len(sequences), maxlen)`
<add>
<add> Raises:
<add> ValueError: In case of invalid values for `truncating` or `padding`,
<add> or in case of invalid shape for a `sequences` entry.
<add> """
<add> if not hasattr(sequences, '__len__'):
<add> raise ValueError('`sequences` must be iterable.')
<add> num_samples = len(sequences)
<add>
<add> lengths = []
<add> sample_shape = ()
<add> flag = True
<add>
<add> # take the sample shape from the first non empty sequence
<add> # checking for consistency in the main loop below.
<add>
<add> for x in sequences:
<add> try:
<add> lengths.append(len(x))
<add> if flag and len(x):
<add> sample_shape = np.asarray(x).shape[1:]
<add> flag = False
<add> except TypeError as e:
<add> raise ValueError('`sequences` must be a list of iterables. '
<add> f'Found non-iterable: {str(x)}') from e
<add>
<add> if maxlen is None:
<add> maxlen = np.max(lengths)
<add>
<add> is_dtype_str = np.issubdtype(dtype, np.str_) or np.issubdtype(
<add> dtype, np.unicode_)
<add> if isinstance(value, str) and dtype != object and not is_dtype_str:
<add> raise ValueError(
<add> f'`dtype` {dtype} is not compatible with `value`\'s type: '
<add> f'{type(value)}\nYou should set `dtype=object` for variable length '
<add> 'strings.')
<add>
<add> x = np.full((num_samples, maxlen) + sample_shape, value, dtype=dtype)
<add> for idx, s in enumerate(sequences):
<add> if not len(s): # pylint: disable=g-explicit-length-test
<add> continue # empty list/array was found
<add> if truncating == 'pre':
<add> trunc = s[-maxlen:] # pylint: disable=invalid-unary-operand-type
<add> elif truncating == 'post':
<add> trunc = s[:maxlen]
<add> else:
<add> raise ValueError(f'Truncating type "{truncating}" not understood')
<add>
<add> # check `trunc` has expected shape
<add> trunc = np.asarray(trunc, dtype=dtype)
<add> if trunc.shape[1:] != sample_shape:
<add> raise ValueError(f'Shape of sample {trunc.shape[1:]} of sequence at '
<add> f'position {idx} is different from expected shape '
<add> f'{sample_shape}')
<add>
<add> if padding == 'post':
<add> x[idx, :len(trunc)] = trunc
<add> elif padding == 'pre':
<add> x[idx, -len(trunc):] = trunc
<add> else:
<add> raise ValueError(f'Padding type "{padding}" not understood')
<add> return x
<ide><path>keras/utils/data_utils_test.py
<ide> def test_on_epoch_end_threads(self):
<ide> enqueuer.stop()
<ide>
<ide>
<add>class PadSequencesTest(tf.test.TestCase):
<add>
<add> def test_pad_sequences(self):
<add> a = [[1], [1, 2], [1, 2, 3]]
<add>
<add> # test padding
<add> b = data_utils.pad_sequences(a, maxlen=3, padding='pre')
<add> self.assertAllClose(b, [[0, 0, 1], [0, 1, 2], [1, 2, 3]])
<add> b = data_utils.pad_sequences(a, maxlen=3, padding='post')
<add> self.assertAllClose(b, [[1, 0, 0], [1, 2, 0], [1, 2, 3]])
<add>
<add> # test truncating
<add> b = data_utils.pad_sequences(a, maxlen=2, truncating='pre')
<add> self.assertAllClose(b, [[0, 1], [1, 2], [2, 3]])
<add> b = data_utils.pad_sequences(a, maxlen=2, truncating='post')
<add> self.assertAllClose(b, [[0, 1], [1, 2], [1, 2]])
<add>
<add> # test value
<add> b = data_utils.pad_sequences(a, maxlen=3, value=1)
<add> self.assertAllClose(b, [[1, 1, 1], [1, 1, 2], [1, 2, 3]])
<add>
<add> def test_pad_sequences_str(self):
<add> a = [['1'], ['1', '2'], ['1', '2', '3']]
<add>
<add> # test padding
<add> b = data_utils.pad_sequences(
<add> a, maxlen=3, padding='pre', value='pad', dtype=object)
<add> self.assertAllEqual(
<add> b, [['pad', 'pad', '1'], ['pad', '1', '2'], ['1', '2', '3']])
<add> b = data_utils.pad_sequences(
<add> a, maxlen=3, padding='post', value='pad', dtype='<U3')
<add> self.assertAllEqual(
<add> b, [['1', 'pad', 'pad'], ['1', '2', 'pad'], ['1', '2', '3']])
<add>
<add> # test truncating
<add> b = data_utils.pad_sequences(
<add> a, maxlen=2, truncating='pre', value='pad', dtype=object)
<add> self.assertAllEqual(b, [['pad', '1'], ['1', '2'], ['2', '3']])
<add> b = data_utils.pad_sequences(
<add> a, maxlen=2, truncating='post', value='pad', dtype='<U3')
<add> self.assertAllEqual(b, [['pad', '1'], ['1', '2'], ['1', '2']])
<add>
<add> with self.assertRaisesRegex(ValueError,
<add> '`dtype` int32 is not compatible with '):
<add> data_utils.pad_sequences(a, maxlen=2, truncating='post', value='pad')
<add>
<add> def test_pad_sequences_vector(self):
<add> a = [[[1, 1]], [[2, 1], [2, 2]], [[3, 1], [3, 2], [3, 3]]]
<add>
<add> # test padding
<add> b = data_utils.pad_sequences(a, maxlen=3, padding='pre')
<add> self.assertAllClose(b, [[[0, 0], [0, 0], [1, 1]], [[0, 0], [2, 1], [2, 2]],
<add> [[3, 1], [3, 2], [3, 3]]])
<add> b = data_utils.pad_sequences(a, maxlen=3, padding='post')
<add> self.assertAllClose(b, [[[1, 1], [0, 0], [0, 0]], [[2, 1], [2, 2], [0, 0]],
<add> [[3, 1], [3, 2], [3, 3]]])
<add>
<add> # test truncating
<add> b = data_utils.pad_sequences(a, maxlen=2, truncating='pre')
<add> self.assertAllClose(b,
<add> [[[0, 0], [1, 1]], [[2, 1], [2, 2]], [[3, 2], [3, 3]]])
<add>
<add> b = data_utils.pad_sequences(a, maxlen=2, truncating='post')
<add> self.assertAllClose(b,
<add> [[[0, 0], [1, 1]], [[2, 1], [2, 2]], [[3, 1], [3, 2]]])
<add>
<add> # test value
<add> b = data_utils.pad_sequences(a, maxlen=3, value=1)
<add> self.assertAllClose(b, [[[1, 1], [1, 1], [1, 1]], [[1, 1], [2, 1], [2, 2]],
<add> [[3, 1], [3, 2], [3, 3]]])
<add>
<add>
<ide> if __name__ == '__main__':
<ide> # Bazel sets these environment variables to very long paths.
<ide> # Tempfile uses them to create long paths, and in turn multiprocessing | 6 |
Text | Text | consolidate collaborator status in governance | 41d5666aaa37aa43fc0c15220c8d2fa7929abd39 | <ide><path>GOVERNANCE.md
<ide> Typical activities of a Collaborator include:
<ide> * Participation in working groups
<ide> * Merging pull requests
<ide>
<del>The TSC periodically reviews the Collaborator list to identify inactive
<del>Collaborators. Past Collaborators are typically given _Emeritus_ status. Emeriti
<del>may request that the TSC restore them to active status.
<add>The TSC can remove inactive Collaborators or provide them with _Emeritus_
<add>status. Emeriti may request that the TSC restore them to active status.
<ide>
<ide> ## Technical Steering Committee
<ide> | 1 |
Text | Text | update some literal translations in index.md | 8559cfb06ce3990efee321da3f4a0138a7ed32ce | <ide><path>guide/spanish/agile/test-driven-development/index.md
<ide> localeTitle: Desarrollo guiado por pruebas
<ide>
<ide> Test Driven Development (TDD) es uno de los enfoques de desarrollo de software ágil. Se basa en el concepto de que
<ide>
<del>> debe escribir un caso de prueba para su código incluso antes de escribir el código
<add>> se debe escribir un caso de prueba para su código incluso antes de escribir el código
<ide>
<ide> Aquí, primero escribimos la prueba unitaria y luego escribimos el código para completar la prueba con éxito. Esto ahorra tiempo para realizar la prueba unitaria y otras pruebas similares, ya que estamos avanzando con la iteración exitosa de la prueba, lo que nos lleva a lograr una modularidad en el código. Básicamente se compone de 4 pasos.
<ide>
<ide> Cada nueva característica de su sistema debe seguir los pasos anteriores.
<ide>
<ide> #### Más información:
<ide>
<del>[Introducción de](http://agiledata.org/essays/tdd.html) Agile Data [a TDD](http://agiledata.org/essays/tdd.html)
<add>[Introducción de Agile Data a TDD](http://agiledata.org/essays/tdd.html)
<ide>
<del>Wiki en [TDD](https://en.wikipedia.org/wiki/Test-driven_development)
<add>Wiki sobre [TDD](https://en.wikipedia.org/wiki/Test-driven_development)
<ide>
<del>Martin Fowler [es TDD muerto?](https://martinfowler.com/articles/is-tdd-dead/) (Una serie de conversaciones grabadas sobre el tema).
<add>Martin Fowler: [¿Está muerto el TDD?](https://martinfowler.com/articles/is-tdd-dead/) (Una serie de conversaciones grabadas sobre el tema).
<ide>
<ide> Libro de Kent Beck [Test Driven Development by Example](https://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530)
<ide>
<del>[Los ciclos de TDD de](http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html) tío Bob
<ide>\ No newline at end of file
<add>[Los ciclos de TDD de](http://blog.cleancoder.com/uncle-bob/2014/12/17/TheCyclesOfTDD.html) tío Bob | 1 |
Javascript | Javascript | create a registry per applicationinstance | f0aa38781b835e3463fe251f875452ba7a292113 | <ide><path>packages/ember-application/lib/system/application-instance.js
<ide> import { set } from "ember-metal/property_set";
<ide> import EmberObject from "ember-runtime/system/object";
<ide> import run from "ember-metal/run_loop";
<add>import Registry from 'container/registry';
<ide>
<ide> /**
<ide> The `ApplicationInstance` encapsulates all of the stateful aspects of a
<ide> export default EmberObject.extend({
<ide>
<ide> @property {Ember.Registry} registry
<ide> */
<add> applicationRegistry: null,
<add>
<add> /**
<add> The registry for this application instance. It should use the
<add> `applicationRegistry` as a fallback.
<add>
<add> @property {Ember.Registry} registry
<add> */
<ide> registry: null,
<ide>
<ide> /**
<ide> export default EmberObject.extend({
<ide>
<ide> init: function() {
<ide> this._super.apply(this, arguments);
<add>
<add> // Create a per-instance registry that will use the application's registry
<add> // as a fallback for resolving registrations.
<add> this.registry = new Registry({
<add> fallback: this.applicationRegistry,
<add> resolver: this.applicationRegistry.resolver
<add> });
<add> this.registry.normalizeFullName = this.applicationRegistry.normalizeFullName;
<add> this.registry.makeToString = this.applicationRegistry.makeToString;
<add>
<add> // Create a per-instance container from the instance's registry
<ide> this.container = this.registry.container();
<ide>
<del> // Currently, we cannot put the application instance into the container
<del> // because the registry is "sealed" by this point and we do not yet
<del> // support container-specific subregistries. This code puts the instance
<del> // directly into the container's cache so that lookups work, but it
<del> // would obviously be much better to support registering on the container
<del> // directly.
<add> // Register this instance in the per-instance registry.
<ide> //
<del> // Why do we need to put the instance in the container in the first place?
<add> // Why do we need to register the instance in the first place?
<ide> // Because we need a good way for the root route (a.k.a ApplicationRoute)
<ide> // to notify us when it has created the root-most view. That view is then
<ide> // appended to the rootElement, in the case of apps, to the fixture harness
<ide> // in tests, or rendered to a string in the case of FastBoot.
<del> this.container.cache['-application-instance:main'] = this;
<add> this.registry.register('-application-instance:main', this, { instantiate: false });
<ide> },
<ide>
<ide> /**
<ide><path>packages/ember-application/lib/system/application.js
<ide> var Application = Namespace.extend(DeferredMixin, {
<ide> return ApplicationInstance.create({
<ide> customEvents: get(this, 'customEvents'),
<ide> rootElement: get(this, 'rootElement'),
<del> registry: this.registry
<add> applicationRegistry: this.registry
<ide> });
<ide> },
<ide> | 2 |
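The commit above gives each `ApplicationInstance` its own registry that resolves locally first and falls back to the application-wide registry on a miss. That fallback-chain pattern can be sketched independently of Ember — the `Registry` class below is a simplified stand-in for illustration, not Ember's actual API:

```javascript
// Minimal registry with a fallback chain, echoing the per-instance
// registry in the commit above. Not Ember's real implementation.
class Registry {
  constructor(fallback = null) {
    this.fallback = fallback;       // parent registry consulted on a miss
    this.registrations = new Map(); // local name -> value
  }
  register(name, value) {
    this.registrations.set(name, value);
  }
  resolve(name) {
    if (this.registrations.has(name)) {
      return this.registrations.get(name); // local registration wins
    }
    return this.fallback ? this.fallback.resolve(name) : undefined;
  }
}

const appRegistry = new Registry();
appRegistry.register('service:store', 'shared-store');

// Each instance shadows the application's registry without mutating it.
const instanceRegistry = new Registry(appRegistry);
instanceRegistry.register('-application-instance:main', 'instance-1');

console.log(instanceRegistry.resolve('service:store'));              // shared-store (fallback)
console.log(instanceRegistry.resolve('-application-instance:main')); // instance-1 (local)
```

Note that the instance-only registration never leaks upward: `appRegistry` still has no `-application-instance:main` entry, which is the point of the per-instance registry.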
Ruby | Ruby | fix an issue with duplicate preloaded records | 4b455e4f32b7bc39bc43b607c0044684d452b9b2 | <ide><path>activerecord/lib/active_record/associations/preloader.rb
<ide> def preloaders_for_hash(association, records, scope, polymorphic_parent)
<ide> association.flat_map { |parent, child|
<ide> grouped_records(parent, records, polymorphic_parent).flat_map do |reflection, reflection_records|
<ide> loaders = preloaders_for_reflection(reflection, reflection_records, scope)
<del> recs = loaders.flat_map(&:preloaded_records)
<add> recs = loaders.flat_map(&:preloaded_records).uniq
<ide> child_polymorphic_parent = reflection && reflection.options[:polymorphic]
<ide> loaders.concat Array.wrap(child).flat_map { |assoc|
<ide> preloaders_on assoc, recs, scope, child_polymorphic_parent
<ide><path>activerecord/test/cases/associations/cascaded_eager_loading_test.rb
<ide> def test_eager_association_loading_with_cascaded_interdependent_one_level_and_tw
<ide> assert_equal 3, authors[1].posts.size
<ide> assert_equal 3, authors[0].posts.collect { |post| post.categorizations.size }.inject(0) { |sum, i| sum + i }
<ide> end
<add>
<add> # Regression test for https://github.com/rails/rails/issues/37446
<add> def test_preloaded_records_are_not_duplicated
<add> author = Author.first
<add> expected = Post.where(author: author)
<add> .includes(author: :first_posts).map { |post| post.author.first_posts.size }
<add> actual = author.posts
<add> .includes(author: :first_posts).map { |post| post.author.first_posts.size }
<add>
<add> assert_equal expected, actual
<add> end
<ide> end | 2 |
Text | Text | fix typos in functional api guide | 6911fa2cba77da7e873e27a3448cadf0dce59b1e | <ide><path>docs/templates/getting-started/complete_guide_to_the_keras_functional_api.md
<ide> x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
<ide> # containing information about the entire sequence
<ide> lstm_out = LSTM(32)(x)
<ide> # here we insert the auxiliary loss, allowing the LSTM and Embedding layer
<del># to be trained smoothly even the main loss will be much higher in the model
<add># to be trained smoothly even though the main loss will be much higher in the model
<ide> auxiliary_loss = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)
<ide>
<ide> # at this point we feed into the model our auxiliary input data
<ide> vqa_model = Model(input=[image_input, question_input], output=output)
<ide> # the next stage would be training this model on actual data.
<ide> ```
<ide>
<del>### Video question answering model.
<add>### Video question answering model
<ide>
<ide> Now that we have trained our image QA model, we can quickly turn it into a video QA model. With appropriate training, you will be able to show it a short video (e.g. 100-frame human action) and ask a natural language question about the video (e.g. "what sport is the boy playing?" -> "footbal").
<ide> | 1 |
Javascript | Javascript | use template literals | 8baaa25aec5a052c32d63f1f7f77bbd0e4a78796 | <ide><path>lib/dns.js
<ide> function errnoException(err, syscall, hostname) {
<ide> }
<ide> var ex = null;
<ide> if (typeof err === 'string') { // c-ares error code.
<del> ex = new Error(syscall + ' ' + err + (hostname ? ' ' + hostname : ''));
<add> const errHost = hostname ? ' ' + hostname : '';
<add> ex = new Error(`${syscall} ${err}${errHost}`);
<ide> ex.code = err;
<ide> ex.errno = err;
<ide> ex.syscall = syscall;
<ide> exports.resolve = function(hostname, type_, callback_) {
<ide> if (typeof resolver === 'function') {
<ide> return resolver(hostname, callback);
<ide> } else {
<del> throw new Error('Unknown type "' + type_ + '"');
<add> throw new Error(`Unknown type "${type_}"`);
<ide> }
<ide> };
<ide>
<ide> exports.setServers = function(servers) {
<ide> if (ver)
<ide> return newSet.push([ver, s]);
<ide>
<del> throw new Error('IP address is not properly formatted: ' + serv);
<add> throw new Error(`IP address is not properly formatted: ${serv}`);
<ide> });
<ide>
<ide> var r = cares.setServers(newSet);
<ide> exports.setServers = function(servers) {
<ide> cares.setServers(orig.join(','));
<ide>
<ide> var err = cares.strerror(r);
<del> throw new Error('c-ares failed to set servers: "' + err +
<del> '" [' + servers + ']');
<add> throw new Error(`c-ares failed to set servers: "${err}" [${servers}]`);
<ide> }
<ide> };
<ide> | 1 |
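The refactor above is mechanical: each string-concatenation chain becomes a single interpolated template literal. A standalone before/after illustration (a hypothetical error-message builder modeled on the diff, not the actual `lib/dns.js` code):

```javascript
// Old style: string concatenation with a conditional suffix.
function errnoMessageConcat(syscall, err, hostname) {
  return syscall + ' ' + err + (hostname ? ' ' + hostname : '');
}

// New style: template literal, as introduced by the commit above.
function errnoMessageTemplate(syscall, err, hostname) {
  const errHost = hostname ? ` ${hostname}` : '';
  return `${syscall} ${err}${errHost}`;
}

console.log(errnoMessageConcat('getaddrinfo', 'ENOTFOUND', 'example.org'));
console.log(errnoMessageTemplate('getaddrinfo', 'ENOTFOUND', 'example.org'));
// both print: getaddrinfo ENOTFOUND example.org
```

The two forms produce identical strings for every input; the template version simply makes the final shape of the message visible at a glance.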
Javascript | Javascript | gate a test | 0e100ed00fb52cfd107db1d1081ef18fe4b9167f | <ide><path>packages/react-dom/src/__tests__/ReactDOMFizzServer-test.js
<ide> describe('ReactDOMFizzServer', () => {
<ide> );
<ide> });
<ide>
<add> // @gate experimental
<ide> it('client renders a boundary if it errors before finishing the fallback', async () => {
<ide> function App({isClient}) {
<ide> return ( | 1 |
Javascript | Javascript | fix key name for function keys with modifiers | 17ebd464ccdf12a4fb46334ff5d7a71f0f2e70a9 | <ide><path>lib/internal/readline/utils.js
<ide> function* emitKeys(stream) {
<ide>
<ide> // Parse the key itself
<ide> switch (code) {
<del> /* xterm/gnome ESC O letter */
<add> /* xterm/gnome ESC [ letter (with modifier) */
<add> case '[P': key.name = 'f1'; break;
<add> case '[Q': key.name = 'f2'; break;
<add> case '[R': key.name = 'f3'; break;
<add> case '[S': key.name = 'f4'; break;
<add>
<add> /* xterm/gnome ESC O letter (without modifier) */
<ide> case 'OP': key.name = 'f1'; break;
<ide> case 'OQ': key.name = 'f2'; break;
<ide> case 'OR': key.name = 'f3'; break;
<ide> function* emitKeys(stream) {
<ide> } else if (ch === '\r') {
<ide> // carriage return
<ide> key.name = 'return';
<add> key.meta = escaped;
<ide> } else if (ch === '\n') {
<ide> // Enter, should have been called linefeed
<ide> key.name = 'enter';
<add> key.meta = escaped;
<ide> } else if (ch === '\t') {
<ide> // tab
<ide> key.name = 'tab';
<add> key.meta = escaped;
<ide> } else if (ch === '\b' || ch === '\x7f') {
<ide> // backspace or ctrl+h
<ide> key.name = 'backspace';
<ide><path>test/parallel/test-readline-keys.js
<ide> addTest('io.JS', [
<ide> ]);
<ide>
<ide> // Named characters
<del>addTest('\n\r\t', [
<add>addTest('\n\r\t\x1b\n\x1b\r\x1b\t', [
<ide> { name: 'enter', sequence: '\n' },
<ide> { name: 'return', sequence: '\r' },
<ide> { name: 'tab', sequence: '\t' },
<add> { name: 'enter', sequence: '\x1b\n', meta: true },
<add> { name: 'return', sequence: '\x1b\r', meta: true },
<add> { name: 'tab', sequence: '\x1b\t', meta: true },
<ide> ]);
<ide>
<ide> // Space and backspace
<ide> addTest('a\x1baA\x1bA', [
<ide> { name: 'a', sequence: '\x1bA', meta: true, shift: true },
<ide> ]);
<ide>
<add>// xterm/gnome ESC [ letter (with modifiers)
<add>/* eslint-disable max-len */
<add>addTest('\x1b[2P\x1b[3P\x1b[4P\x1b[5P\x1b[6P\x1b[7P\x1b[8P\x1b[3Q\x1b[8Q\x1b[3R\x1b[8R\x1b[3S\x1b[8S', [
<add> { name: 'f1', sequence: '\x1b[2P', code: '[P', shift: true, meta: false, ctrl: false },
<add> { name: 'f1', sequence: '\x1b[3P', code: '[P', shift: false, meta: true, ctrl: false },
<add> { name: 'f1', sequence: '\x1b[4P', code: '[P', shift: true, meta: true, ctrl: false },
<add> { name: 'f1', sequence: '\x1b[5P', code: '[P', shift: false, meta: false, ctrl: true },
<add> { name: 'f1', sequence: '\x1b[6P', code: '[P', shift: true, meta: false, ctrl: true },
<add> { name: 'f1', sequence: '\x1b[7P', code: '[P', shift: false, meta: true, ctrl: true },
<add> { name: 'f1', sequence: '\x1b[8P', code: '[P', shift: true, meta: true, ctrl: true },
<add> { name: 'f2', sequence: '\x1b[3Q', code: '[Q', meta: true },
<add> { name: 'f2', sequence: '\x1b[8Q', code: '[Q', shift: true, meta: true, ctrl: true },
<add> { name: 'f3', sequence: '\x1b[3R', code: '[R', meta: true },
<add> { name: 'f3', sequence: '\x1b[8R', code: '[R', shift: true, meta: true, ctrl: true },
<add> { name: 'f4', sequence: '\x1b[3S', code: '[S', meta: true },
<add> { name: 'f4', sequence: '\x1b[8S', code: '[S', shift: true, meta: true, ctrl: true },
<add>]);
<add>/* eslint-enable max-len */
<add>
<ide> // xterm/gnome ESC O letter
<ide> addTest('\x1bOP\x1bOQ\x1bOR\x1bOS', [
<ide> { name: 'f1', sequence: '\x1bOP', code: 'OP' }, | 2 |
Ruby | Ruby | move root method to top of routes file | 43fa48e5aa981bf86635fa31ace9eede24e93826
<ide> <%= app_const %>.routes.draw do
<ide> # The priority is based upon order of creation:
<ide> # first created -> highest priority.
<add>
<add> # You can have the root of your site routed with "root"
<add> # just remember to delete public/index.html.
<add> # root :to => 'welcome#index'
<ide>
<ide> # Sample of regular route:
<ide> # get 'products/:id' => 'catalog#view'
<ide> # resources :products
<ide> # end
<ide>
<del> # You can have the root of your site routed with "root"
<del> # just remember to delete public/index.html.
<del> # root :to => 'welcome#index'
<ide>
<ide> # See how all your routes lay out with "rake routes"
<ide> end
<ide>\ No newline at end of file | 1 |
Javascript | Javascript | use chrome 51 and ff 47 in unit tests | ca812b0aebfc62287c920c07977be2ad00692d53 | <ide><path>karma-shared.conf.js
<ide> module.exports = function(config, specificOptions) {
<ide> 'SL_Chrome': {
<ide> base: 'SauceLabs',
<ide> browserName: 'chrome',
<del> version: '47'
<add> version: '51'
<ide> },
<ide> 'SL_Firefox': {
<ide> base: 'SauceLabs',
<ide> browserName: 'firefox',
<del> version: '43'
<add> version: '47'
<ide> },
<ide> 'SL_Safari_8': {
<ide> base: 'SauceLabs', | 1 |
Mixed | Ruby | remove extra decrement of transaction level | e0d59e6219c752d8cffc6b78c2240755f5728922 | <ide><path>activerecord/CHANGELOG.md
<add>* Remove extra decrement of transaction deep level.
<add>
<add> Fixes: #4566
<add>
<add> *Paul Nikitochkin*
<add>
<ide> * Reset @column_defaults when assigning `locking_column`.
<ide> We had a potential problem. For example:
<ide>
<ide><path>activerecord/lib/active_record/transactions.rb
<ide> def rolledback!(force_restore_state = false) #:nodoc:
<ide> run_callbacks :rollback
<ide> ensure
<ide> restore_transaction_record_state(force_restore_state)
<add> clear_transaction_record_state
<ide> end
<ide>
<ide> # Add the record to the current transaction so that the +after_rollback+ and +after_commit+ callbacks
<ide> def clear_transaction_record_state #:nodoc:
<ide> # Restore the new record state and id of a record that was previously saved by a call to save_record_state.
<ide> def restore_transaction_record_state(force = false) #:nodoc:
<ide> unless @_start_transaction_state.empty?
<del> @_start_transaction_state[:level] = (@_start_transaction_state[:level] || 0) - 1
<del> if @_start_transaction_state[:level] < 1 || force
<add> transaction_level = (@_start_transaction_state[:level] || 0) - 1
<add> if transaction_level < 1 || force
<ide> restore_state = @_start_transaction_state
<ide> was_frozen = restore_state[:frozen?]
<ide> @attributes = @attributes.dup if @attributes.frozen?
<ide> def restore_transaction_record_state(force = false) #:nodoc:
<ide> @attributes_cache.delete(self.class.primary_key)
<ide> end
<ide> @attributes.freeze if was_frozen
<del> @_start_transaction_state.clear
<ide> end
<ide> end
<ide> end
<ide><path>activerecord/test/cases/transactions_test.rb
<ide> def @first.after_save_for_transaction
<ide> assert !Topic.find(1).approved?
<ide> end
<ide>
<add> def test_raising_exception_in_nested_transaction_restore_state_in_save
<add> topic = Topic.new
<add>
<add> def topic.after_save_for_transaction
<add> raise 'Make the transaction rollback'
<add> end
<add>
<add> assert_raises(RuntimeError) do
<add> Topic.transaction { topic.save }
<add> end
<add>
<add> assert topic.new_record?, "#{topic.inspect} should be new record"
<add> end
<add>
<ide> def test_update_should_rollback_on_failure
<ide> author = Author.find(1)
<ide> posts_count = author.posts.size | 3 |
Python | Python | indicate python 3.9 support in setup.py | d8a1f5fd6c2a1df68582bd7923c57f6847574bf3 | <ide><path>setup.py
<ide> def run(self):
<ide> 'Programming Language :: Python :: 3.6',
<ide> 'Programming Language :: Python :: 3.7',
<ide> 'Programming Language :: Python :: 3.8',
<add> 'Programming Language :: Python :: 3.9',
<ide> 'Programming Language :: Python :: Implementation :: CPython',
<ide> 'Programming Language :: Python :: Implementation :: PyPy'
<ide> ] | 1 |
PHP | PHP | use a class constant for easier extensibility | ecb3d33462db0eb30d672a6491d29d03cb03cc04
<ide> protected function _getUser()
<ide> */
<ide> public function redirectUrl($url = null)
<ide> {
<del> $redirectUrl = $this->request->query(self::QUERY_STRING_REDIRECT);
<add> $redirectUrl = $this->request->query(static::QUERY_STRING_REDIRECT);
<ide> if ($redirectUrl && (substr($redirectUrl, 0, 1) !== '/')) {
<ide> $redirectUrl = null;
<ide> } | 1 |
PHP | PHP | use spread operators | b68603797a57e89f6bd486129e9043503c960e6c | <ide><path>src/Illuminate/Auth/Access/Gate.php
<ide> protected function buildAbilityCallback($callback)
<ide> return function () use ($callback) {
<ide> list($class, $method) = explode('@', $callback);
<ide>
<del> return call_user_func_array([$this->resolvePolicy($class), $method], func_get_args());
<add> return $this->resolvePolicy($class)->{$method}(...func_get_args());
<ide> };
<ide> }
<ide>
<ide> protected function raw($ability, $arguments = [])
<ide> */
<ide> protected function callAuthCallback($user, $ability, array $arguments)
<ide> {
<del> $callback = $this->resolveAuthCallback(
<del> $user, $ability, $arguments
<del> );
<add> $callback = $this->resolveAuthCallback($user, $ability, $arguments);
<ide>
<del> return call_user_func_array(
<del> $callback, array_merge([$user], $arguments)
<del> );
<add> return $callback($user, ...$arguments);
<ide> }
<ide>
<ide> /**
<ide> protected function callBeforeCallbacks($user, $ability, array $arguments)
<ide> $arguments = array_merge([$user, $ability], [$arguments]);
<ide>
<ide> foreach ($this->beforeCallbacks as $before) {
<del> if (! is_null($result = call_user_func_array($before, $arguments))) {
<add> if (! is_null($result = $before(...$arguments))) {
<ide> return $result;
<ide> }
<ide> }
<ide> protected function callAfterCallbacks($user, $ability, array $arguments, $result
<ide> $arguments = array_merge([$user, $ability, $result], [$arguments]);
<ide>
<ide> foreach ($this->afterCallbacks as $after) {
<del> call_user_func_array($after, $arguments);
<add> $after(...$arguments);
<ide> }
<ide> }
<ide>
<ide> protected function resolvePolicyCallback($user, $ability, array $arguments)
<ide> return function () use ($user, $ability, $arguments) {
<ide> $instance = $this->getPolicyFor($arguments[0]);
<ide>
<add> // If we receive a non-null result from the before method, we will return it
<add> // as the final result. This will allow developers to override the checks
<add> // in the policy to return a result for all rules defined in the class.
<ide> if (method_exists($instance, 'before')) {
<del> // We will prepend the user and ability onto the arguments so that the before
<del> // callback can determine which ability is being called. Then we will call
<del> // into the policy before methods with the arguments and get the result.
<del> $beforeArguments = array_merge([$user, $ability], $arguments);
<del>
<del> $result = call_user_func_array(
<del> [$instance, 'before'], $beforeArguments
<del> );
<del>
<del> // If we received a non-null result from the before method, we will return it
<del> // as the result of a check. This allows developers to override the checks
<del> // in the policy and return a result for all rules defined in the class.
<del> if (! is_null($result)) {
<add> if (! is_null($result = $instance->before($user, $ability, ...$arguments))) {
<ide> return $result;
<ide> }
<ide> }
<ide> protected function resolvePolicyCallback($user, $ability, array $arguments)
<ide> return false;
<ide> }
<ide>
<del> return call_user_func_array(
<del> [$instance, $ability], array_merge([$user], $arguments)
<del> );
<add> return $instance->{$ability}($user, ...$arguments);
<ide> };
<ide> }
<ide>
<ide><path>src/Illuminate/Auth/AuthManager.php
<ide> public function provider($name, Closure $callback)
<ide> */
<ide> public function __call($method, $parameters)
<ide> {
<del> return call_user_func_array([$this->guard(), $method], $parameters);
<add> return $this->guard()->{$method}(...$parameters);
<ide> }
<ide> }
<ide><path>src/Illuminate/Auth/Passwords/PasswordBroker.php
<ide> public function reset(array $credentials, Closure $callback)
<ide> // Once we have called this callback, we will remove this token row from the
<ide> // table and return the response from this callback so the user gets sent
<ide> // to the destination given by the developers from the callback return.
<del> call_user_func($callback, $user, $pass);
<add> $callback($user, $pass);
<ide>
<ide> $this->tokens->delete($credentials['token']);
<ide>
<ide><path>src/Illuminate/Auth/Passwords/PasswordBrokerManager.php
<ide> public function setDefaultDriver($name)
<ide> */
<ide> public function __call($method, $parameters)
<ide> {
<del> return call_user_func_array([$this->broker(), $method], $parameters);
<add> return $this->broker()->{$method}(...$parameters);
<ide> }
<ide> } | 4 |
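The PHP change above swaps `call_user_func_array($callback, $args)` for the spread form `$callback(...$args)`. JavaScript has the exact same pair — `fn.apply(null, args)` versus `fn(...args)` — and the two are equivalent in result (illustrative sketch only, not the Laravel code):

```javascript
function sum(a, b, c) { return a + b + c; }
const args = [1, 2, 3];

// Old style: spread an argument array via apply
// (the PHP analogue is call_user_func_array).
const viaApply = sum.apply(null, args);

// New style: the spread operator
// (the PHP analogue is $fn(...$args)).
const viaSpread = sum(...args);

console.log(viaApply, viaSpread); // 6 6
```

Beyond brevity, the spread form keeps the call site looking like an ordinary invocation, which is why the commit can also replace `[$instance, $ability]` callables with direct `$instance->{$ability}(...)` calls.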
Mixed | Text | fix tests for guide in root | 164d900e964d9776a5c9018458616d228d98cbcf | <ide><path>CONTRIBUTING.md
<ide> You can help us:
<ide>
<ide> Guide articles help you get a quick understanding of a technology concept. These are short, plain-English explanations that you can read before going on to more in-depth resources.
<ide>
<del>You can find an [example article about HTML Anchor Elements here](https://github.com/freeCodeCamp/freeCodeCamp/blob/master/client/src/pages/guide/english/html/elements/a-tag/index.md).
<add>You can find an [example article about HTML Anchor Elements here](https://github.com/freeCodeCamp/freeCodeCamp/blob/master/guide/english/html/elements/a-tag/index.md).
<ide>
<ide> **What can I write an article about?**
<ide>
<ide><path>client/plugins/fcc-create-nav-data/create-navigation-node.js
<ide> const commonREs = require('../../utils/regEx');
<ide> const readDir = require('../../utils/readDir');
<ide>
<ide> const { isAStubRE } = commonREs;
<del>const pagesDir = path.resolve(__dirname, '../../src/pages/guide/english/');
<add>const pagesDir = path.resolve(__dirname, '../../../guide/english/');
<ide>
<ide> function withGuidePrefix(str) {
<ide> return `/guide${str}`;
<ide><path>client/plugins/fcc-create-nav-data/create-navigation-node.test.js
<ide> describe('fcc-create-nav-data', () => {
<ide> },
<ide> fileAbsolutePath: path.resolve(
<ide> __dirname,
<del> '../../src/pages/guide/english/php/functions/files/file-writing/index.md'
<add> '../../../guide/english/php/functions/files/file-writing/index.md'
<ide> )
<ide> };
<ide>
<ide><path>docs/how-to-work-on-guide-articles.md
<ide> Watch the video demonstration or follow the steps below it:
<ide>
<ide> 
<ide>
<del>1. Go into the **"pages"** folder (located in [`client/src/pages/guide`](/client/src/pages/guide)) and find the article stub you'd like to write or edit.
<add>1. Go into the **"pages"** folder (located in [`guide`](/guide)) and find the article stub you'd like to write or edit.
<ide>
<ide> > All stubs will be in an index.md file
<ide>
<ide><path>docs/portuguese/how-to-work-on-guide-articles.md
<ide> Há duas maneiras para propor uma mudança num repositório, depois de editares
<ide>
<ide> Vê a demonstração em vídeo ou segue os passos abaixo:
<ide>
<del>**[A FAZER]** Atualizar a gravação do GIF.
<add>**[A FAZER]** Atualizar a gravação do GIF.
<ide>
<ide> 
<ide>
<del>1. Ir à pasta **"pages"** (localizada no [`client/src/pages/guide`](/client/src/pages/guide)) e encontrar o artigo que gostarias de escrever ou editar.
<add>1. Ir à pasta **"pages"** (localizada no [`guide`](/guide)) e encontrar o artigo que gostarias de escrever ou editar.
<ide>
<ide> > Todos os <i>stubs</i> estarão num ficheiro index.md
<ide>
<ide> Reviewers farão todos os esforços para resolver estes conflitos e combinar os
<ide>
<ide> Se um <i>pull requests</i> não é perfeito, o revisor poderá:
<ide>
<del>- pedir mudanças ao contribuidor e adicionar a label *changes requested*
<add>- pedir mudanças ao contribuidor e adicionar a label *changes requested*
<ide> - resolver problemas menores e fazer um <i>commit> no topo do PR
<ide>
<ide> #### Travis CI Build
<ide> It seems that similar changes have already been accepted earlier for this articl
<ide>
<ide> If you feel you have more to add, please feel free to open up a new PR.
<ide>
<del>Thanks again! 😊
<add>Thanks again! 😊
<ide>
<ide> > Hey @username
<ide>
<ide><path>docs/spanish/how-to-work-on-guide-articles.md
<ide> Mira este vídeo de demostración o sigue los siguientes pasos:
<ide>
<ide> 
<ide>
<del>1. Ve a la carpets **"páginas"** (situado en [`client/src/pages/guide`](/client/src/pages/guide)) donde encontrarás el artículo raiz que quieras editar.
<add>1. Ve a la carpets **"páginas"** (situado en [`guide`](/guide)) donde encontrarás el artículo raiz que quieras editar.
<ide>
<ide> > Todas las raíces estarán en un archivo index.md
<ide>
<ide><path>guide/english/python/setting-up-python-web-framework-django-and-flask/index.md
<ide> In case these assumptions are untrue, you might want to take a look at this <a>w
<ide>
<ide> But it would be unfair if we completely ignore the <a href='http://docs.python-guide.org/en/latest/starting/which-python/#the-state-of-python-2-vs-3' target='_blank' rel='nofollow'>Python 2 vs Python 3</a> debate.
<ide>
<del>If you do not have Python already installed check out our <a href='https://github.com/freeCodeCamp/freeCodeCamp/blob/master/client/src/pages/guide/english/python/installing-and-using-python-3/index.md'>Python Installation Guide</a>
<add>If you do not have Python already installed check out our <a href='https://github.com/freeCodeCamp/freeCodeCamp/blob/master/guide/english/python/installing-and-using-python-3/index.md'>Python Installation Guide</a>
<ide>
<ide> ## Virtual environment
<ide> | 7 |
PHP | PHP | add name() method to view contract | 42cca80f61eeae40396bc7c47ff694f70ebf90a1 | <ide><path>src/Illuminate/Contracts/View/View.php
<ide>
<ide> interface View extends Renderable {
<ide>
<add> /**
<add> * Get the name of the view.
<add> *
<add> * @return string
<add> */
<add> public function name();
<add>
<ide> /**
<ide> * Add a piece of data to the view.
<ide> *
<ide><path>src/Illuminate/View/View.php
<ide> public function getEngine()
<ide> return $this->engine;
<ide> }
<ide>
<add> /**
<add> * Get the name of the view.
<add> *
<add> * @return string
<add> */
<add> public function name()
<add> {
<add> return $this->getName();
<add> }
<add>
<ide> /**
<ide> * Get the name of the view.
<ide> * | 2 |
Text | Text | fix broken markdown for link | 0f01226c2a0acb445f55139a4e7f93bc0906080f | <ide><path>API.md
<ide> Human-readable reference marks for scales.
<ide> * [*axis*.tickSizeInner](https://github.com/d3/d3-axis/blob/v2.1.0/README.md#axis_tickSizeInner) - set the size of inner ticks.
<ide> * [*axis*.tickSizeOuter](https://github.com/d3/d3-axis/blob/v2.1.0/README.md#axis_tickSizeOuter) - set the size of outer (extent) ticks.
<ide> * [*axis*.tickPadding](https://github.com/d3/d3-axis/blob/v2.1.0/README.md#axis_tickPadding) - set the padding between ticks and labels.
<del>* [*axis*.offset]()https://github.com/d3/d3-axis/blob/v2.1.0/README.md#axis_offset) - set the pixel offset for crisp edges.
<add>* [*axis*.offset](https://github.com/d3/d3-axis/blob/v2.1.0/README.md#axis_offset) - set the pixel offset for crisp edges.
<ide>
<ide> ## [Brushes (d3-brush)](https://github.com/d3/d3-brush/tree/v2.0.0)
<ide> | 1 |
Python | Python | remove incompatible tests | 090ac0d1387a9f370f18afbc25314eeec2568d0c | <ide><path>tests/keras/layers/test_simplernn.py
<del>import theano
<del>import unittest
<del>from numpy.testing import assert_allclose
<del>import numpy as np
<del>from keras.layers.recurrent import SimpleRNN
<del>from mock import Mock
<del>
<del>floatX = theano.config.floatX
<del>
<del>__author__ = "Jeff Ye"
<del>
<del>
<del>class TestSimpleRNN(unittest.TestCase):
<del> left_padding_data = np.array(
<del> [
<del> [ # batch 1
<del> [0], [1], [2], [3]
<del> ],
<del> [ # batch 2
<del> [0], [0], [1], [2]
<del> ]
<del> ], dtype=floatX)
<del> left_padding_mask = np.array( # n_sample x n_time
<del> [
<del> [ # batch 1
<del> 0, 1, 1, 1
<del> ],
<del> [ # batch 2
<del> 0, 0, 1, 1
<del> ]
<del> ], dtype=np.int32)
<del>
<del> def setUp(self):
<del> W = np.array([[1]], dtype=floatX)
<del> U = np.array([[1]], dtype=floatX)
<del> b = np.array([0], dtype=floatX)
<del> weights = [W, U, b]
<del> self.forward = SimpleRNN(output_dim=1, activation='linear', weights=weights)
<del> self.backward = SimpleRNN(output_dim=1, activation='linear', weights=weights)
<del>
<del> previous = Mock()
<del> previous.nb_input = 1
<del> previous.nb_output = 1
<del> previous.output_shape = self.left_padding_data.shape
<del> previous.get_output_mask = Mock()
<del> self.previous = previous
<del>
<del> def test_left_padding(self):
<del> forward = self.forward
<del> forward.go_backwards = False
<del> forward.return_sequences = True
<del> self.previous.get_output.return_value = theano.shared(value=self.left_padding_data)
<del> self.previous.get_output_mask.return_value = theano.shared(value=self.left_padding_mask)
<del> forward.set_previous(self.previous)
<del> np.testing.assert_allclose(forward.get_output().eval(),
<del> np.array([
<del> [[0], [1], [3], [6]],
<del> [[0], [0], [1], [3]]]))
<del>
<del> backward = self.backward
<del> backward.go_backwards = True
<del> backward.return_sequences = True
<del> self.previous.get_output.return_value = theano.shared(value=self.left_padding_data)
<del> self.previous.get_output_mask.return_value = theano.shared(value=self.left_padding_mask)
<del> backward.set_previous(self.previous)
<del> np.testing.assert_allclose(backward.get_output().eval(),
<del> np.array([
<del> [[3], [5], [6], [0]],
<del> [[2], [3], [0], [0]]])) | 1 |
Javascript | Javascript | ignore class methods on comment elements | 64fd2c421ed582c16812d164a8a6f031b8e66287 | <ide><path>src/jqLite.js
<ide> function JQLiteData(element, key, value) {
<ide> }
<ide>
<ide> function JQLiteHasClass(element, selector) {
<add> if (!element.getAttribute) return false;
<ide> return ((" " + (element.getAttribute('class') || '') + " ").replace(/[\n\t]/g, " ").
<ide> indexOf( " " + selector + " " ) > -1);
<ide> }
<ide>
<ide> function JQLiteRemoveClass(element, cssClasses) {
<del> if (cssClasses) {
<add> if (cssClasses && element.setAttribute) {
<ide> forEach(cssClasses.split(' '), function(cssClass) {
<ide> element.setAttribute('class', trim(
<ide> (" " + (element.getAttribute('class') || '') + " ")
<ide> function JQLiteRemoveClass(element, cssClasses) {
<ide> }
<ide>
<ide> function JQLiteAddClass(element, cssClasses) {
<del> if (cssClasses) {
<add> if (cssClasses && element.setAttribute) {
<ide> var existingClasses = (' ' + (element.getAttribute('class') || '') + ' ')
<ide> .replace(/[\n\t]/g, " ");
<ide>
<ide><path>test/jqLiteSpec.js
<ide> describe('jqLite', function() {
<ide> });
<ide>
<ide>
<add> it('should ignore comment elements', function() {
<add> var comment = jqLite(document.createComment('something'));
<add>
<add> comment.addClass('whatever');
<add> comment.hasClass('whatever');
<add> comment.toggleClass('whatever');
<add> comment.removeClass('whatever');
<add> });
<add>
<add>
<ide> describe('hasClass', function() {
<ide> it('should check class', function() {
<ide> var selector = jqLite([a, b]); | 2 |
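The jqLite fix above is a feature-detection guard: comment nodes lack `getAttribute`/`setAttribute`, so the class helpers bail out early instead of throwing. The same defensive pattern can be shown with plain stub objects, no DOM required (a simplified sketch, not Angular's actual jqLite code):

```javascript
// Guard class manipulation against nodes without the Element API,
// mirroring the jqLite fix above (comment nodes have no setAttribute).
function addClass(node, cssClass) {
  if (!node.setAttribute || !node.getAttribute) return; // e.g. comment nodes
  const current = ' ' + (node.getAttribute('class') || '') + ' ';
  if (current.indexOf(' ' + cssClass + ' ') === -1) {
    node.setAttribute('class', (current + cssClass).trim());
  }
}

// Stub element and stub comment node for the sketch.
const el = {
  attrs: {},
  getAttribute(k) { return this.attrs[k] || null; },
  setAttribute(k, v) { this.attrs[k] = v; },
};
const comment = {}; // comment nodes expose neither method

addClass(el, 'active');
addClass(comment, 'active'); // silently ignored instead of throwing
console.log(el.attrs.class);  // active
```

The guard makes the helpers safe to call on any node in a mixed collection, which is exactly what the commit's test exercises by running every class method against `document.createComment(...)`.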
Java | Java | fix exception message about producible media types | e4fcad9f936ba492f28ec5f0421eea4b3f76f8aa | <ide><path>spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/annotation/AbstractMessageConverterMethodProcessor.java
<ide> protected <T> void writeWithMessageConverters(T returnValue,
<ide> }
<ide> }
<ide> if (compatibleMediaTypes.isEmpty()) {
<del> throw new HttpMediaTypeNotAcceptableException(allSupportedMediaTypes);
<add> throw new HttpMediaTypeNotAcceptableException(producibleMediaTypes);
<ide> }
<ide>
<ide> List<MediaType> mediaTypes = new ArrayList<MediaType>(compatibleMediaTypes); | 1 |
Text | Text | remove cii badge in readme | b5444301f5207187c4e8f104868f66980e7fe430 | <ide><path>README.md
<ide> <img alt="Node.js" src="https://nodejs.org/static/images/logo-light.svg" width="400"/>
<ide> </a>
<ide> </p>
<del><p align="center">
<del> <a title="CII Best Practices" href="https://bestpractices.coreinfrastructure.org/projects/29"><img src="https://bestpractices.coreinfrastructure.org/projects/29/badge"></a>
<del></p>
<ide>
<ide> Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. For
<ide> more information on using Node.js, see the | 1 |
Python | Python | add gold_preproc flag to cli/train | 84bb543e4df265cbce89d78d77b925324f117e31 | <ide><path>spacy/cli/train.py
<ide> resume=("Whether to resume training", "flag", "R", bool),
<ide> no_tagger=("Don't train tagger", "flag", "T", bool),
<ide> no_parser=("Don't train parser", "flag", "P", bool),
<del> no_entities=("Don't train NER", "flag", "N", bool)
<add> no_entities=("Don't train NER", "flag", "N", bool),
<add> gold_preproc=("Use gold preprocessing", "flag", "G", bool),
<ide> )
<ide> def train(cmd, lang, output_dir, train_data, dev_data, n_iter=20, n_sents=0,
<del> use_gpu=-1, resume=False, no_tagger=False, no_parser=False, no_entities=False):
<add> use_gpu=-1, resume=False, no_tagger=False, no_parser=False, no_entities=False,
<add> gold_preproc=False):
<ide> """
<ide> Train a model. Expects data in spaCy's JSON format.
<ide> """
<ide> def train(cmd, lang, output_dir, train_data, dev_data, n_iter=20, n_sents=0,
<ide> i += 20
<ide> with tqdm.tqdm(total=n_train_words, leave=False) as pbar:
<ide> train_docs = corpus.train_docs(nlp, projectivize=True,
<del> gold_preproc=False, max_length=0)
<add> gold_preproc=gold_preproc, max_length=0)
<ide> losses = {}
<ide> for batch in minibatch(train_docs, size=batch_sizes):
<ide> docs, golds = zip(*batch)
<ide> def train(cmd, lang, output_dir, train_data, dev_data, n_iter=20, n_sents=0,
<ide> scorer = nlp_loaded.evaluate(
<ide> corpus.dev_docs(
<ide> nlp_loaded,
<del> gold_preproc=False))
<add> gold_preproc=gold_preproc))
<ide> acc_loc =(output_path / ('model%d' % i) / 'accuracy.json')
<ide> with acc_loc.open('w') as file_:
<ide> file_.write(json_dumps(scorer.scores)) | 1 |
Ruby | Ruby | add deprecation warning to non-dsl fails_with_llvm | d4cfa1c0c5e68c147b790604e99b0da3c91ce689 | <ide><path>Library/Homebrew/compat/compatibility.rb
<ide> def self.resolve_alias name
<ide> # This used to be called in "def install", but should now be used
<ide> # up in the DSL section.
<ide> def fails_with_llvm msg=nil, data=nil
<add> opoo "Calling fails_with_llvm in the install method is deprecated"
<add> puts "Use the fails_with DSL instead."
<ide> FailsWithLLVM.new(msg, data).handle_failure
<ide> end
<ide> | 1 |
Javascript | Javascript | use easier readable variable name in movemodule | fc36ac366d256ec88636c675ecf580b5965e11ca | <ide><path>lib/Chunk.js
<ide> class Chunk {
<ide> });
<ide> }
<ide>
<del> moveModule(module, other) {
<add> moveModule(module, otherChunk) {
<ide> module.removeChunk(this);
<del> module.addChunk(other);
<del> other.addModule(module);
<del> module.rewriteChunkInReasons(this, [other]);
<add> module.addChunk(otherChunk);
<add> otherChunk.addModule(module);
<add> module.rewriteChunkInReasons(this, [otherChunk]);
<ide> }
<ide>
<ide> integrate(other, reason) { | 1 |
Python | Python | fix typo in unsupported version error message | 8001087e9e335140b8063a23916d9c05b615acd4 | <ide><path>setup.py
<ide> your version of Python. If you can't upgrade your pip (or Python), request
<ide> an older version of Django REST Framework:
<ide>
<del> $ python -m pip install "django<3.10"
<add> $ python -m pip install "djangorestframework<3.10"
<ide> """.format(*(REQUIRED_PYTHON + CURRENT_PYTHON)))
<ide> sys.exit(1)
<ide> | 1 |
Javascript | Javascript | add support question covering browser extensions | c37a4ff6638b5809c6c59cfc7916807073e49f98 | <ide><path>client/src/pages/support.js
<ide> const SupportPage = () => {
<ide> </Link>
<ide> .
<ide> </p>
<add> <h4>I cannot pass a challenge, but I think my code is correct</h4>
<add> <p>
<add> Some browser extensions can interfere with challenge tests. If you
<add> are using any, try disabling them and running the tests again. If
<add> the problem remains, click the challenge's 'Ask for Help' button
<add> to post on the forum. You will need to create a forum account if
<add> you don't already have one.
<add> </p>
<ide> <h4>I have a support question that isn't answered here.</h4>
<ide> <p>
<ide> You can ask for help on our forum, and the freeCodeCamp volunteer | 1 |
Ruby | Ruby | run all tests in generic mode | 932e145d9c44ef6c9f828422bf1daef8f10baa6f | <ide><path>Library/Homebrew/dev-cmd/test-bot.rb
<ide> def homebrew
<ide> tests_args_coverage << "--coverage" if ENV["TRAVIS"]
<ide> end
<ide> test "brew", "tests", *tests_args
<del> test "brew", "tests", "--generic", "--only=integration_cmds",
<del> *tests_args
<add> test "brew", "tests", "--generic", *tests_args
<ide> test "brew", "tests", "--no-compat", *tests_args_coverage
<ide> test "brew", "readall", "--syntax"
<ide> # test update from origin/master to current commit. | 1 |
Javascript | Javascript | fix more indents | 5e2f24c8c64d5e861e101ad380ec0ae5390908c4 | <ide><path>examples/js/shaders/DOFMipMapShader.js
<ide> THREE.DOFMipMapShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vUv = uv;",
<del> "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<add> " vUv = uv;",
<add> " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide> THREE.DOFMipMapShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vec4 depth = texture2D( tDepth, vUv );",
<add> " vec4 depth = texture2D( tDepth, vUv );",
<ide>
<del> "float factor = depth.x - focus;",
<add> " float factor = depth.x - focus;",
<ide>
<del> "vec4 col = texture2D( tColor, vUv, 2.0 * maxblur * abs( focus - depth.x ) );",
<add> " vec4 col = texture2D( tColor, vUv, 2.0 * maxblur * abs( focus - depth.x ) );",
<ide>
<del> "gl_FragColor = col;",
<del> "gl_FragColor.a = 1.0;",
<add> " gl_FragColor = col;",
<add> " gl_FragColor.a = 1.0;",
<ide>
<ide> "}"
<ide>
<ide><path>examples/js/shaders/DotScreenShader.js
<ide> THREE.DotScreenShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vUv = uv;",
<del> "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<add> " vUv = uv;",
<add> " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide> THREE.DotScreenShader = {
<ide>
<ide> "float pattern() {",
<ide>
<del> "float s = sin( angle ), c = cos( angle );",
<add> " float s = sin( angle ), c = cos( angle );",
<ide>
<del> "vec2 tex = vUv * tSize - center;",
<del> "vec2 point = vec2( c * tex.x - s * tex.y, s * tex.x + c * tex.y ) * scale;",
<add> " vec2 tex = vUv * tSize - center;",
<add> " vec2 point = vec2( c * tex.x - s * tex.y, s * tex.x + c * tex.y ) * scale;",
<ide>
<del> "return ( sin( point.x ) * sin( point.y ) ) * 4.0;",
<add> " return ( sin( point.x ) * sin( point.y ) ) * 4.0;",
<ide>
<ide> "}",
<ide>
<ide> "void main() {",
<ide>
<del> "vec4 color = texture2D( tDiffuse, vUv );",
<add> " vec4 color = texture2D( tDiffuse, vUv );",
<ide>
<del> "float average = ( color.r + color.g + color.b ) / 3.0;",
<add> " float average = ( color.r + color.g + color.b ) / 3.0;",
<ide>
<del> "gl_FragColor = vec4( vec3( average * 10.0 - 5.0 + pattern() ), color.a );",
<add> " gl_FragColor = vec4( vec3( average * 10.0 - 5.0 + pattern() ), color.a );",
<ide>
<ide> "}"
<ide>
<ide><path>examples/js/shaders/FilmShader.js
<ide> THREE.FilmShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vUv = uv;",
<del> "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<add> " vUv = uv;",
<add> " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide> THREE.FilmShader = {
<ide> "void main() {",
<ide>
<ide> // sample the source
<del> "vec4 cTextureScreen = texture2D( tDiffuse, vUv );",
<add> " vec4 cTextureScreen = texture2D( tDiffuse, vUv );",
<ide>
<ide> // make some noise
<del> "float dx = rand( vUv + time );",
<add> " float dx = rand( vUv + time );",
<ide>
<ide> // add noise
<del> "vec3 cResult = cTextureScreen.rgb + cTextureScreen.rgb * clamp( 0.1 + dx, 0.0, 1.0 );",
<add> " vec3 cResult = cTextureScreen.rgb + cTextureScreen.rgb * clamp( 0.1 + dx, 0.0, 1.0 );",
<ide>
<ide> // get us a sine and cosine
<del> "vec2 sc = vec2( sin( vUv.y * sCount ), cos( vUv.y * sCount ) );",
<add> " vec2 sc = vec2( sin( vUv.y * sCount ), cos( vUv.y * sCount ) );",
<ide>
<ide> // add scanlines
<del> "cResult += cTextureScreen.rgb * vec3( sc.x, sc.y, sc.x ) * sIntensity;",
<add> " cResult += cTextureScreen.rgb * vec3( sc.x, sc.y, sc.x ) * sIntensity;",
<ide>
<ide> // interpolate between source and result by intensity
<del> "cResult = cTextureScreen.rgb + clamp( nIntensity, 0.0,1.0 ) * ( cResult - cTextureScreen.rgb );",
<add> " cResult = cTextureScreen.rgb + clamp( nIntensity, 0.0,1.0 ) * ( cResult - cTextureScreen.rgb );",
<ide>
<ide> // convert to grayscale if desired
<del> "if( grayscale ) {",
<add> " if( grayscale ) {",
<ide>
<del> "cResult = vec3( cResult.r * 0.3 + cResult.g * 0.59 + cResult.b * 0.11 );",
<add> " cResult = vec3( cResult.r * 0.3 + cResult.g * 0.59 + cResult.b * 0.11 );",
<ide>
<del> "}",
<add> " }",
<ide>
<del> "gl_FragColor = vec4( cResult, cTextureScreen.a );",
<add> " gl_FragColor = vec4( cResult, cTextureScreen.a );",
<ide>
<ide> "}"
<ide>
<ide><path>examples/js/shaders/FocusShader.js
<ide> THREE.FocusShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vUv = uv;",
<del> "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<add> " vUv = uv;",
<add> " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide> THREE.FocusShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vec4 color, org, tmp, add;",
<del> "float sample_dist, f;",
<del> "vec2 vin;",
<del> "vec2 uv = vUv;",
<add> " vec4 color, org, tmp, add;",
<add> " float sample_dist, f;",
<add> " vec2 vin;",
<add> " vec2 uv = vUv;",
<ide>
<del> "add = color = org = texture2D( tDiffuse, uv );",
<add> " add = color = org = texture2D( tDiffuse, uv );",
<ide>
<del> "vin = ( uv - vec2( 0.5 ) ) * vec2( 1.4 );",
<del> "sample_dist = dot( vin, vin ) * 2.0;",
<add> " vin = ( uv - vec2( 0.5 ) ) * vec2( 1.4 );",
<add> " sample_dist = dot( vin, vin ) * 2.0;",
<ide>
<del> "f = ( waveFactor * 100.0 + sample_dist ) * sampleDistance * 4.0;",
<add> " f = ( waveFactor * 100.0 + sample_dist ) * sampleDistance * 4.0;",
<ide>
<del> "vec2 sampleSize = vec2( 1.0 / screenWidth, 1.0 / screenHeight ) * vec2( f );",
<add> " vec2 sampleSize = vec2( 1.0 / screenWidth, 1.0 / screenHeight ) * vec2( f );",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( 0.111964, 0.993712 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( 0.111964, 0.993712 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( 0.846724, 0.532032 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( 0.846724, 0.532032 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( 0.943883, -0.330279 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( 0.943883, -0.330279 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( 0.330279, -0.943883 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( 0.330279, -0.943883 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( -0.532032, -0.846724 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( -0.532032, -0.846724 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( -0.993712, -0.111964 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( -0.993712, -0.111964 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "add += tmp = texture2D( tDiffuse, uv + vec2( -0.707107, 0.707107 ) * sampleSize );",
<del> "if( tmp.b < color.b ) color = tmp;",
<add> " add += tmp = texture2D( tDiffuse, uv + vec2( -0.707107, 0.707107 ) * sampleSize );",
<add> " if( tmp.b < color.b ) color = tmp;",
<ide>
<del> "color = color * vec4( 2.0 ) - ( add / vec4( 8.0 ) );",
<del> "color = color + ( add / vec4( 8.0 ) - color ) * ( vec4( 1.0 ) - vec4( sample_dist * 0.5 ) );",
<add> " color = color * vec4( 2.0 ) - ( add / vec4( 8.0 ) );",
<add> " color = color + ( add / vec4( 8.0 ) - color ) * ( vec4( 1.0 ) - vec4( sample_dist * 0.5 ) );",
<ide>
<del> "gl_FragColor = vec4( color.rgb * color.rgb * vec3( 0.95 ) + color.rgb, 1.0 );",
<add> " gl_FragColor = vec4( color.rgb * color.rgb * vec3( 0.95 ) + color.rgb, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide><path>examples/js/shaders/FreiChenShader.js
<ide> THREE.FreiChenShader = {
<ide>
<ide> "void main() {",
<ide>
<del> "vUv = uv;",
<del> "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<add> " vUv = uv;",
<add> " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
<ide>
<ide> "}"
<ide>
<ide> THREE.FreiChenShader = {
<ide> "void main(void)",
<ide> "{",
<ide>
<del> "G[0] = g0,",
<del> "G[1] = g1,",
<del> "G[2] = g2,",
<del> "G[3] = g3,",
<del> "G[4] = g4,",
<del> "G[5] = g5,",
<del> "G[6] = g6,",
<del> "G[7] = g7,",
<del> "G[8] = g8;",
<del>
<del> "mat3 I;",
<del> "float cnv[9];",
<del> "vec3 sample;",
<del>
<del> /* fetch the 3x3 neighbourhood and use the RGB vector's length as intensity value */
<del> "for (float i=0.0; i<3.0; i++) {",
<del> "for (float j=0.0; j<3.0; j++) {",
<del> "sample = texture2D(tDiffuse, vUv + texel * vec2(i-1.0,j-1.0) ).rgb;",
<del> "I[int(i)][int(j)] = length(sample);",
<del> "}",
<del> "}",
<del>
<del> /* calculate the convolution values for all the masks */
<del> "for (int i=0; i<9; i++) {",
<del> "float dp3 = dot(G[i][0], I[0]) + dot(G[i][1], I[1]) + dot(G[i][2], I[2]);",
<del> "cnv[i] = dp3 * dp3;",
<del> "}",
<del>
<del> "float M = (cnv[0] + cnv[1]) + (cnv[2] + cnv[3]);",
<del> "float S = (cnv[4] + cnv[5]) + (cnv[6] + cnv[7]) + (cnv[8] + M);",
<del>
<del> "gl_FragColor = vec4(vec3(sqrt(M/S)), 1.0);",
<add> " G[0] = g0,",
<add> " G[1] = g1,",
<add> " G[2] = g2,",
<add> " G[3] = g3,",
<add> " G[4] = g4,",
<add> " G[5] = g5,",
<add> " G[6] = g6,",
<add> " G[7] = g7,",
<add> " G[8] = g8;",
<add>
<add> " mat3 I;",
<add> " float cnv[9];",
<add> " vec3 sample;",
<add>
<add> /* fetch the 3x3 neighbourhood and use the RGB vector's length as intensity value */
<add> " for (float i=0.0; i<3.0; i++) {",
<add> " for (float j=0.0; j<3.0; j++) {",
<add> " sample = texture2D(tDiffuse, vUv + texel * vec2(i-1.0,j-1.0) ).rgb;",
<add> " I[int(i)][int(j)] = length(sample);",
<add> " }",
<add> " }",
<add>
<add> /* calculate the convolution values for all the masks */
<add> " for (int i=0; i<9; i++) {",
<add> " float dp3 = dot(G[i][0], I[0]) + dot(G[i][1], I[1]) + dot(G[i][2], I[2]);",
<add> " cnv[i] = dp3 * dp3;",
<add> " }",
<add>
<add> " float M = (cnv[0] + cnv[1]) + (cnv[2] + cnv[3]);",
<add> " float S = (cnv[4] + cnv[5]) + (cnv[6] + cnv[7]) + (cnv[8] + M);",
<add>
<add> " gl_FragColor = vec4(vec3(sqrt(M/S)), 1.0);",
<ide> "}"
<ide>
<ide> ].join( "\n" ) | 5 |
Javascript | Javascript | remove more isomorphic www shims | e68e95284bd06c7407eaced9f44ea31caf1047a6 | <ide><path>scripts/rollup/bundles.js
<ide> const bundles = [
<ide> 'prop-types',
<ide> 'prop-types/checkPropTypes',
<ide> ],
<del> fbEntry: 'src/fb/ReactFBEntry',
<add> fbEntry: 'src/isomorphic/ReactEntry',
<ide> hasteName: 'React',
<ide> isRenderer: false,
<ide> label: 'core',
<ide><path>scripts/rollup/shims/facebook-www/PooledClass.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> *
<del> * @providesModule PooledClass
<del> */
<del>
<del>'use strict';
<del>
<del>const {
<del> __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED,
<del>} = require('ReactDOM-fb');
<del>
<del>module.exports = __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.PooledClass;
<ide><path>scripts/rollup/shims/facebook-www/ReactChildren.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> *
<del> * @providesModule ReactChildren
<del> */
<del>
<del>'use strict';
<del>
<del>const {__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED} = require('React');
<del>
<del>// TODO: can't reexport public API because of
<del>// mapIntoWithKeyPrefixInternal() dependency in an addon.
<del>module.exports =
<del> __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactChildren;
<ide><path>scripts/rollup/shims/facebook-www/flattenChildren.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> *
<del> * @providesModule flattenChildren
<del> */
<del>
<del>'use strict';
<del>
<del>const {__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED} = require('React');
<del>
<del>module.exports =
<del> __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.flattenChildren;
<ide><path>scripts/rollup/shims/facebook-www/getComponentName.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> *
<del> * @providesModule getComponentName
<del> */
<del>
<del>'use strict';
<del>
<del>const {__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED} = require('React');
<del>
<del>module.exports =
<del> __SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.getComponentName;
<ide><path>scripts/rollup/shims/facebook-www/onlyChild.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> *
<del> * @providesModule onlyChild
<del> */
<del>'use strict';
<del>
<del>var {Children} = require('React');
<del>
<del>module.exports = Children.only;
<ide><path>src/fb/ReactFBEntry.js
<del>/**
<del> * Copyright 2013-present, Facebook, Inc.
<del> * All rights reserved.
<del> *
<del> * This source code is licensed under the BSD-style license found in the
<del> * LICENSE file in the root directory of this source tree. An additional grant
<del> * of patent rights can be found in the PATENTS file in the same directory.
<del> */
<del>
<del>'use strict';
<del>
<del>var React = require('ReactEntry');
<del>
<del>// Add existing internal dependencies from www codebase.
<del>// The goal is to get rid of these with time or turn them into public APIs.
<del>Object.assign(React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED, {
<del> ReactChildren: require('ReactChildren'),
<del> getComponentName: require('getComponentName'),
<del> flattenChildren: require('flattenChildren'),
<del>});
<del>
<del>module.exports = React; | 7 |
Python | Python | convert indentation from 2 spaces to 4 spaces | 8163baab646c8ad76877ef073a707e88ea096bab | <ide><path>create_pretraining_data.py
<ide>
<ide>
<ide> class TrainingInstance(object):
<del> """A single training instance (sentence pair)."""
<del>
<del> def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels,
<del> is_random_next):
<del> self.tokens = tokens
<del> self.segment_ids = segment_ids
<del> self.is_random_next = is_random_next
<del> self.masked_lm_positions = masked_lm_positions
<del> self.masked_lm_labels = masked_lm_labels
<del>
<del> def __str__(self):
<del> s = ""
<del> s += "tokens: %s\n" % (" ".join(
<del> [tokenization.printable_text(x) for x in self.tokens]))
<del> s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids]))
<del> s += "is_random_next: %s\n" % self.is_random_next
<del> s += "masked_lm_positions: %s\n" % (" ".join(
<del> [str(x) for x in self.masked_lm_positions]))
<del> s += "masked_lm_labels: %s\n" % (" ".join(
<del> [tokenization.printable_text(x) for x in self.masked_lm_labels]))
<del> s += "\n"
<del> return s
<del>
<del> def __repr__(self):
<del> return self.__str__()
<add> """A single training instance (sentence pair)."""
<add>
<add> def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels,
<add> is_random_next):
<add> self.tokens = tokens
<add> self.segment_ids = segment_ids
<add> self.is_random_next = is_random_next
<add> self.masked_lm_positions = masked_lm_positions
<add> self.masked_lm_labels = masked_lm_labels
<add>
<add> def __str__(self):
<add> s = ""
<add> s += "tokens: %s\n" % (" ".join(
<add> [tokenization.printable_text(x) for x in self.tokens]))
<add> s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids]))
<add> s += "is_random_next: %s\n" % self.is_random_next
<add> s += "masked_lm_positions: %s\n" % (" ".join(
<add> [str(x) for x in self.masked_lm_positions]))
<add> s += "masked_lm_labels: %s\n" % (" ".join(
<add> [tokenization.printable_text(x) for x in self.masked_lm_labels]))
<add> s += "\n"
<add> return s
<add>
<add> def __repr__(self):
<add> return self.__str__()
<ide>
<ide>
<ide> def write_instance_to_example_files(instances, tokenizer, max_seq_length,
<ide> max_predictions_per_seq, output_files):
<del> """Create TF example files from `TrainingInstance`s."""
<del> writers = []
<del> for output_file in output_files:
<del> writers.append(tf.python_io.TFRecordWriter(output_file))
<add> """Create TF example files from `TrainingInstance`s."""
<add> writers = []
<add> for output_file in output_files:
<add> writers.append(tf.python_io.TFRecordWriter(output_file))
<ide>
<del> writer_index = 0
<add> writer_index = 0
<ide>
<del> total_written = 0
<del> for (inst_index, instance) in enumerate(instances):
<del> input_ids = tokenizer.convert_tokens_to_ids(instance.tokens)
<del> input_mask = [1] * len(input_ids)
<del> segment_ids = list(instance.segment_ids)
<del> assert len(input_ids) <= max_seq_length
<add> total_written = 0
<add> for (inst_index, instance) in enumerate(instances):
<add> input_ids = tokenizer.convert_tokens_to_ids(instance.tokens)
<add> input_mask = [1] * len(input_ids)
<add> segment_ids = list(instance.segment_ids)
<add> assert len(input_ids) <= max_seq_length
<ide>
<del> while len(input_ids) < max_seq_length:
<del> input_ids.append(0)
<del> input_mask.append(0)
<del> segment_ids.append(0)
<add> while len(input_ids) < max_seq_length:
<add> input_ids.append(0)
<add> input_mask.append(0)
<add> segment_ids.append(0)
<ide>
<del> assert len(input_ids) == max_seq_length
<del> assert len(input_mask) == max_seq_length
<del> assert len(segment_ids) == max_seq_length
<add> assert len(input_ids) == max_seq_length
<add> assert len(input_mask) == max_seq_length
<add> assert len(segment_ids) == max_seq_length
<ide>
<del> masked_lm_positions = list(instance.masked_lm_positions)
<del> masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels)
<del> masked_lm_weights = [1.0] * len(masked_lm_ids)
<add> masked_lm_positions = list(instance.masked_lm_positions)
<add> masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels)
<add> masked_lm_weights = [1.0] * len(masked_lm_ids)
<ide>
<del> while len(masked_lm_positions) < max_predictions_per_seq:
<del> masked_lm_positions.append(0)
<del> masked_lm_ids.append(0)
<del> masked_lm_weights.append(0.0)
<add> while len(masked_lm_positions) < max_predictions_per_seq:
<add> masked_lm_positions.append(0)
<add> masked_lm_ids.append(0)
<add> masked_lm_weights.append(0.0)
<ide>
<del> next_sentence_label = 1 if instance.is_random_next else 0
<add> next_sentence_label = 1 if instance.is_random_next else 0
<ide>
<del> features = collections.OrderedDict()
<del> features["input_ids"] = create_int_feature(input_ids)
<del> features["input_mask"] = create_int_feature(input_mask)
<del> features["segment_ids"] = create_int_feature(segment_ids)
<del> features["masked_lm_positions"] = create_int_feature(masked_lm_positions)
<del> features["masked_lm_ids"] = create_int_feature(masked_lm_ids)
<del> features["masked_lm_weights"] = create_float_feature(masked_lm_weights)
<del> features["next_sentence_labels"] = create_int_feature([next_sentence_label])
<add> features = collections.OrderedDict()
<add> features["input_ids"] = create_int_feature(input_ids)
<add> features["input_mask"] = create_int_feature(input_mask)
<add> features["segment_ids"] = create_int_feature(segment_ids)
<add> features["masked_lm_positions"] = create_int_feature(masked_lm_positions)
<add> features["masked_lm_ids"] = create_int_feature(masked_lm_ids)
<add> features["masked_lm_weights"] = create_float_feature(masked_lm_weights)
<add> features["next_sentence_labels"] = create_int_feature([next_sentence_label])
<ide>
<del> tf_example = tf.train.Example(features=tf.train.Features(feature=features))
<add> tf_example = tf.train.Example(features=tf.train.Features(feature=features))
<ide>
<del> writers[writer_index].write(tf_example.SerializeToString())
<del> writer_index = (writer_index + 1) % len(writers)
<add> writers[writer_index].write(tf_example.SerializeToString())
<add> writer_index = (writer_index + 1) % len(writers)
<ide>
<del> total_written += 1
<add> total_written += 1
<ide>
<del> if inst_index < 20:
<del> tf.logging.info("*** Example ***")
<del> tf.logging.info("tokens: %s" % " ".join(
<del> [tokenization.printable_text(x) for x in instance.tokens]))
<add> if inst_index < 20:
<add> tf.logging.info("*** Example ***")
<add> tf.logging.info("tokens: %s" % " ".join(
<add> [tokenization.printable_text(x) for x in instance.tokens]))
<ide>
<del> for feature_name in features.keys():
<del> feature = features[feature_name]
<del> values = []
<del> if feature.int64_list.value:
<del> values = feature.int64_list.value
<del> elif feature.float_list.value:
<del> values = feature.float_list.value
<del> tf.logging.info(
<del> "%s: %s" % (feature_name, " ".join([str(x) for x in values])))
<add> for feature_name in features.keys():
<add> feature = features[feature_name]
<add> values = []
<add> if feature.int64_list.value:
<add> values = feature.int64_list.value
<add> elif feature.float_list.value:
<add> values = feature.float_list.value
<add> tf.logging.info(
<add> "%s: %s" % (feature_name, " ".join([str(x) for x in values])))
<ide>
<del> for writer in writers:
<del> writer.close()
<add> for writer in writers:
<add> writer.close()
<ide>
<del> tf.logging.info("Wrote %d total instances", total_written)
<add> tf.logging.info("Wrote %d total instances", total_written)
<ide>
<ide>
<ide> def create_int_feature(values):
<del> feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
<del> return feature
<add> feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
<add> return feature
<ide>
<ide>
<ide> def create_float_feature(values):
<del> feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
<del> return feature
<add> feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
<add> return feature
<ide>
<ide>
<ide> def create_training_instances(input_files, tokenizer, max_seq_length,
<ide> dupe_factor, short_seq_prob, masked_lm_prob,
<ide> max_predictions_per_seq, rng):
<del> """Create `TrainingInstance`s from raw text."""
<del> all_documents = [[]]
<del>
<del> # Input file format:
<del> # (1) One sentence per line. These should ideally be actual sentences, not
<del> # entire paragraphs or arbitrary spans of text. (Because we use the
<del> # sentence boundaries for the "next sentence prediction" task).
<del> # (2) Blank lines between documents. Document boundaries are needed so
<del> # that the "next sentence prediction" task doesn't span between documents.
<del> for input_file in input_files:
<del> with tf.gfile.GFile(input_file, "r") as reader:
<del> while True:
<del> line = tokenization.convert_to_unicode(reader.readline())
<del> if not line:
<del> break
<del> line = line.strip()
<del>
<del> # Empty lines are used as document delimiters
<del> if not line:
<del> all_documents.append([])
<del> tokens = tokenizer.tokenize(line)
<del> if tokens:
<del> all_documents[-1].append(tokens)
<del>
<del> # Remove empty documents
<del> all_documents = [x for x in all_documents if x]
<del> rng.shuffle(all_documents)
<del>
<del> vocab_words = list(tokenizer.vocab.keys())
<del> instances = []
<del> for _ in range(dupe_factor):
<del> for document_index in range(len(all_documents)):
<del> instances.extend(
<del> create_instances_from_document(
<del> all_documents, document_index, max_seq_length, short_seq_prob,
<del> masked_lm_prob, max_predictions_per_seq, vocab_words, rng))
<del>
<del> rng.shuffle(instances)
<del> return instances
<add> """Create `TrainingInstance`s from raw text."""
<add> all_documents = [[]]
<add>
<add> # Input file format:
<add> # (1) One sentence per line. These should ideally be actual sentences, not
<add> # entire paragraphs or arbitrary spans of text. (Because we use the
<add> # sentence boundaries for the "next sentence prediction" task).
<add> # (2) Blank lines between documents. Document boundaries are needed so
<add> # that the "next sentence prediction" task doesn't span between documents.
<add> for input_file in input_files:
<add> with tf.gfile.GFile(input_file, "r") as reader:
<add> while True:
<add> line = tokenization.convert_to_unicode(reader.readline())
<add> if not line:
<add> break
<add> line = line.strip()
<add>
<add> # Empty lines are used as document delimiters
<add> if not line:
<add> all_documents.append([])
<add> tokens = tokenizer.tokenize(line)
<add> if tokens:
<add> all_documents[-1].append(tokens)
<add>
<add> # Remove empty documents
<add> all_documents = [x for x in all_documents if x]
<add> rng.shuffle(all_documents)
<add>
<add> vocab_words = list(tokenizer.vocab.keys())
<add> instances = []
<add> for _ in range(dupe_factor):
<add> for document_index in range(len(all_documents)):
<add> instances.extend(
<add> create_instances_from_document(
<add> all_documents, document_index, max_seq_length, short_seq_prob,
<add> masked_lm_prob, max_predictions_per_seq, vocab_words, rng))
<add>
<add> rng.shuffle(instances)
<add> return instances
<ide>
<ide>
<ide> def create_instances_from_document(
<del> all_documents, document_index, max_seq_length, short_seq_prob,
<del> masked_lm_prob, max_predictions_per_seq, vocab_words, rng):
<del> """Creates `TrainingInstance`s for a single document."""
<del> document = all_documents[document_index]
<del>
<del> # Account for [CLS], [SEP], [SEP]
<del> max_num_tokens = max_seq_length - 3
<del>
<del> # We *usually* want to fill up the entire sequence since we are padding
<del> # to `max_seq_length` anyways, so short sequences are generally wasted
<del> # computation. However, we *sometimes*
<del> # (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter
<del> # sequences to minimize the mismatch between pre-training and fine-tuning.
<del> # The `target_seq_length` is just a rough target however, whereas
<del> # `max_seq_length` is a hard limit.
<del> target_seq_length = max_num_tokens
<del> if rng.random() < short_seq_prob:
<del> target_seq_length = rng.randint(2, max_num_tokens)
<del>
<del> # We DON'T just concatenate all of the tokens from a document into a long
<del> # sequence and choose an arbitrary split point because this would make the
<del> # next sentence prediction task too easy. Instead, we split the input into
<del> # segments "A" and "B" based on the actual "sentences" provided by the user
<del> # input.
<del> instances = []
<del> current_chunk = []
<del> current_length = 0
<del> i = 0
<del> while i < len(document):
<del> segment = document[i]
<del> current_chunk.append(segment)
<del> current_length += len(segment)
<del> if i == len(document) - 1 or current_length >= target_seq_length:
<del> if current_chunk:
<del> # `a_end` is how many segments from `current_chunk` go into the `A`
<del> # (first) sentence.
<del> a_end = 1
<del> if len(current_chunk) >= 2:
<del> a_end = rng.randint(1, len(current_chunk) - 1)
<del>
<del> tokens_a = []
<del> for j in range(a_end):
<del> tokens_a.extend(current_chunk[j])
<del>
<del> tokens_b = []
<del> # Random next
<del> is_random_next = False
<del> if len(current_chunk) == 1 or rng.random() < 0.5:
<del> is_random_next = True
<del> target_b_length = target_seq_length - len(tokens_a)
<del>
<del> # This should rarely go for more than one iteration for large
<del> # corpora. However, just to be careful, we try to make sure that
<del> # the random document is not the same as the document
<del> # we're processing.
<del> for _ in range(10):
<del> random_document_index = rng.randint(0, len(all_documents) - 1)
<del> if random_document_index != document_index:
<del> break
<del>
<del> random_document = all_documents[random_document_index]
<del> random_start = rng.randint(0, len(random_document) - 1)
<del> for j in range(random_start, len(random_document)):
<del> tokens_b.extend(random_document[j])
<del> if len(tokens_b) >= target_b_length:
<del> break
<del> # We didn't actually use these segments so we "put them back" so
<del> # they don't go to waste.
<del> num_unused_segments = len(current_chunk) - a_end
<del> i -= num_unused_segments
<del> # Actual next
<del> else:
<del> is_random_next = False
<del> for j in range(a_end, len(current_chunk)):
<del> tokens_b.extend(current_chunk[j])
<del> truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng)
<del>
<del> assert len(tokens_a) >= 1
<del> assert len(tokens_b) >= 1
<del>
<del> tokens = []
<del> segment_ids = []
<del> tokens.append("[CLS]")
<del> segment_ids.append(0)
<del> for token in tokens_a:
<del> tokens.append(token)
<del> segment_ids.append(0)
<del>
<del> tokens.append("[SEP]")
<del> segment_ids.append(0)
<del>
<del> for token in tokens_b:
<del> tokens.append(token)
<del> segment_ids.append(1)
<del> tokens.append("[SEP]")
<del> segment_ids.append(1)
<del>
<del> (tokens, masked_lm_positions,
<del> masked_lm_labels) = create_masked_lm_predictions(
<del> tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng)
<del> instance = TrainingInstance(
<del> tokens=tokens,
<del> segment_ids=segment_ids,
<del> is_random_next=is_random_next,
<del> masked_lm_positions=masked_lm_positions,
<del> masked_lm_labels=masked_lm_labels)
<del> instances.append(instance)
<del> current_chunk = []
<del> current_length = 0
<del> i += 1
<del>
<del> return instances
<add> all_documents, document_index, max_seq_length, short_seq_prob,
<add> masked_lm_prob, max_predictions_per_seq, vocab_words, rng):
<add> """Creates `TrainingInstance`s for a single document."""
<add> document = all_documents[document_index]
<add>
<add> # Account for [CLS], [SEP], [SEP]
<add> max_num_tokens = max_seq_length - 3
<add>
<add> # We *usually* want to fill up the entire sequence since we are padding
<add> # to `max_seq_length` anyways, so short sequences are generally wasted
<add> # computation. However, we *sometimes*
<add> # (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter
<add> # sequences to minimize the mismatch between pre-training and fine-tuning.
<add> # The `target_seq_length` is just a rough target however, whereas
<add> # `max_seq_length` is a hard limit.
<add> target_seq_length = max_num_tokens
<add> if rng.random() < short_seq_prob:
<add> target_seq_length = rng.randint(2, max_num_tokens)
<add>
<add> # We DON'T just concatenate all of the tokens from a document into a long
<add> # sequence and choose an arbitrary split point because this would make the
<add> # next sentence prediction task too easy. Instead, we split the input into
<add> # segments "A" and "B" based on the actual "sentences" provided by the user
<add> # input.
<add> instances = []
<add> current_chunk = []
<add> current_length = 0
<add> i = 0
<add> while i < len(document):
<add> segment = document[i]
<add> current_chunk.append(segment)
<add> current_length += len(segment)
<add> if i == len(document) - 1 or current_length >= target_seq_length:
<add> if current_chunk:
<add> # `a_end` is how many segments from `current_chunk` go into the `A`
<add> # (first) sentence.
<add> a_end = 1
<add> if len(current_chunk) >= 2:
<add> a_end = rng.randint(1, len(current_chunk) - 1)
<add>
<add> tokens_a = []
<add> for j in range(a_end):
<add> tokens_a.extend(current_chunk[j])
<add>
<add> tokens_b = []
<add> # Random next
<add> is_random_next = False
<add> if len(current_chunk) == 1 or rng.random() < 0.5:
<add> is_random_next = True
<add> target_b_length = target_seq_length - len(tokens_a)
<add>
<add> # This should rarely go for more than one iteration for large
<add> # corpora. However, just to be careful, we try to make sure that
<add> # the random document is not the same as the document
<add> # we're processing.
<add> for _ in range(10):
<add> random_document_index = rng.randint(0, len(all_documents) - 1)
<add> if random_document_index != document_index:
<add> break
<add>
<add> random_document = all_documents[random_document_index]
<add> random_start = rng.randint(0, len(random_document) - 1)
<add> for j in range(random_start, len(random_document)):
<add> tokens_b.extend(random_document[j])
<add> if len(tokens_b) >= target_b_length:
<add> break
<add> # We didn't actually use these segments so we "put them back" so
<add> # they don't go to waste.
<add> num_unused_segments = len(current_chunk) - a_end
<add> i -= num_unused_segments
<add> # Actual next
<add> else:
<add> is_random_next = False
<add> for j in range(a_end, len(current_chunk)):
<add> tokens_b.extend(current_chunk[j])
<add> truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng)
<add>
<add> assert len(tokens_a) >= 1
<add> assert len(tokens_b) >= 1
<add>
<add> tokens = []
<add> segment_ids = []
<add> tokens.append("[CLS]")
<add> segment_ids.append(0)
<add> for token in tokens_a:
<add> tokens.append(token)
<add> segment_ids.append(0)
<add>
<add> tokens.append("[SEP]")
<add> segment_ids.append(0)
<add>
<add> for token in tokens_b:
<add> tokens.append(token)
<add> segment_ids.append(1)
<add> tokens.append("[SEP]")
<add> segment_ids.append(1)
<add>
<add> (tokens, masked_lm_positions,
<add> masked_lm_labels) = create_masked_lm_predictions(
<add> tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng)
<add> instance = TrainingInstance(
<add> tokens=tokens,
<add> segment_ids=segment_ids,
<add> is_random_next=is_random_next,
<add> masked_lm_positions=masked_lm_positions,
<add> masked_lm_labels=masked_lm_labels)
<add> instances.append(instance)
<add> current_chunk = []
<add> current_length = 0
<add> i += 1
<add>
<add> return instances
<ide>
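The segment-A/segment-B pairing above can be distilled into a small standalone sketch (the helper name `make_nsp_pair` is hypothetical, not part of this repo): segment A is paired 50/50 with either its true continuation or tokens drawn from a different document.

```python
import random

def make_nsp_pair(doc, other_doc, rng):
    """Build one (tokens_a, tokens_b, is_random_next) triple.

    `doc` and `other_doc` are documents represented as lists of token
    lists ("sentences"). Mirrors the 50/50 actual-next vs. random-next
    choice in create_instances_from_document above.
    """
    # `a_end` is how many leading segments go into sentence A.
    a_end = 1 if len(doc) < 2 else rng.randint(1, len(doc) - 1)
    tokens_a = [t for seg in doc[:a_end] for t in seg]
    if len(doc) == 1 or rng.random() < 0.5:
        # Random next: take sentence B from a different document.
        tokens_b = [t for seg in other_doc for t in seg]
        return tokens_a, tokens_b, True
    # Actual next: B is the true continuation of the same document.
    tokens_b = [t for seg in doc[a_end:] for t in seg]
    return tokens_a, tokens_b, False

rng = random.Random(12345)
doc = [["the", "cat"], ["sat", "down"]]
other = [["unrelated", "text"]]
tokens_a, tokens_b, is_random = make_nsp_pair(doc, other, rng)
```

The real loop additionally "puts back" the unused segments when the random branch is taken, so no input text is wasted.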
<ide>
<ide> def create_masked_lm_predictions(tokens, masked_lm_prob,
<ide> max_predictions_per_seq, vocab_words, rng):
<del> """Creates the predictis for the masked LM objective."""
<add>    """Creates the predictions for the masked LM objective."""
<ide>
<del> cand_indexes = []
<del> for (i, token) in enumerate(tokens):
<del> if token == "[CLS]" or token == "[SEP]":
<del> continue
<del> cand_indexes.append(i)
<add> cand_indexes = []
<add> for (i, token) in enumerate(tokens):
<add> if token == "[CLS]" or token == "[SEP]":
<add> continue
<add> cand_indexes.append(i)
<ide>
<del> rng.shuffle(cand_indexes)
<add> rng.shuffle(cand_indexes)
<ide>
<del> output_tokens = list(tokens)
<add> output_tokens = list(tokens)
<ide>
<del> masked_lm = collections.namedtuple("masked_lm", ["index", "label"]) # pylint: disable=invalid-name
<add> masked_lm = collections.namedtuple("masked_lm", ["index", "label"]) # pylint: disable=invalid-name
<ide>
<del> num_to_predict = min(max_predictions_per_seq,
<del> max(1, int(round(len(tokens) * masked_lm_prob))))
<add> num_to_predict = min(max_predictions_per_seq,
<add> max(1, int(round(len(tokens) * masked_lm_prob))))
<ide>
<del> masked_lms = []
<del> covered_indexes = set()
<del> for index in cand_indexes:
<del> if len(masked_lms) >= num_to_predict:
<del> break
<del> if index in covered_indexes:
<del> continue
<del> covered_indexes.add(index)
<add> masked_lms = []
<add> covered_indexes = set()
<add> for index in cand_indexes:
<add> if len(masked_lms) >= num_to_predict:
<add> break
<add> if index in covered_indexes:
<add> continue
<add> covered_indexes.add(index)
<ide>
<del> masked_token = None
<del> # 80% of the time, replace with [MASK]
<del> if rng.random() < 0.8:
<del> masked_token = "[MASK]"
<del> else:
<del> # 10% of the time, keep original
<del> if rng.random() < 0.5:
<del> masked_token = tokens[index]
<del> # 10% of the time, replace with random word
<del> else:
<del> masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)]
<add> masked_token = None
<add> # 80% of the time, replace with [MASK]
<add> if rng.random() < 0.8:
<add> masked_token = "[MASK]"
<add> else:
<add> # 10% of the time, keep original
<add> if rng.random() < 0.5:
<add> masked_token = tokens[index]
<add> # 10% of the time, replace with random word
<add> else:
<add> masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)]
<ide>
<del> output_tokens[index] = masked_token
<add> output_tokens[index] = masked_token
<ide>
<del> masked_lms.append(masked_lm(index=index, label=tokens[index]))
<add> masked_lms.append(masked_lm(index=index, label=tokens[index]))
<ide>
<del> masked_lms = sorted(masked_lms, key=lambda x: x.index)
<add> masked_lms = sorted(masked_lms, key=lambda x: x.index)
<ide>
<del> masked_lm_positions = []
<del> masked_lm_labels = []
<del> for p in masked_lms:
<del> masked_lm_positions.append(p.index)
<del> masked_lm_labels.append(p.label)
<add> masked_lm_positions = []
<add> masked_lm_labels = []
<add> for p in masked_lms:
<add> masked_lm_positions.append(p.index)
<add> masked_lm_labels.append(p.label)
<ide>
<del> return (output_tokens, masked_lm_positions, masked_lm_labels)
<add> return (output_tokens, masked_lm_positions, masked_lm_labels)
<ide>
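The 80/10/10 replacement rule implemented above can be checked empirically with a tiny standalone sketch (`mask_token` is a hypothetical helper capturing just the per-position decision):

```python
import random

def mask_token(token, vocab_words, rng):
    """Apply BERT's 80/10/10 rule to one position chosen for prediction."""
    if rng.random() < 0.8:
        return "[MASK]"          # 80% of the time: replace with [MASK]
    if rng.random() < 0.5:
        return token             # 10% of the time: keep the original token
    # 10% of the time: replace with a random vocabulary word.
    return vocab_words[rng.randint(0, len(vocab_words) - 1)]

rng = random.Random(0)
vocab = ["apple", "banana", "cherry"]
out = [mask_token("dog", vocab, rng) for _ in range(1000)]
frac_masked = out.count("[MASK]") / len(out)
```

Over many samples, roughly 80% of the chosen positions come back as `[MASK]`; the label is always the original token, regardless of which branch fired.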
<ide>
<ide> def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng):
<del> """Truncates a pair of sequences to a maximum sequence length."""
<del> while True:
<del> total_length = len(tokens_a) + len(tokens_b)
<del> if total_length <= max_num_tokens:
<del> break
<del>
<del> trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
<del> assert len(trunc_tokens) >= 1
<del>
<del> # We want to sometimes truncate from the front and sometimes from the
<del> # back to add more randomness and avoid biases.
<del> if rng.random() < 0.5:
<del> del trunc_tokens[0]
<del> else:
<del> trunc_tokens.pop()
<add> """Truncates a pair of sequences to a maximum sequence length."""
<add> while True:
<add> total_length = len(tokens_a) + len(tokens_b)
<add> if total_length <= max_num_tokens:
<add> break
<add>
<add> trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
<add> assert len(trunc_tokens) >= 1
<add>
<add> # We want to sometimes truncate from the front and sometimes from the
<add> # back to add more randomness and avoid biases.
<add> if rng.random() < 0.5:
<add> del trunc_tokens[0]
<add> else:
<add> trunc_tokens.pop()
<ide>
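A standalone re-implementation sketch of the truncation loop above (`truncate_pair` is a hypothetical name); note that only the longer sequence is ever trimmed, so the shorter side survives intact:

```python
import random

def truncate_pair(tokens_a, tokens_b, max_num_tokens, rng):
    """Trim the longer sequence one token at a time, from a random end."""
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        trunc = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        if rng.random() < 0.5:
            del trunc[0]     # sometimes truncate from the front...
        else:
            trunc.pop()      # ...and sometimes from the back

rng = random.Random(7)
a = list("abcdefgh")   # 8 tokens
b = list("xyz")        # 3 tokens
truncate_pair(a, b, 6, rng)
```

Starting from lengths (8, 3) with a budget of 6, every trim hits `a`, leaving both sequences at 3 tokens whatever the random end choices were.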
<ide>
<ide> def main(_):
<del> tf.logging.set_verbosity(tf.logging.INFO)
<add> tf.logging.set_verbosity(tf.logging.INFO)
<ide>
<del> tokenizer = tokenization.FullTokenizer(
<del> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<add> tokenizer = tokenization.FullTokenizer(
<add> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<ide>
<del> input_files = []
<del> for input_pattern in FLAGS.input_file.split(","):
<del> input_files.extend(tf.gfile.Glob(input_pattern))
<add> input_files = []
<add> for input_pattern in FLAGS.input_file.split(","):
<add> input_files.extend(tf.gfile.Glob(input_pattern))
<ide>
<del> tf.logging.info("*** Reading from input files ***")
<del> for input_file in input_files:
<del> tf.logging.info(" %s", input_file)
<add> tf.logging.info("*** Reading from input files ***")
<add> for input_file in input_files:
<add> tf.logging.info(" %s", input_file)
<ide>
<del> rng = random.Random(FLAGS.random_seed)
<del> instances = create_training_instances(
<del> input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor,
<del> FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq,
<del> rng)
<add> rng = random.Random(FLAGS.random_seed)
<add> instances = create_training_instances(
<add> input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor,
<add> FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq,
<add> rng)
<ide>
<del> output_files = FLAGS.output_file.split(",")
<del> tf.logging.info("*** Writing to output files ***")
<del> for output_file in output_files:
<del> tf.logging.info(" %s", output_file)
<add> output_files = FLAGS.output_file.split(",")
<add> tf.logging.info("*** Writing to output files ***")
<add> for output_file in output_files:
<add> tf.logging.info(" %s", output_file)
<ide>
<del> write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length,
<del> FLAGS.max_predictions_per_seq, output_files)
<add> write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length,
<add> FLAGS.max_predictions_per_seq, output_files)
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> flags.mark_flag_as_required("input_file")
<del> flags.mark_flag_as_required("output_file")
<del> flags.mark_flag_as_required("vocab_file")
<del> tf.app.run()
<add> flags.mark_flag_as_required("input_file")
<add> flags.mark_flag_as_required("output_file")
<add> flags.mark_flag_as_required("vocab_file")
<add> tf.app.run()
<ide><path>extract_features.py
<ide>
<ide> class InputExample(object):
<ide>
<del> def __init__(self, unique_id, text_a, text_b):
<del> self.unique_id = unique_id
<del> self.text_a = text_a
<del> self.text_b = text_b
<add> def __init__(self, unique_id, text_a, text_b):
<add> self.unique_id = unique_id
<add> self.text_a = text_a
<add> self.text_b = text_b
<ide>
<ide>
<ide> class InputFeatures(object):
<del> """A single set of features of data."""
<add> """A single set of features of data."""
<ide>
<del> def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
<del> self.unique_id = unique_id
<del> self.tokens = tokens
<del> self.input_ids = input_ids
<del> self.input_mask = input_mask
<del> self.input_type_ids = input_type_ids
<add> def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
<add> self.unique_id = unique_id
<add> self.tokens = tokens
<add> self.input_ids = input_ids
<add> self.input_mask = input_mask
<add> self.input_type_ids = input_type_ids
<ide>
<ide>
<ide> def input_fn_builder(features, seq_length):
<del> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<del>
<del> all_unique_ids = []
<del> all_input_ids = []
<del> all_input_mask = []
<del> all_input_type_ids = []
<del>
<del> for feature in features:
<del> all_unique_ids.append(feature.unique_id)
<del> all_input_ids.append(feature.input_ids)
<del> all_input_mask.append(feature.input_mask)
<del> all_input_type_ids.append(feature.input_type_ids)
<del>
<del> def input_fn(params):
<del> """The actual input function."""
<del> batch_size = params["batch_size"]
<del>
<del> num_examples = len(features)
<del>
<del> # This is for demo purposes and does NOT scale to large data sets. We do
<del> # not use Dataset.from_generator() because that uses tf.py_func which is
<del> # not TPU compatible. The right way to load data is with TFRecordReader.
<del> d = tf.data.Dataset.from_tensor_slices({
<del> "unique_ids":
<del> tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
<del> "input_ids":
<del> tf.constant(
<del> all_input_ids, shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "input_mask":
<del> tf.constant(
<del> all_input_mask,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "input_type_ids":
<del> tf.constant(
<del> all_input_type_ids,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> })
<del>
<del> d = d.batch(batch_size=batch_size, drop_remainder=False)
<del> return d
<del>
<del> return input_fn
<add> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<add>
<add> all_unique_ids = []
<add> all_input_ids = []
<add> all_input_mask = []
<add> all_input_type_ids = []
<add>
<add> for feature in features:
<add> all_unique_ids.append(feature.unique_id)
<add> all_input_ids.append(feature.input_ids)
<add> all_input_mask.append(feature.input_mask)
<add> all_input_type_ids.append(feature.input_type_ids)
<add>
<add> def input_fn(params):
<add> """The actual input function."""
<add> batch_size = params["batch_size"]
<add>
<add> num_examples = len(features)
<add>
<add> # This is for demo purposes and does NOT scale to large data sets. We do
<add> # not use Dataset.from_generator() because that uses tf.py_func which is
<add> # not TPU compatible. The right way to load data is with TFRecordReader.
<add> d = tf.data.Dataset.from_tensor_slices({
<add> "unique_ids":
<add> tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
<add> "input_ids":
<add> tf.constant(
<add> all_input_ids, shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "input_mask":
<add> tf.constant(
<add> all_input_mask,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "input_type_ids":
<add> tf.constant(
<add> all_input_type_ids,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> })
<add>
<add> d = d.batch(batch_size=batch_size, drop_remainder=False)
<add> return d
<add>
<add> return input_fn
<ide>
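Setting the TPU machinery aside, the batching behavior of `d.batch(batch_size, drop_remainder=False)` used above amounts to simple chunking, sketched here in plain Python (the `batch` helper is illustrative only):

```python
def batch(items, batch_size):
    """Yield consecutive batches; the last may be smaller (drop_remainder=False)."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

batches = list(batch(list(range(10)), 4))
```

With `drop_remainder=True` (as required for fixed-shape TPU execution elsewhere in this codebase), the final short batch `[8, 9]` would be discarded instead.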
<ide>
<ide> def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
<ide> use_one_hot_embeddings):
<del> """Returns `model_fn` closure for TPUEstimator."""
<add> """Returns `model_fn` closure for TPUEstimator."""
<ide>
<del> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<del> """The `model_fn` for TPUEstimator."""
<add> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<add> """The `model_fn` for TPUEstimator."""
<ide>
<del> unique_ids = features["unique_ids"]
<del> input_ids = features["input_ids"]
<del> input_mask = features["input_mask"]
<del> input_type_ids = features["input_type_ids"]
<add> unique_ids = features["unique_ids"]
<add> input_ids = features["input_ids"]
<add> input_mask = features["input_mask"]
<add> input_type_ids = features["input_type_ids"]
<ide>
<del> model = modeling.BertModel(
<del> config=bert_config,
<del> is_training=False,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> token_type_ids=input_type_ids,
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<add> model = modeling.BertModel(
<add> config=bert_config,
<add> is_training=False,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> token_type_ids=input_type_ids,
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<ide>
<del> if mode != tf.estimator.ModeKeys.PREDICT:
<del> raise ValueError("Only PREDICT modes are supported: %s" % (mode))
<add> if mode != tf.estimator.ModeKeys.PREDICT:
<add> raise ValueError("Only PREDICT modes are supported: %s" % (mode))
<ide>
<del> tvars = tf.trainable_variables()
<del> scaffold_fn = None
<del> (assignment_map, _) = modeling.get_assigment_map_from_checkpoint(
<del> tvars, init_checkpoint)
<del> if use_tpu:
<add> tvars = tf.trainable_variables()
<add> scaffold_fn = None
<add> (assignment_map, _) = modeling.get_assigment_map_from_checkpoint(
<add> tvars, init_checkpoint)
<add> if use_tpu:
<ide>
<del> def tpu_scaffold():
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del> return tf.train.Scaffold()
<add> def tpu_scaffold():
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add> return tf.train.Scaffold()
<ide>
<del> scaffold_fn = tpu_scaffold
<del> else:
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add> scaffold_fn = tpu_scaffold
<add> else:
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<ide>
<del> all_layers = model.get_all_encoder_layers()
<add> all_layers = model.get_all_encoder_layers()
<ide>
<del> predictions = {
<del> "unique_id": unique_ids,
<del> }
<add> predictions = {
<add> "unique_id": unique_ids,
<add> }
<ide>
<del> for (i, layer_index) in enumerate(layer_indexes):
<del> predictions["layer_output_%d" % i] = all_layers[layer_index]
<add> for (i, layer_index) in enumerate(layer_indexes):
<add> predictions["layer_output_%d" % i] = all_layers[layer_index]
<ide>
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
<del> return output_spec
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
<add> return output_spec
<ide>
<del> return model_fn
<add> return model_fn
<ide>
<ide>
<ide> def convert_examples_to_features(examples, seq_length, tokenizer):
<del> """Loads a data file into a list of `InputBatch`s."""
<del>
<del> features = []
<del> for (ex_index, example) in enumerate(examples):
<del> tokens_a = tokenizer.tokenize(example.text_a)
<del>
<del> tokens_b = None
<del> if example.text_b:
<del> tokens_b = tokenizer.tokenize(example.text_b)
<del>
<del> if tokens_b:
<del> # Modifies `tokens_a` and `tokens_b` in place so that the total
<del> # length is less than the specified length.
<del> # Account for [CLS], [SEP], [SEP] with "- 3"
<del> _truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
<del> else:
<del> # Account for [CLS] and [SEP] with "- 2"
<del> if len(tokens_a) > seq_length - 2:
<del> tokens_a = tokens_a[0:(seq_length - 2)]
<del>
<del> # The convention in BERT is:
<del> # (a) For sequence pairs:
<del> # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
<del> # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
<del> # (b) For single sequences:
<del> # tokens: [CLS] the dog is hairy . [SEP]
<del> # type_ids: 0 0 0 0 0 0 0
<del> #
<del> # Where "type_ids" are used to indicate whether this is the first
<del> # sequence or the second sequence. The embedding vectors for `type=0` and
<del> # `type=1` were learned during pre-training and are added to the wordpiece
<del> # embedding vector (and position vector). This is not *strictly* necessary
<del> # since the [SEP] token unambigiously separates the sequences, but it makes
<del> # it easier for the model to learn the concept of sequences.
<del> #
<del> # For classification tasks, the first vector (corresponding to [CLS]) is
<del> # used as as the "sentence vector". Note that this only makes sense because
<del> # the entire model is fine-tuned.
<del> tokens = []
<del> input_type_ids = []
<del> tokens.append("[CLS]")
<del> input_type_ids.append(0)
<del> for token in tokens_a:
<del> tokens.append(token)
<del> input_type_ids.append(0)
<del> tokens.append("[SEP]")
<del> input_type_ids.append(0)
<del>
<del> if tokens_b:
<del> for token in tokens_b:
<del> tokens.append(token)
<del> input_type_ids.append(1)
<del> tokens.append("[SEP]")
<del> input_type_ids.append(1)
<del>
<del> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<del>
<del> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<del> # tokens are attended to.
<del> input_mask = [1] * len(input_ids)
<del>
<del> # Zero-pad up to the sequence length.
<del> while len(input_ids) < seq_length:
<del> input_ids.append(0)
<del> input_mask.append(0)
<del> input_type_ids.append(0)
<del>
<del> assert len(input_ids) == seq_length
<del> assert len(input_mask) == seq_length
<del> assert len(input_type_ids) == seq_length
<del>
<del> if ex_index < 5:
<del> tf.logging.info("*** Example ***")
<del> tf.logging.info("unique_id: %s" % (example.unique_id))
<del> tf.logging.info("tokens: %s" % " ".join([str(x) for x in tokens]))
<del> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<del> tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
<del> tf.logging.info(
<del> "input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
<del>
<del> features.append(
<del> InputFeatures(
<del> unique_id=example.unique_id,
<del> tokens=tokens,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> input_type_ids=input_type_ids))
<del> return features
<add> """Loads a data file into a list of `InputBatch`s."""
<add>
<add> features = []
<add> for (ex_index, example) in enumerate(examples):
<add> tokens_a = tokenizer.tokenize(example.text_a)
<add>
<add> tokens_b = None
<add> if example.text_b:
<add> tokens_b = tokenizer.tokenize(example.text_b)
<add>
<add> if tokens_b:
<add> # Modifies `tokens_a` and `tokens_b` in place so that the total
<add> # length is less than the specified length.
<add> # Account for [CLS], [SEP], [SEP] with "- 3"
<add> _truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
<add> else:
<add> # Account for [CLS] and [SEP] with "- 2"
<add> if len(tokens_a) > seq_length - 2:
<add> tokens_a = tokens_a[0:(seq_length - 2)]
<add>
<add> # The convention in BERT is:
<add> # (a) For sequence pairs:
<add> # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
<add> # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
<add> # (b) For single sequences:
<add> # tokens: [CLS] the dog is hairy . [SEP]
<add> # type_ids: 0 0 0 0 0 0 0
<add> #
<add> # Where "type_ids" are used to indicate whether this is the first
<add> # sequence or the second sequence. The embedding vectors for `type=0` and
<add> # `type=1` were learned during pre-training and are added to the wordpiece
<add> # embedding vector (and position vector). This is not *strictly* necessary
<add>        # since the [SEP] token unambiguously separates the sequences, but it makes
<add> # it easier for the model to learn the concept of sequences.
<add> #
<add> # For classification tasks, the first vector (corresponding to [CLS]) is
<add>        # used as the "sentence vector". Note that this only makes sense because
<add> # the entire model is fine-tuned.
<add> tokens = []
<add> input_type_ids = []
<add> tokens.append("[CLS]")
<add> input_type_ids.append(0)
<add> for token in tokens_a:
<add> tokens.append(token)
<add> input_type_ids.append(0)
<add> tokens.append("[SEP]")
<add> input_type_ids.append(0)
<add>
<add> if tokens_b:
<add> for token in tokens_b:
<add> tokens.append(token)
<add> input_type_ids.append(1)
<add> tokens.append("[SEP]")
<add> input_type_ids.append(1)
<add>
<add> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<add>
<add> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<add> # tokens are attended to.
<add> input_mask = [1] * len(input_ids)
<add>
<add> # Zero-pad up to the sequence length.
<add> while len(input_ids) < seq_length:
<add> input_ids.append(0)
<add> input_mask.append(0)
<add> input_type_ids.append(0)
<add>
<add> assert len(input_ids) == seq_length
<add> assert len(input_mask) == seq_length
<add> assert len(input_type_ids) == seq_length
<add>
<add> if ex_index < 5:
<add> tf.logging.info("*** Example ***")
<add> tf.logging.info("unique_id: %s" % (example.unique_id))
<add> tf.logging.info("tokens: %s" % " ".join([str(x) for x in tokens]))
<add> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<add> tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
<add> tf.logging.info(
<add> "input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
<add>
<add> features.append(
<add> InputFeatures(
<add> unique_id=example.unique_id,
<add> tokens=tokens,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> input_type_ids=input_type_ids))
<add> return features
<ide>
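The zero-padding and attention-mask construction above can be isolated into a small sketch (`pad_to_length` is a hypothetical helper; 101/7592/102 merely stand in for plausible [CLS]/token/[SEP] ids):

```python
def pad_to_length(input_ids, seq_length, pad_id=0):
    """Zero-pad ids to seq_length and build the matching attention mask.

    The mask has 1 for real tokens and 0 for padding tokens, so the
    model attends only to real positions.
    """
    input_mask = [1] * len(input_ids)
    padding = [pad_id] * (seq_length - len(input_ids))
    return input_ids + padding, input_mask + [0] * len(padding)

ids, mask = pad_to_length([101, 7592, 102], 6)
```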
<ide>
<ide> def _truncate_seq_pair(tokens_a, tokens_b, max_length):
<del> """Truncates a sequence pair in place to the maximum length."""
<del>
<del> # This is a simple heuristic which will always truncate the longer sequence
<del> # one token at a time. This makes more sense than truncating an equal percent
<del> # of tokens from each, since if one sequence is very short then each token
<del> # that's truncated likely contains more information than a longer sequence.
<del> while True:
<del> total_length = len(tokens_a) + len(tokens_b)
<del> if total_length <= max_length:
<del> break
<del> if len(tokens_a) > len(tokens_b):
<del> tokens_a.pop()
<del> else:
<del> tokens_b.pop()
<add> """Truncates a sequence pair in place to the maximum length."""
<add>
<add> # This is a simple heuristic which will always truncate the longer sequence
<add> # one token at a time. This makes more sense than truncating an equal percent
<add> # of tokens from each, since if one sequence is very short then each token
<add> # that's truncated likely contains more information than a longer sequence.
<add> while True:
<add> total_length = len(tokens_a) + len(tokens_b)
<add> if total_length <= max_length:
<add> break
<add> if len(tokens_a) > len(tokens_b):
<add> tokens_a.pop()
<add> else:
<add> tokens_b.pop()
<ide>
<ide>
<ide> def read_examples(input_file):
<del> """Read a list of `InputExample`s from an input file."""
<del> examples = []
<del> unique_id = 0
<del> with tf.gfile.GFile(input_file, "r") as reader:
<del> while True:
<del> line = tokenization.convert_to_unicode(reader.readline())
<del> if not line:
<del> break
<del> line = line.strip()
<del> text_a = None
<del> text_b = None
<del> m = re.match(r"^(.*) \|\|\| (.*)$", line)
<del> if m is None:
<del> text_a = line
<del> else:
<del> text_a = m.group(1)
<del> text_b = m.group(2)
<del> examples.append(
<del> InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
<del> unique_id += 1
<del> return examples
<add> """Read a list of `InputExample`s from an input file."""
<add> examples = []
<add> unique_id = 0
<add> with tf.gfile.GFile(input_file, "r") as reader:
<add> while True:
<add> line = tokenization.convert_to_unicode(reader.readline())
<add> if not line:
<add> break
<add> line = line.strip()
<add> text_a = None
<add> text_b = None
<add> m = re.match(r"^(.*) \|\|\| (.*)$", line)
<add> if m is None:
<add> text_a = line
<add> else:
<add> text_a = m.group(1)
<add> text_b = m.group(2)
<add> examples.append(
<add> InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
<add> unique_id += 1
<add> return examples
<ide>
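`read_examples` above accepts either single sentences or ` ||| `-separated sentence pairs; the parsing step can be exercised on its own (the `split_pair` helper is illustrative):

```python
import re

def split_pair(line):
    """Split an input line into (text_a, text_b); text_b is None for singles."""
    line = line.strip()
    m = re.match(r"^(.*) \|\|\| (.*)$", line)
    if m is None:
        return line, None
    return m.group(1), m.group(2)

a, b = split_pair("Who was Jim Henson ? ||| Jim Henson was a puppeteer")
```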
<ide>
<ide> def main(_):
<del> tf.logging.set_verbosity(tf.logging.INFO)
<del>
<del> layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
<del>
<del> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<del>
<del> tokenizer = tokenization.FullTokenizer(
<del> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<del>
<del> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<del> run_config = tf.contrib.tpu.RunConfig(
<del> master=FLAGS.master,
<del> tpu_config=tf.contrib.tpu.TPUConfig(
<del> num_shards=FLAGS.num_tpu_cores,
<del> per_host_input_for_training=is_per_host))
<del>
<del> examples = read_examples(FLAGS.input_file)
<del>
<del> features = convert_examples_to_features(
<del> examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
<del>
<del> unique_id_to_feature = {}
<del> for feature in features:
<del> unique_id_to_feature[feature.unique_id] = feature
<del>
<del> model_fn = model_fn_builder(
<del> bert_config=bert_config,
<del> init_checkpoint=FLAGS.init_checkpoint,
<del> layer_indexes=layer_indexes,
<del> use_tpu=FLAGS.use_tpu,
<del> use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
<del>
<del> # If TPU is not available, this will fall back to normal Estimator on CPU
<del> # or GPU.
<del> estimator = tf.contrib.tpu.TPUEstimator(
<del> use_tpu=FLAGS.use_tpu,
<del> model_fn=model_fn,
<del> config=run_config,
<del> predict_batch_size=FLAGS.batch_size)
<del>
<del> input_fn = input_fn_builder(
<del> features=features, seq_length=FLAGS.max_seq_length)
<del>
<del> with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,
<del> "w")) as writer:
<del> for result in estimator.predict(input_fn, yield_single_examples=True):
<del> unique_id = int(result["unique_id"])
<del> feature = unique_id_to_feature[unique_id]
<del> output_json = collections.OrderedDict()
<del> output_json["linex_index"] = unique_id
<del> all_features = []
<del> for (i, token) in enumerate(feature.tokens):
<del> all_layers = []
<del> for (j, layer_index) in enumerate(layer_indexes):
<del> layer_output = result["layer_output_%d" % j]
<del> layers = collections.OrderedDict()
<del> layers["index"] = layer_index
<del> layers["values"] = [
<del> round(float(x), 6) for x in layer_output[i:(i + 1)].flat
<del> ]
<del> all_layers.append(layers)
<del> features = collections.OrderedDict()
<del> features["token"] = token
<del> features["layers"] = all_layers
<del> all_features.append(features)
<del> output_json["features"] = all_features
<del> writer.write(json.dumps(output_json) + "\n")
<add> tf.logging.set_verbosity(tf.logging.INFO)
<add>
<add> layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
<add>
<add> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<add>
<add> tokenizer = tokenization.FullTokenizer(
<add> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<add>
<add> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<add> run_config = tf.contrib.tpu.RunConfig(
<add> master=FLAGS.master,
<add> tpu_config=tf.contrib.tpu.TPUConfig(
<add> num_shards=FLAGS.num_tpu_cores,
<add> per_host_input_for_training=is_per_host))
<add>
<add> examples = read_examples(FLAGS.input_file)
<add>
<add> features = convert_examples_to_features(
<add> examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
<add>
<add> unique_id_to_feature = {}
<add> for feature in features:
<add> unique_id_to_feature[feature.unique_id] = feature
<add>
<add> model_fn = model_fn_builder(
<add> bert_config=bert_config,
<add> init_checkpoint=FLAGS.init_checkpoint,
<add> layer_indexes=layer_indexes,
<add> use_tpu=FLAGS.use_tpu,
<add> use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
<add>
<add> # If TPU is not available, this will fall back to normal Estimator on CPU
<add> # or GPU.
<add> estimator = tf.contrib.tpu.TPUEstimator(
<add> use_tpu=FLAGS.use_tpu,
<add> model_fn=model_fn,
<add> config=run_config,
<add> predict_batch_size=FLAGS.batch_size)
<add>
<add> input_fn = input_fn_builder(
<add> features=features, seq_length=FLAGS.max_seq_length)
<add>
<add> with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,
<add> "w")) as writer:
<add> for result in estimator.predict(input_fn, yield_single_examples=True):
<add> unique_id = int(result["unique_id"])
<add> feature = unique_id_to_feature[unique_id]
<add> output_json = collections.OrderedDict()
<add> output_json["linex_index"] = unique_id
<add> all_features = []
<add> for (i, token) in enumerate(feature.tokens):
<add> all_layers = []
<add> for (j, layer_index) in enumerate(layer_indexes):
<add> layer_output = result["layer_output_%d" % j]
<add> layers = collections.OrderedDict()
<add> layers["index"] = layer_index
<add> layers["values"] = [
<add> round(float(x), 6) for x in layer_output[i:(i + 1)].flat
<add> ]
<add> all_layers.append(layers)
<add> features = collections.OrderedDict()
<add> features["token"] = token
<add> features["layers"] = all_layers
<add> all_features.append(features)
<add> output_json["features"] = all_features
<add> writer.write(json.dumps(output_json) + "\n")
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> flags.mark_flag_as_required("input_file")
<del> flags.mark_flag_as_required("vocab_file")
<del> flags.mark_flag_as_required("bert_config_file")
<del> flags.mark_flag_as_required("init_checkpoint")
<del> flags.mark_flag_as_required("output_file")
<del> tf.app.run()
<add> flags.mark_flag_as_required("input_file")
<add> flags.mark_flag_as_required("vocab_file")
<add> flags.mark_flag_as_required("bert_config_file")
<add> flags.mark_flag_as_required("init_checkpoint")
<add> flags.mark_flag_as_required("output_file")
<add> tf.app.run()
<ide><path>modeling.py
<ide>
<ide>
<ide> class BertConfig(object):
<del> """Configuration for `BertModel`."""
<del>
<del> def __init__(self,
<del> vocab_size,
<del> hidden_size=768,
<del> num_hidden_layers=12,
<del> num_attention_heads=12,
<del> intermediate_size=3072,
<del> hidden_act="gelu",
<del> hidden_dropout_prob=0.1,
<del> attention_probs_dropout_prob=0.1,
<del> max_position_embeddings=512,
<del> type_vocab_size=16,
<del> initializer_range=0.02):
<del> """Constructs BertConfig.
<add> """Configuration for `BertModel`."""
<add>
<add> def __init__(self,
<add> vocab_size,
<add> hidden_size=768,
<add> num_hidden_layers=12,
<add> num_attention_heads=12,
<add> intermediate_size=3072,
<add> hidden_act="gelu",
<add> hidden_dropout_prob=0.1,
<add> attention_probs_dropout_prob=0.1,
<add> max_position_embeddings=512,
<add> type_vocab_size=16,
<add> initializer_range=0.02):
<add> """Constructs BertConfig.
<add>
<add> Args:
<add>      vocab_size: Vocabulary size of `input_ids` in `BertModel`.
<add> hidden_size: Size of the encoder layers and the pooler layer.
<add> num_hidden_layers: Number of hidden layers in the Transformer encoder.
<add> num_attention_heads: Number of attention heads for each attention layer in
<add> the Transformer encoder.
<add> intermediate_size: The size of the "intermediate" (i.e., feed-forward)
<add> layer in the Transformer encoder.
<add> hidden_act: The non-linear activation function (function or string) in the
<add> encoder and pooler.
<add>      hidden_dropout_prob: The dropout probability for all fully connected
<add> layers in the embeddings, encoder, and pooler.
<add> attention_probs_dropout_prob: The dropout ratio for the attention
<add> probabilities.
<add> max_position_embeddings: The maximum sequence length that this model might
<add> ever be used with. Typically set this to something large just in case
<add> (e.g., 512 or 1024 or 2048).
<add> type_vocab_size: The vocabulary size of the `token_type_ids` passed into
<add> `BertModel`.
<add>      initializer_range: The stddev of the truncated_normal_initializer for
<add> initializing all weight matrices.
<add> """
<add> self.vocab_size = vocab_size
<add> self.hidden_size = hidden_size
<add> self.num_hidden_layers = num_hidden_layers
<add> self.num_attention_heads = num_attention_heads
<add> self.hidden_act = hidden_act
<add> self.intermediate_size = intermediate_size
<add> self.hidden_dropout_prob = hidden_dropout_prob
<add> self.attention_probs_dropout_prob = attention_probs_dropout_prob
<add> self.max_position_embeddings = max_position_embeddings
<add> self.type_vocab_size = type_vocab_size
<add> self.initializer_range = initializer_range
<add>
<add> @classmethod
<add> def from_dict(cls, json_object):
<add> """Constructs a `BertConfig` from a Python dictionary of parameters."""
<add> config = BertConfig(vocab_size=None)
<add> for (key, value) in six.iteritems(json_object):
<add> config.__dict__[key] = value
<add> return config
<add>
<add> @classmethod
<add> def from_json_file(cls, json_file):
<add> """Constructs a `BertConfig` from a json file of parameters."""
<add> with tf.gfile.GFile(json_file, "r") as reader:
<add> text = reader.read()
<add> return cls.from_dict(json.loads(text))
<add>
<add> def to_dict(self):
<add> """Serializes this instance to a Python dictionary."""
<add> output = copy.deepcopy(self.__dict__)
<add> return output
<add>
<add> def to_json_string(self):
<add> """Serializes this instance to a JSON string."""
<add> return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
<ide>
<del> Args:
<del> vocab_size: Vocabulary size of `inputs_ids` in `BertModel`.
<del> hidden_size: Size of the encoder layers and the pooler layer.
<del> num_hidden_layers: Number of hidden layers in the Transformer encoder.
<del> num_attention_heads: Number of attention heads for each attention layer in
<del> the Transformer encoder.
<del> intermediate_size: The size of the "intermediate" (i.e., feed-forward)
<del> layer in the Transformer encoder.
<del> hidden_act: The non-linear activation function (function or string) in the
<del> encoder and pooler.
<del> hidden_dropout_prob: The dropout probabilitiy for all fully connected
<del> layers in the embeddings, encoder, and pooler.
<del> attention_probs_dropout_prob: The dropout ratio for the attention
<del> probabilities.
<del> max_position_embeddings: The maximum sequence length that this model might
<del> ever be used with. Typically set this to something large just in case
<del> (e.g., 512 or 1024 or 2048).
<del> type_vocab_size: The vocabulary size of the `token_type_ids` passed into
<del> `BertModel`.
<del> initializer_range: The sttdev of the truncated_normal_initializer for
<del> initializing all weight matrices.
<del> """
<del> self.vocab_size = vocab_size
<del> self.hidden_size = hidden_size
<del> self.num_hidden_layers = num_hidden_layers
<del> self.num_attention_heads = num_attention_heads
<del> self.hidden_act = hidden_act
<del> self.intermediate_size = intermediate_size
<del> self.hidden_dropout_prob = hidden_dropout_prob
<del> self.attention_probs_dropout_prob = attention_probs_dropout_prob
<del> self.max_position_embeddings = max_position_embeddings
<del> self.type_vocab_size = type_vocab_size
<del> self.initializer_range = initializer_range
<del>
<del> @classmethod
<del> def from_dict(cls, json_object):
<del> """Constructs a `BertConfig` from a Python dictionary of parameters."""
<del> config = BertConfig(vocab_size=None)
<del> for (key, value) in six.iteritems(json_object):
<del> config.__dict__[key] = value
<del> return config
<del>
<del> @classmethod
<del> def from_json_file(cls, json_file):
<del> """Constructs a `BertConfig` from a json file of parameters."""
<del> with tf.gfile.GFile(json_file, "r") as reader:
<del> text = reader.read()
<del> return cls.from_dict(json.loads(text))
<del>
<del> def to_dict(self):
<del> """Serializes this instance to a Python dictionary."""
<del> output = copy.deepcopy(self.__dict__)
<del> return output
<ide>
<del> def to_json_string(self):
<del> """Serializes this instance to a JSON string."""
<del> return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
<add>class BertModel(object):
<add>  """BERT model ("Bidirectional Encoder Representations from Transformers").
<ide>
<add> Example usage:
<ide>
<del>class BertModel(object):
<del> """BERT model ("Bidirectional Embedding Representations from a Transformer").
<del>
<del> Example usage:
<del>
<del> ```python
<del> # Already been converted into WordPiece token ids
<del> input_ids = tf.constant([[31, 51, 99], [15, 5, 0]])
<del> input_mask = tf.constant([[1, 1, 1], [1, 1, 0]])
<del> token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])
<del>
<del> config = modeling.BertConfig(vocab_size=32000, hidden_size=512,
<del> num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024)
<del>
<del> model = modeling.BertModel(config=config, is_training=True,
<del> input_ids=input_ids, input_mask=input_mask, token_type_ids=token_type_ids)
<del>
<del> label_embeddings = tf.get_variable(...)
<del> pooled_output = model.get_pooled_output()
<del> logits = tf.matmul(pooled_output, label_embeddings)
<del> ...
<del> ```
<del> """
<del>
<del> def __init__(self,
<del> config,
<del> is_training,
<del> input_ids,
<del> input_mask=None,
<del> token_type_ids=None,
<del> use_one_hot_embeddings=True,
<del> scope=None):
<del> """Constructor for BertModel.
<add> ```python
<add> # Already been converted into WordPiece token ids
<add> input_ids = tf.constant([[31, 51, 99], [15, 5, 0]])
<add> input_mask = tf.constant([[1, 1, 1], [1, 1, 0]])
<add> token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])
<ide>
<del> Args:
<del> config: `BertConfig` instance.
<del> is_training: bool. rue for training model, false for eval model. Controls
<del> whether dropout will be applied.
<del> input_ids: int32 Tensor of shape [batch_size, seq_length].
<del> input_mask: (optional) int32 Tensor of shape [batch_size, seq_length].
<del> token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
<del> use_one_hot_embeddings: (optional) bool. Whether to use one-hot word
<del> embeddings or tf.embedding_lookup() for the word embeddings. On the TPU,
<del> it is must faster if this is True, on the CPU or GPU, it is faster if
<del> this is False.
<del> scope: (optional) variable scope. Defaults to "bert".
<add> config = modeling.BertConfig(vocab_size=32000, hidden_size=512,
<add> num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024)
<ide>
<del> Raises:
<del> ValueError: The config is invalid or one of the input tensor shapes
<del> is invalid.
<add> model = modeling.BertModel(config=config, is_training=True,
<add> input_ids=input_ids, input_mask=input_mask, token_type_ids=token_type_ids)
<add>
<add> label_embeddings = tf.get_variable(...)
<add> pooled_output = model.get_pooled_output()
<add> logits = tf.matmul(pooled_output, label_embeddings)
<add> ...
<add> ```
<ide> """
<del> config = copy.deepcopy(config)
<del> if not is_training:
<del> config.hidden_dropout_prob = 0.0
<del> config.attention_probs_dropout_prob = 0.0
<ide>
<del> input_shape = get_shape_list(input_ids, expected_rank=2)
<del> batch_size = input_shape[0]
<del> seq_length = input_shape[1]
<add> def __init__(self,
<add> config,
<add> is_training,
<add> input_ids,
<add> input_mask=None,
<add> token_type_ids=None,
<add> use_one_hot_embeddings=True,
<add> scope=None):
<add> """Constructor for BertModel.
<add>
<add> Args:
<add> config: `BertConfig` instance.
<add>      is_training: bool. True for training model, False for eval model. Controls
<add> whether dropout will be applied.
<add> input_ids: int32 Tensor of shape [batch_size, seq_length].
<add> input_mask: (optional) int32 Tensor of shape [batch_size, seq_length].
<add> token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
<add> use_one_hot_embeddings: (optional) bool. Whether to use one-hot word
<add> embeddings or tf.embedding_lookup() for the word embeddings. On the TPU,
<add>        it is much faster if this is True, on the CPU or GPU, it is faster if
<add> this is False.
<add> scope: (optional) variable scope. Defaults to "bert".
<add>
<add> Raises:
<add> ValueError: The config is invalid or one of the input tensor shapes
<add> is invalid.
<add> """
<add> config = copy.deepcopy(config)
<add> if not is_training:
<add> config.hidden_dropout_prob = 0.0
<add> config.attention_probs_dropout_prob = 0.0
<add>
<add> input_shape = get_shape_list(input_ids, expected_rank=2)
<add> batch_size = input_shape[0]
<add> seq_length = input_shape[1]
<add>
<add> if input_mask is None:
<add> input_mask = tf.ones(shape=[batch_size, seq_length], dtype=tf.int32)
<add>
<add> if token_type_ids is None:
<add> token_type_ids = tf.zeros(shape=[batch_size, seq_length], dtype=tf.int32)
<add>
<add> with tf.variable_scope("bert", scope):
<add> with tf.variable_scope("embeddings"):
<add> # Perform embedding lookup on the word ids.
<add> (self.embedding_output, self.embedding_table) = embedding_lookup(
<add> input_ids=input_ids,
<add> vocab_size=config.vocab_size,
<add> embedding_size=config.hidden_size,
<add> initializer_range=config.initializer_range,
<add> word_embedding_name="word_embeddings",
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<add>
<add> # Add positional embeddings and token type embeddings, then layer
<add> # normalize and perform dropout.
<add> self.embedding_output = embedding_postprocessor(
<add> input_tensor=self.embedding_output,
<add> use_token_type=True,
<add> token_type_ids=token_type_ids,
<add> token_type_vocab_size=config.type_vocab_size,
<add> token_type_embedding_name="token_type_embeddings",
<add> use_position_embeddings=True,
<add> position_embedding_name="position_embeddings",
<add> initializer_range=config.initializer_range,
<add> max_position_embeddings=config.max_position_embeddings,
<add> dropout_prob=config.hidden_dropout_prob)
<add>
<add> with tf.variable_scope("encoder"):
<add> # This converts a 2D mask of shape [batch_size, seq_length] to a 3D
<add> # mask of shape [batch_size, seq_length, seq_length] which is used
<add> # for the attention scores.
<add> attention_mask = create_attention_mask_from_input_mask(
<add> input_ids, input_mask)
<add>
<add> # Run the stacked transformer.
<add> # `sequence_output` shape = [batch_size, seq_length, hidden_size].
<add> self.all_encoder_layers = transformer_model(
<add> input_tensor=self.embedding_output,
<add> attention_mask=attention_mask,
<add> hidden_size=config.hidden_size,
<add> num_hidden_layers=config.num_hidden_layers,
<add> num_attention_heads=config.num_attention_heads,
<add> intermediate_size=config.intermediate_size,
<add> intermediate_act_fn=get_activation(config.hidden_act),
<add> hidden_dropout_prob=config.hidden_dropout_prob,
<add> attention_probs_dropout_prob=config.attention_probs_dropout_prob,
<add> initializer_range=config.initializer_range,
<add> do_return_all_layers=True)
<add>
<add> self.sequence_output = self.all_encoder_layers[-1]
<add> # The "pooler" converts the encoded sequence tensor of shape
<add> # [batch_size, seq_length, hidden_size] to a tensor of shape
<add> # [batch_size, hidden_size]. This is necessary for segment-level
<add> # (or segment-pair-level) classification tasks where we need a fixed
<add> # dimensional representation of the segment.
<add> with tf.variable_scope("pooler"):
<add> # We "pool" the model by simply taking the hidden state corresponding
<add> # to the first token. We assume that this has been pre-trained
<add> first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1)
<add> self.pooled_output = tf.layers.dense(
<add> first_token_tensor,
<add> config.hidden_size,
<add> activation=tf.tanh,
<add> kernel_initializer=create_initializer(config.initializer_range))
<add>
<add> def get_pooled_output(self):
<add> return self.pooled_output
<add>
<add> def get_sequence_output(self):
<add> """Gets final hidden layer of encoder.
<add>
<add> Returns:
<add> float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
<add>      to the final hidden layer of the transformer encoder.
<add> """
<add> return self.sequence_output
<add>
<add> def get_all_encoder_layers(self):
<add> return self.all_encoder_layers
<add>
<add> def get_embedding_output(self):
<add> """Gets output of the embedding lookup (i.e., input to the transformer).
<add>
<add> Returns:
<add> float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
<add> to the output of the embedding layer, after summing the word
<add> embeddings with the positional embeddings and the token type embeddings,
<add> then performing layer normalization. This is the input to the transformer.
<add> """
<add> return self.embedding_output
<add>
<add> def get_embedding_table(self):
<add> return self.embedding_table
<ide>
<del> if input_mask is None:
<del> input_mask = tf.ones(shape=[batch_size, seq_length], dtype=tf.int32)
<del>
<del> if token_type_ids is None:
<del> token_type_ids = tf.zeros(shape=[batch_size, seq_length], dtype=tf.int32)
<del>
<del> with tf.variable_scope("bert", scope):
<del> with tf.variable_scope("embeddings"):
<del> # Perform embedding lookup on the word ids.
<del> (self.embedding_output, self.embedding_table) = embedding_lookup(
<del> input_ids=input_ids,
<del> vocab_size=config.vocab_size,
<del> embedding_size=config.hidden_size,
<del> initializer_range=config.initializer_range,
<del> word_embedding_name="word_embeddings",
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<del>
<del> # Add positional embeddings and token type embeddings, then layer
<del> # normalize and perform dropout.
<del> self.embedding_output = embedding_postprocessor(
<del> input_tensor=self.embedding_output,
<del> use_token_type=True,
<del> token_type_ids=token_type_ids,
<del> token_type_vocab_size=config.type_vocab_size,
<del> token_type_embedding_name="token_type_embeddings",
<del> use_position_embeddings=True,
<del> position_embedding_name="position_embeddings",
<del> initializer_range=config.initializer_range,
<del> max_position_embeddings=config.max_position_embeddings,
<del> dropout_prob=config.hidden_dropout_prob)
<del>
<del> with tf.variable_scope("encoder"):
<del> # This converts a 2D mask of shape [batch_size, seq_length] to a 3D
<del> # mask of shape [batch_size, seq_length, seq_length] which is used
<del> # for the attention scores.
<del> attention_mask = create_attention_mask_from_input_mask(
<del> input_ids, input_mask)
<del>
<del> # Run the stacked transformer.
<del> # `sequence_output` shape = [batch_size, seq_length, hidden_size].
<del> self.all_encoder_layers = transformer_model(
<del> input_tensor=self.embedding_output,
<del> attention_mask=attention_mask,
<del> hidden_size=config.hidden_size,
<del> num_hidden_layers=config.num_hidden_layers,
<del> num_attention_heads=config.num_attention_heads,
<del> intermediate_size=config.intermediate_size,
<del> intermediate_act_fn=get_activation(config.hidden_act),
<del> hidden_dropout_prob=config.hidden_dropout_prob,
<del> attention_probs_dropout_prob=config.attention_probs_dropout_prob,
<del> initializer_range=config.initializer_range,
<del> do_return_all_layers=True)
<del>
<del> self.sequence_output = self.all_encoder_layers[-1]
<del> # The "pooler" converts the encoded sequence tensor of shape
<del> # [batch_size, seq_length, hidden_size] to a tensor of shape
<del> # [batch_size, hidden_size]. This is necessary for segment-level
<del> # (or segment-pair-level) classification tasks where we need a fixed
<del> # dimensional representation of the segment.
<del> with tf.variable_scope("pooler"):
<del> # We "pool" the model by simply taking the hidden state corresponding
<del> # to the first token. We assume that this has been pre-trained
<del> first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1)
<del> self.pooled_output = tf.layers.dense(
<del> first_token_tensor,
<del> config.hidden_size,
<del> activation=tf.tanh,
<del> kernel_initializer=create_initializer(config.initializer_range))
<del>
<del> def get_pooled_output(self):
<del> return self.pooled_output
<del>
<del> def get_sequence_output(self):
<del> """Gets final hidden layer of encoder.
<ide>
<del> Returns:
<del> float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
<del> to the final hidden of the transformer encoder.
<del> """
<del> return self.sequence_output
<add>def gelu(input_tensor):
<add> """Gaussian Error Linear Unit.
<ide>
<del> def get_all_encoder_layers(self):
<del> return self.all_encoder_layers
<add> This is a smoother version of the RELU.
<add> Original paper: https://arxiv.org/abs/1606.08415
<ide>
<del> def get_embedding_output(self):
<del> """Gets output of the embedding lookup (i.e., input to the transformer).
<add> Args:
<add> input_tensor: float Tensor to perform activation.
<ide>
<ide> Returns:
<del> float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
<del> to the output of the embedding layer, after summing the word
<del> embeddings with the positional embeddings and the token type embeddings,
<del> then performing layer normalization. This is the input to the transformer.
<add> `input_tensor` with the GELU activation applied.
<ide> """
<del> return self.embedding_output
<del>
<del> def get_embedding_table(self):
<del> return self.embedding_table
<add> cdf = 0.5 * (1.0 + tf.erf(input_tensor / tf.sqrt(2.0)))
<add> return input_tensor * cdf
<ide>
<ide>
<del>def gelu(input_tensor):
<del> """Gaussian Error Linear Unit.
<del>
<del> This is a smoother version of the RELU.
<del> Original paper: https://arxiv.org/abs/1606.08415
<add>def get_activation(activation_string):
<add> """Maps a string to a Python function, e.g., "relu" => `tf.nn.relu`.
<ide>
<del> Args:
<del> input_tensor: float Tensor to perform activation.
<add> Args:
<add> activation_string: String name of the activation function.
<ide>
<del> Returns:
<del> `input_tensor` with the GELU activation applied.
<del> """
<del> cdf = 0.5 * (1.0 + tf.erf(input_tensor / tf.sqrt(2.0)))
<del> return input_tensor * cdf
<add> Returns:
<add> A Python function corresponding to the activation function. If
<add> `activation_string` is None, empty, or "linear", this will return None.
<add> If `activation_string` is not a string, it will return `activation_string`.
<ide>
<add> Raises:
<add> ValueError: The `activation_string` does not correspond to a known
<add> activation.
<add> """
<ide>
<del>def get_activation(activation_string):
<del> """Maps a string to a Python function, e.g., "relu" => `tf.nn.relu`.
<del>
<del> Args:
<del> activation_string: String name of the activation function.
<del>
<del> Returns:
<del> A Python function corresponding to the activation function. If
<del> `activation_string` is None, empty, or "linear", this will return None.
<del> If `activation_string` is not a string, it will return `activation_string`.
<del>
<del> Raises:
<del> ValueError: The `activation_string` does not correspond to a known
<del> activation.
<del> """
<del>
<del> # We assume that anything that"s not a string is already an activation
<del> # function, so we just return it.
<del> if not isinstance(activation_string, six.string_types):
<del> return activation_string
<del>
<del> if not activation_string:
<del> return None
<del>
<del> act = activation_string.lower()
<del> if act == "linear":
<del> return None
<del> elif act == "relu":
<del> return tf.nn.relu
<del> elif act == "gelu":
<del> return gelu
<del> elif act == "tanh":
<del> return tf.tanh
<del> else:
<del> raise ValueError("Unsupported activation: %s" % act)
<add>  # We assume that anything that's not a string is already an activation
<add> # function, so we just return it.
<add> if not isinstance(activation_string, six.string_types):
<add> return activation_string
<add>
<add> if not activation_string:
<add> return None
<add>
<add> act = activation_string.lower()
<add> if act == "linear":
<add> return None
<add> elif act == "relu":
<add> return tf.nn.relu
<add> elif act == "gelu":
<add> return gelu
<add> elif act == "tanh":
<add> return tf.tanh
<add> else:
<add> raise ValueError("Unsupported activation: %s" % act)
<ide>
<ide>
<ide> def get_assigment_map_from_checkpoint(tvars, init_checkpoint):
<del> """Compute the union of the current variables and checkpoint variables."""
<del> assignment_map = {}
<del> initialized_variable_names = {}
<add> """Compute the union of the current variables and checkpoint variables."""
<add> assignment_map = {}
<add> initialized_variable_names = {}
<ide>
<del> name_to_variable = collections.OrderedDict()
<del> for var in tvars:
<del> name = var.name
<del> m = re.match("^(.*):\\d+$", name)
<del> if m is not None:
<del> name = m.group(1)
<del> name_to_variable[name] = var
<add> name_to_variable = collections.OrderedDict()
<add> for var in tvars:
<add> name = var.name
<add> m = re.match("^(.*):\\d+$", name)
<add> if m is not None:
<add> name = m.group(1)
<add> name_to_variable[name] = var
<ide>
<del> init_vars = tf.train.list_variables(init_checkpoint)
<add> init_vars = tf.train.list_variables(init_checkpoint)
<ide>
<del> assignment_map = collections.OrderedDict()
<del> for x in init_vars:
<del> (name, var) = (x[0], x[1])
<del> if name not in name_to_variable:
<del> continue
<del> assignment_map[name] = name
<del> initialized_variable_names[name] = 1
<del> initialized_variable_names[name + ":0"] = 1
<add> assignment_map = collections.OrderedDict()
<add> for x in init_vars:
<add> (name, var) = (x[0], x[1])
<add> if name not in name_to_variable:
<add> continue
<add> assignment_map[name] = name
<add> initialized_variable_names[name] = 1
<add> initialized_variable_names[name + ":0"] = 1
<ide>
<del> return (assignment_map, initialized_variable_names)
<add> return (assignment_map, initialized_variable_names)
<ide>
<ide>
<ide> def dropout(input_tensor, dropout_prob):
<del> """Perform dropout.
<add> """Perform dropout.
<ide>
<del> Args:
<del> input_tensor: float Tensor.
<del> dropout_prob: Python float. The probabiltiy of dropping out a value (NOT of
<del> *keeping* a dimension as in `tf.nn.dropout`).
<add> Args:
<add> input_tensor: float Tensor.
<add>    dropout_prob: Python float. The probability of dropping out a value (NOT of
<add> *keeping* a dimension as in `tf.nn.dropout`).
<ide>
<del> Returns:
<del> A version of `input_tensor` with dropout applied.
<del> """
<del> if dropout_prob is None or dropout_prob == 0.0:
<del> return input_tensor
<add> Returns:
<add> A version of `input_tensor` with dropout applied.
<add> """
<add> if dropout_prob is None or dropout_prob == 0.0:
<add> return input_tensor
<ide>
<del> output = tf.nn.dropout(input_tensor, 1.0 - dropout_prob)
<del> return output
<add> output = tf.nn.dropout(input_tensor, 1.0 - dropout_prob)
<add> return output
<ide>
<ide>
<ide> def layer_norm(input_tensor, name=None):
<del> """Run layer normalization on the last dimension of the tensor."""
<del> return tf.contrib.layers.layer_norm(
<del> inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1, scope=name)
<add> """Run layer normalization on the last dimension of the tensor."""
<add> return tf.contrib.layers.layer_norm(
<add> inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1, scope=name)
<ide>
<ide>
<ide> def layer_norm_and_dropout(input_tensor, dropout_prob, name=None):
<del> """Runs layer normalization followed by dropout."""
<del> output_tensor = layer_norm(input_tensor, name)
<del> output_tensor = dropout(output_tensor, dropout_prob)
<del> return output_tensor
<add> """Runs layer normalization followed by dropout."""
<add> output_tensor = layer_norm(input_tensor, name)
<add> output_tensor = dropout(output_tensor, dropout_prob)
<add> return output_tensor
<ide>
<ide>
<ide> def create_initializer(initializer_range=0.02):
<del> """Creates a `truncated_normal_initializer` with the given range."""
<del> return tf.truncated_normal_initializer(stddev=initializer_range)
<add> """Creates a `truncated_normal_initializer` with the given range."""
<add> return tf.truncated_normal_initializer(stddev=initializer_range)
<ide>
<ide>
<ide> def embedding_lookup(input_ids,
<ide> def embedding_lookup(input_ids,
<ide> initializer_range=0.02,
<ide> word_embedding_name="word_embeddings",
<ide> use_one_hot_embeddings=False):
<del> """Looks up words embeddings for id tensor.
<del>
<del> Args:
<del> input_ids: int32 Tensor of shape [batch_size, seq_length] containing word
<del> ids.
<del> vocab_size: int. Size of the embedding vocabulary.
<del> embedding_size: int. Width of the word embeddings.
<del> initializer_range: float. Embedding initialization range.
<del> word_embedding_name: string. Name of the embedding table.
<del> use_one_hot_embeddings: bool. If True, use one-hot method for word
<del> embeddings. If False, use `tf.nn.embedding_lookup()`. One hot is better
<del> for TPUs.
<del>
<del> Returns:
<del> float Tensor of shape [batch_size, seq_length, embedding_size].
<del> """
<del> # This function assumes that the input is of shape [batch_size, seq_length,
<del> # num_inputs].
<del> #
<del> # If the input is a 2D tensor of shape [batch_size, seq_length], we
<del> # reshape to [batch_size, seq_length, 1].
<del> if input_ids.shape.ndims == 2:
<del> input_ids = tf.expand_dims(input_ids, axis=[-1])
<del>
<del> embedding_table = tf.get_variable(
<del> name=word_embedding_name,
<del> shape=[vocab_size, embedding_size],
<del> initializer=create_initializer(initializer_range))
<del>
<del> if use_one_hot_embeddings:
<del> flat_input_ids = tf.reshape(input_ids, [-1])
<del> one_hot_input_ids = tf.one_hot(flat_input_ids, depth=vocab_size)
<del> output = tf.matmul(one_hot_input_ids, embedding_table)
<del> else:
<del> output = tf.nn.embedding_lookup(embedding_table, input_ids)
<del>
<del> input_shape = get_shape_list(input_ids)
<del>
<del> output = tf.reshape(output,
<del> input_shape[0:-1] + [input_shape[-1] * embedding_size])
<del> return (output, embedding_table)
<add>  """Looks up word embeddings for id tensor.
<add>
<add> Args:
<add> input_ids: int32 Tensor of shape [batch_size, seq_length] containing word
<add> ids.
<add> vocab_size: int. Size of the embedding vocabulary.
<add> embedding_size: int. Width of the word embeddings.
<add> initializer_range: float. Embedding initialization range.
<add> word_embedding_name: string. Name of the embedding table.
<add> use_one_hot_embeddings: bool. If True, use one-hot method for word
<add> embeddings. If False, use `tf.nn.embedding_lookup()`. One hot is better
<add> for TPUs.
<add>
<add> Returns:
<add> float Tensor of shape [batch_size, seq_length, embedding_size].
<add> """
<add> # This function assumes that the input is of shape [batch_size, seq_length,
<add> # num_inputs].
<add> #
<add> # If the input is a 2D tensor of shape [batch_size, seq_length], we
<add> # reshape to [batch_size, seq_length, 1].
<add> if input_ids.shape.ndims == 2:
<add> input_ids = tf.expand_dims(input_ids, axis=[-1])
<add>
<add> embedding_table = tf.get_variable(
<add> name=word_embedding_name,
<add> shape=[vocab_size, embedding_size],
<add> initializer=create_initializer(initializer_range))
<add>
<add> if use_one_hot_embeddings:
<add> flat_input_ids = tf.reshape(input_ids, [-1])
<add> one_hot_input_ids = tf.one_hot(flat_input_ids, depth=vocab_size)
<add> output = tf.matmul(one_hot_input_ids, embedding_table)
<add> else:
<add> output = tf.nn.embedding_lookup(embedding_table, input_ids)
<add>
<add> input_shape = get_shape_list(input_ids)
<add>
<add> output = tf.reshape(output,
<add> input_shape[0:-1] + [input_shape[-1] * embedding_size])
<add> return (output, embedding_table)
<ide>
<ide>
<ide> def embedding_postprocessor(input_tensor,
<ide> def embedding_postprocessor(input_tensor,
<ide> initializer_range=0.02,
<ide> max_position_embeddings=512,
<ide> dropout_prob=0.1):
<del> """Performs various post-processing on a word embedding tensor.
<del>
<del> Args:
<del> input_tensor: float Tensor of shape [batch_size, seq_length,
<del> embedding_size].
<del> use_token_type: bool. Whether to add embeddings for `token_type_ids`.
<del> token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
<del> Must be specified if `use_token_type` is True.
<del> token_type_vocab_size: int. The vocabulary size of `token_type_ids`.
<del> token_type_embedding_name: string. The name of the embedding table variable
<del> for token type ids.
<del> use_position_embeddings: bool. Whether to add position embeddings for the
<del> position of each token in the sequence.
<del> position_embedding_name: string. The name of the embedding table variable
<del> for positional embeddings.
<del> initializer_range: float. Range of the weight initialization.
<del> max_position_embeddings: int. Maximum sequence length that might ever be
<del> used with this model. This can be longer than the sequence length of
<del> input_tensor, but cannot be shorter.
<del> dropout_prob: float. Dropout probability applied to the final output tensor.
<del>
<del> Returns:
<del> float tensor with same shape as `input_tensor`.
<del>
<del> Raises:
<del> ValueError: One of the tensor shapes or input values is invalid.
<del> """
<del> input_shape = get_shape_list(input_tensor, expected_rank=3)
<del> batch_size = input_shape[0]
<del> seq_length = input_shape[1]
<del> width = input_shape[2]
<del>
<del> if seq_length > max_position_embeddings:
<del> raise ValueError("The seq length (%d) cannot be greater than "
<del> "`max_position_embeddings` (%d)" %
<del> (seq_length, max_position_embeddings))
<del>
<del> output = input_tensor
<del>
<del> if use_token_type:
<del> if token_type_ids is None:
<del> raise ValueError("`token_type_ids` must be specified if"
<del> "`use_token_type` is True.")
<del> token_type_table = tf.get_variable(
<del> name=token_type_embedding_name,
<del> shape=[token_type_vocab_size, width],
<del> initializer=create_initializer(initializer_range))
<del> # This vocab will be small so we always do one-hot here, since it is always
<del> # faster for a small vocabulary.
<del> flat_token_type_ids = tf.reshape(token_type_ids, [-1])
<del> one_hot_ids = tf.one_hot(flat_token_type_ids, depth=token_type_vocab_size)
<del> token_type_embeddings = tf.matmul(one_hot_ids, token_type_table)
<del> token_type_embeddings = tf.reshape(token_type_embeddings,
<del> [batch_size, seq_length, width])
<del> output += token_type_embeddings
<del>
<del> if use_position_embeddings:
<del> full_position_embeddings = tf.get_variable(
<del> name=position_embedding_name,
<del> shape=[max_position_embeddings, width],
<del> initializer=create_initializer(initializer_range))
<del> # Since the position embedding table is a learned variable, we create it
<del> # using a (long) sequence length `max_position_embeddings`. The actual
<del> # sequence length might be shorter than this, for faster training of
<del> # tasks that do not have long sequences.
<del> #
<del> # So `full_position_embeddings` is effectively an embedding table
<del> # for position [0, 1, 2, ..., max_position_embeddings-1], and the current
<del> # sequence has positions [0, 1, 2, ... seq_length-1], so we can just
<del> # perform a slice.
<del> if seq_length < max_position_embeddings:
<del> position_embeddings = tf.slice(full_position_embeddings, [0, 0],
<del> [seq_length, -1])
<del> else:
<del> position_embeddings = full_position_embeddings
<add> """Performs various post-processing on a word embedding tensor.
<ide>
<del> num_dims = len(output.shape.as_list())
<add> Args:
<add> input_tensor: float Tensor of shape [batch_size, seq_length,
<add> embedding_size].
<add> use_token_type: bool. Whether to add embeddings for `token_type_ids`.
<add> token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
<add> Must be specified if `use_token_type` is True.
<add> token_type_vocab_size: int. The vocabulary size of `token_type_ids`.
<add> token_type_embedding_name: string. The name of the embedding table variable
<add> for token type ids.
<add> use_position_embeddings: bool. Whether to add position embeddings for the
<add> position of each token in the sequence.
<add> position_embedding_name: string. The name of the embedding table variable
<add> for positional embeddings.
<add> initializer_range: float. Range of the weight initialization.
<add> max_position_embeddings: int. Maximum sequence length that might ever be
<add> used with this model. This can be longer than the sequence length of
<add> input_tensor, but cannot be shorter.
<add> dropout_prob: float. Dropout probability applied to the final output tensor.
<ide>
<del> # Only the last two dimensions are relevant (`seq_length` and `width`), so
<del> # we broadcast among the first dimensions, which is typically just
<del> # the batch size.
<del> position_broadcast_shape = []
<del> for _ in range(num_dims - 2):
<del> position_broadcast_shape.append(1)
<del> position_broadcast_shape.extend([seq_length, width])
<del> position_embeddings = tf.reshape(position_embeddings,
<del> position_broadcast_shape)
<del> output += position_embeddings
<add> Returns:
<add> float tensor with same shape as `input_tensor`.
<ide>
<del> output = layer_norm_and_dropout(output, dropout_prob)
<del> return output
<add> Raises:
<add> ValueError: One of the tensor shapes or input values is invalid.
<add> """
<add> input_shape = get_shape_list(input_tensor, expected_rank=3)
<add> batch_size = input_shape[0]
<add> seq_length = input_shape[1]
<add> width = input_shape[2]
<add>
<add> if seq_length > max_position_embeddings:
<add> raise ValueError("The seq length (%d) cannot be greater than "
<add> "`max_position_embeddings` (%d)" %
<add> (seq_length, max_position_embeddings))
<add>
<add> output = input_tensor
<add>
<add> if use_token_type:
<add> if token_type_ids is None:
<add> raise ValueError("`token_type_ids` must be specified if "
<add> "`use_token_type` is True.")
<add> token_type_table = tf.get_variable(
<add> name=token_type_embedding_name,
<add> shape=[token_type_vocab_size, width],
<add> initializer=create_initializer(initializer_range))
<add> # This vocab will be small so we always do one-hot here, since it is always
<add> # faster for a small vocabulary.
<add> flat_token_type_ids = tf.reshape(token_type_ids, [-1])
<add> one_hot_ids = tf.one_hot(flat_token_type_ids, depth=token_type_vocab_size)
<add> token_type_embeddings = tf.matmul(one_hot_ids, token_type_table)
<add> token_type_embeddings = tf.reshape(token_type_embeddings,
<add> [batch_size, seq_length, width])
<add> output += token_type_embeddings
<add>
<add> if use_position_embeddings:
<add> full_position_embeddings = tf.get_variable(
<add> name=position_embedding_name,
<add> shape=[max_position_embeddings, width],
<add> initializer=create_initializer(initializer_range))
<add> # Since the position embedding table is a learned variable, we create it
<add> # using a (long) sequence length `max_position_embeddings`. The actual
<add> # sequence length might be shorter than this, for faster training of
<add> # tasks that do not have long sequences.
<add> #
<add> # So `full_position_embeddings` is effectively an embedding table
<add> # for position [0, 1, 2, ..., max_position_embeddings-1], and the current
<add> # sequence has positions [0, 1, 2, ... seq_length-1], so we can just
<add> # perform a slice.
<add> if seq_length < max_position_embeddings:
<add> position_embeddings = tf.slice(full_position_embeddings, [0, 0],
<add> [seq_length, -1])
<add> else:
<add> position_embeddings = full_position_embeddings
<add>
<add> num_dims = len(output.shape.as_list())
<add>
<add> # Only the last two dimensions are relevant (`seq_length` and `width`), so
<add> # we broadcast among the first dimensions, which is typically just
<add> # the batch size.
<add> position_broadcast_shape = []
<add> for _ in range(num_dims - 2):
<add> position_broadcast_shape.append(1)
<add> position_broadcast_shape.extend([seq_length, width])
<add> position_embeddings = tf.reshape(position_embeddings,
<add> position_broadcast_shape)
<add> output += position_embeddings
<add>
<add> output = layer_norm_and_dropout(output, dropout_prob)
<add> return output
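The position-embedding broadcast above can be sketched in NumPy (an illustration only, with toy shapes; the real code operates on TF tensors): slice the first `seq_length` rows of the learned `[max_position_embeddings, width]` table, reshape to `[1, seq_length, width]`, and let broadcasting add the same positions to every batch element.

```python
import numpy as np

batch_size, seq_length, width = 2, 5, 3
max_position_embeddings = 16
rng = np.random.default_rng(1)
token_embeddings = rng.normal(size=(batch_size, seq_length, width))
full_position_embeddings = rng.normal(size=(max_position_embeddings, width))

# Slice the (longer) learned table down to the actual sequence length.
position_embeddings = full_position_embeddings[:seq_length]
# Prepend a broadcast dimension for the batch.
position_embeddings = position_embeddings.reshape(1, seq_length, width)
output = token_embeddings + position_embeddings     # broadcast over batch

assert output.shape == (batch_size, seq_length, width)
```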
<ide>
<ide>
<ide> def create_attention_mask_from_input_mask(from_tensor, to_mask):
<del> """Create 3D attention mask from a 2D tensor mask.
<add> """Create 3D attention mask from a 2D tensor mask.
<ide>
<del> Args:
<del> from_tensor: 2D or 3D Tensor of shape [batch_size, from_seq_length, ...].
<del> to_mask: int32 Tensor of shape [batch_size, to_seq_length].
<add> Args:
<add> from_tensor: 2D or 3D Tensor of shape [batch_size, from_seq_length, ...].
<add> to_mask: int32 Tensor of shape [batch_size, to_seq_length].
<ide>
<del> Returns:
<del> float Tensor of shape [batch_size, from_seq_length, to_seq_length].
<del> """
<del> from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
<del> batch_size = from_shape[0]
<del> from_seq_length = from_shape[1]
<add> Returns:
<add> float Tensor of shape [batch_size, from_seq_length, to_seq_length].
<add> """
<add> from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
<add> batch_size = from_shape[0]
<add> from_seq_length = from_shape[1]
<ide>
<del> to_shape = get_shape_list(to_mask, expected_rank=2)
<del> to_seq_length = to_shape[1]
<add> to_shape = get_shape_list(to_mask, expected_rank=2)
<add> to_seq_length = to_shape[1]
<ide>
<del> to_mask = tf.cast(
<del> tf.reshape(to_mask, [batch_size, 1, to_seq_length]), tf.float32)
<add> to_mask = tf.cast(
<add> tf.reshape(to_mask, [batch_size, 1, to_seq_length]), tf.float32)
<ide>
<del> # We don't assume that `from_tensor` is a mask (although it could be). We
<del> # don't actually care if we attend *from* padding tokens (only *to* padding)
<del> # tokens so we create a tensor of all ones.
<del> #
<del> # `broadcast_ones` = [batch_size, from_seq_length, 1]
<del> broadcast_ones = tf.ones(
<del> shape=[batch_size, from_seq_length, 1], dtype=tf.float32)
<add> # We don't assume that `from_tensor` is a mask (although it could be). We
<add> # don't actually care if we attend *from* padding tokens (only *to* padding)
<add> # tokens so we create a tensor of all ones.
<add> #
<add> # `broadcast_ones` = [batch_size, from_seq_length, 1]
<add> broadcast_ones = tf.ones(
<add> shape=[batch_size, from_seq_length, 1], dtype=tf.float32)
<ide>
<del> # Here we broadcast along two dimensions to create the mask.
<del> mask = broadcast_ones * to_mask
<add> # Here we broadcast along two dimensions to create the mask.
<add> mask = broadcast_ones * to_mask
<ide>
<del> return mask
<add> return mask
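The same broadcast trick, sketched in NumPy with a toy padding mask (illustrative values, not the TF code): a `[batch, to_seq]` mask becomes a `[batch, from_seq, to_seq]` attention mask by multiplying a column of ones with a row-shaped mask.

```python
import numpy as np

to_mask = np.array([[1, 1, 0],
                    [1, 0, 0]], dtype=np.float32)    # [batch, to_seq]; 0 = padding
batch_size, to_seq_length = to_mask.shape
from_seq_length = 3

# [batch, 1, to_seq] * [batch, from_seq, 1] -> [batch, from_seq, to_seq]
to_mask_3d = to_mask.reshape(batch_size, 1, to_seq_length)
broadcast_ones = np.ones((batch_size, from_seq_length, 1), dtype=np.float32)
mask = broadcast_ones * to_mask_3d

assert mask.shape == (batch_size, from_seq_length, to_seq_length)
```

Every "from" position gets the same row: in example 0 all positions may attend to tokens 0 and 1 but not the padded token 2.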
<ide>
<ide>
<ide> def attention_layer(from_tensor,
<ide> def attention_layer(from_tensor,
<ide> batch_size=None,
<ide> from_seq_length=None,
<ide> to_seq_length=None):
<del> """Performs multi-headed attention from `from_tensor` to `to_tensor`.
<del>
<del> This is an implementation of multi-headed attention based on "Attention
<del> is all you Need". If `from_tensor` and `to_tensor` are the same, then
<del> this is self-attention. Each timestep in `from_tensor` attends to the
<del> corresponding sequence in `to_tensor`, and returns a fixed-with vector.
<del>
<del> This function first projects `from_tensor` into a "query" tensor and
<del> `to_tensor` into "key" and "value" tensors. These are (effectively) a list
<del> of tensors of length `num_attention_heads`, where each tensor is of shape
<del> [batch_size, seq_length, size_per_head].
<del>
<del> Then, the query and key tensors are dot-producted and scaled. These are
<del> softmaxed to obtain attention probabilities. The value tensors are then
<del> interpolated by these probabilities, then concatenated back to a single
<del> tensor and returned.
<del>
<del> In practice, the multi-headed attention are done with transposes and
<del> reshapes rather than actual separate tensors.
<del>
<del> Args:
<del> from_tensor: float Tensor of shape [batch_size, from_seq_length,
<del> from_width].
<del> to_tensor: float Tensor of shape [batch_size, to_seq_length, to_width].
<del> attention_mask: (optional) int32 Tensor of shape [batch_size,
<del> from_seq_length, to_seq_length]. The values should be 1 or 0. The
<del> attention scores will effectively be set to -infinity for any positions in
<del> the mask that are 0, and will be unchaged for positions that are 1.
<del> num_attention_heads: int. Number of attention heads.
<del> size_per_head: int. Size of each attention head.
<del> query_act: (optional) Activation function for the query transform.
<del> key_act: (optional) Activation function for the key transform.
<del> value_act: (optional) Activation function for the value transform.
<del> attention_probs_dropout_prob:
<del> initializer_range: float. Range of the weight initializer.
<del> do_return_2d_tensor: bool. If True, the output will be of shape [batch_size
<del> * from_seq_length, num_attention_heads * size_per_head]. If False, the
<del> output will be of shape [batch_size, from_seq_length, num_attention_heads
<del> * size_per_head].
<del> batch_size: (Optional) int. If the input is 2D, this might be the batch size
<del> of the 3D version of the `from_tensor` and `to_tensor`.
<del> from_seq_length: (Optional) If the input is 2D, this might be the seq length
<del> of the 3D version of the `from_tensor`.
<del> to_seq_length: (Optional) If the input is 2D, this might be the seq length
<del> of the 3D version of the `to_tensor`.
<del>
<del> Returns:
<del> float Tensor of shape [batch_size, from_seq_length,
<del> num_attention_heads * size_per_head]. (If `do_return_2d_tensor` is
<del> true, this will be of shape [batch_size * from_seq_length,
<del> num_attention_heads * size_per_head]).
<del>
<del> Raises:
<del> ValueError: Any of the arguments or tensor shapes are invalid.
<del> """
<del>
<del> def transpose_for_scores(input_tensor, batch_size, num_attention_heads,
<del> seq_length, width):
<del> output_tensor = tf.reshape(
<del> input_tensor, [batch_size, seq_length, num_attention_heads, width])
<del>
<del> output_tensor = tf.transpose(output_tensor, [0, 2, 1, 3])
<del> return output_tensor
<add> """Performs multi-headed attention from `from_tensor` to `to_tensor`.
<ide>
<del> from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
<del> to_shape = get_shape_list(to_tensor, expected_rank=[2, 3])
<add> This is an implementation of multi-headed attention based on "Attention
<add> Is All You Need". If `from_tensor` and `to_tensor` are the same, then
<add> this is self-attention. Each timestep in `from_tensor` attends to the
<add> corresponding sequence in `to_tensor`, and returns a fixed-width vector.
<ide>
<del> if len(from_shape) != len(to_shape):
<del> raise ValueError(
<del> "The rank of `from_tensor` must match the rank of `to_tensor`.")
<add> This function first projects `from_tensor` into a "query" tensor and
<add> `to_tensor` into "key" and "value" tensors. These are (effectively) a list
<add> of tensors of length `num_attention_heads`, where each tensor is of shape
<add> [batch_size, seq_length, size_per_head].
<ide>
<del> if len(from_shape) == 3:
<del> batch_size = from_shape[0]
<del> from_seq_length = from_shape[1]
<del> to_seq_length = to_shape[1]
<del> elif len(from_shape) == 2:
<del> if (batch_size is None or from_seq_length is None or to_seq_length is None):
<del> raise ValueError(
<del> "When passing in rank 2 tensors to attention_layer, the values "
<del> "for `batch_size`, `from_seq_length`, and `to_seq_length` "
<del> "must all be specified.")
<del>
<del> # Scalar dimensions referenced here:
<del> # B = batch size (number of sequences)
<del> # F = `from_tensor` sequence length
<del> # T = `to_tensor` sequence length
<del> # N = `num_attention_heads`
<del> # H = `size_per_head`
<del>
<del> from_tensor_2d = reshape_to_matrix(from_tensor)
<del> to_tensor_2d = reshape_to_matrix(to_tensor)
<del>
<del> # `query_layer` = [B*F, N*H]
<del> query_layer = tf.layers.dense(
<del> from_tensor_2d,
<del> num_attention_heads * size_per_head,
<del> activation=query_act,
<del> name="query",
<del> kernel_initializer=create_initializer(initializer_range))
<del>
<del> # `key_layer` = [B*T, N*H]
<del> key_layer = tf.layers.dense(
<del> to_tensor_2d,
<del> num_attention_heads * size_per_head,
<del> activation=key_act,
<del> name="key",
<del> kernel_initializer=create_initializer(initializer_range))
<del>
<del> # `value_layer` = [B*T, N*H]
<del> value_layer = tf.layers.dense(
<del> to_tensor_2d,
<del> num_attention_heads * size_per_head,
<del> activation=value_act,
<del> name="value",
<del> kernel_initializer=create_initializer(initializer_range))
<del>
<del> # `query_layer` = [B, N, F, H]
<del> query_layer = transpose_for_scores(query_layer, batch_size,
<del> num_attention_heads, from_seq_length,
<del> size_per_head)
<del>
<del> # `key_layer` = [B, N, T, H]
<del> key_layer = transpose_for_scores(key_layer, batch_size, num_attention_heads,
<del> to_seq_length, size_per_head)
<del>
<del> # Take the dot product between "query" and "key" to get the raw
<del> # attention scores.
<del> # `attention_scores` = [B, N, F, T]
<del> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
<del> attention_scores = tf.multiply(attention_scores,
<del> 1.0 / math.sqrt(float(size_per_head)))
<del>
<del> if attention_mask is not None:
<del> # `attention_mask` = [B, 1, F, T]
<del> attention_mask = tf.expand_dims(attention_mask, axis=[1])
<del>
<del> # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
<del> # masked positions, this operation will create a tensor which is 0.0 for
<del> # positions we want to attend and -10000.0 for masked positions.
<del> adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0
<del>
<del> # Since we are adding it to the raw scores before the softmax, this is
<del> # effectively the same as removing these entirely.
<del> attention_scores += adder
<del>
<del> # Normalize the attention scores to probabilities.
<del> # `attention_probs` = [B, N, F, T]
<del> attention_probs = tf.nn.softmax(attention_scores)
<del>
<del> # This is actually dropping out entire tokens to attend to, which might
<del> # seem a bit unusual, but is taken from the original Transformer paper.
<del> attention_probs = dropout(attention_probs, attention_probs_dropout_prob)
<del>
<del> # `value_layer` = [B, T, N, H]
<del> value_layer = tf.reshape(
<del> value_layer,
<del> [batch_size, to_seq_length, num_attention_heads, size_per_head])
<del>
<del> # `value_layer` = [B, N, T, H]
<del> value_layer = tf.transpose(value_layer, [0, 2, 1, 3])
<del>
<del> # `context_layer` = [B, N, F, H]
<del> context_layer = tf.matmul(attention_probs, value_layer)
<del>
<del> # `context_layer` = [B, F, N, H]
<del> context_layer = tf.transpose(context_layer, [0, 2, 1, 3])
<del>
<del> if do_return_2d_tensor:
<del> # `context_layer` = [B*F, N*V]
<del> context_layer = tf.reshape(
<del> context_layer,
<del> [batch_size * from_seq_length, num_attention_heads * size_per_head])
<del> else:
<del> # `context_layer` = [B, F, N*V]
<del> context_layer = tf.reshape(
<del> context_layer,
<del> [batch_size, from_seq_length, num_attention_heads * size_per_head])
<del>
<del> return context_layer
<add> Then, the query and key tensors are dot-producted and scaled. These are
<add> softmaxed to obtain attention probabilities. The value tensors are then
<add> interpolated by these probabilities, then concatenated back to a single
<add> tensor and returned.
<add>
<add> In practice, the multi-headed attention is done with transposes and
<add> reshapes rather than actual separate tensors.
<add>
<add> Args:
<add> from_tensor: float Tensor of shape [batch_size, from_seq_length,
<add> from_width].
<add> to_tensor: float Tensor of shape [batch_size, to_seq_length, to_width].
<add> attention_mask: (optional) int32 Tensor of shape [batch_size,
<add> from_seq_length, to_seq_length]. The values should be 1 or 0. The
<add> attention scores will effectively be set to -infinity for any positions in
<add> the mask that are 0, and will be unchanged for positions that are 1.
<add> num_attention_heads: int. Number of attention heads.
<add> size_per_head: int. Size of each attention head.
<add> query_act: (optional) Activation function for the query transform.
<add> key_act: (optional) Activation function for the key transform.
<add> value_act: (optional) Activation function for the value transform.
<add> attention_probs_dropout_prob: float. Dropout probability of the
<add> attention probabilities.
<add> initializer_range: float. Range of the weight initializer.
<add> do_return_2d_tensor: bool. If True, the output will be of shape [batch_size
<add> * from_seq_length, num_attention_heads * size_per_head]. If False, the
<add> output will be of shape [batch_size, from_seq_length, num_attention_heads
<add> * size_per_head].
<add> batch_size: (Optional) int. If the input is 2D, this might be the batch size
<add> of the 3D version of the `from_tensor` and `to_tensor`.
<add> from_seq_length: (Optional) If the input is 2D, this might be the seq length
<add> of the 3D version of the `from_tensor`.
<add> to_seq_length: (Optional) If the input is 2D, this might be the seq length
<add> of the 3D version of the `to_tensor`.
<add>
<add> Returns:
<add> float Tensor of shape [batch_size, from_seq_length,
<add> num_attention_heads * size_per_head]. (If `do_return_2d_tensor` is
<add> true, this will be of shape [batch_size * from_seq_length,
<add> num_attention_heads * size_per_head]).
<add>
<add> Raises:
<add> ValueError: Any of the arguments or tensor shapes are invalid.
<add> """
<add>
<add> def transpose_for_scores(input_tensor, batch_size, num_attention_heads,
<add> seq_length, width):
<add> output_tensor = tf.reshape(
<add> input_tensor, [batch_size, seq_length, num_attention_heads, width])
<add>
<add> output_tensor = tf.transpose(output_tensor, [0, 2, 1, 3])
<add> return output_tensor
<add>
<add> from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
<add> to_shape = get_shape_list(to_tensor, expected_rank=[2, 3])
<add>
<add> if len(from_shape) != len(to_shape):
<add> raise ValueError(
<add> "The rank of `from_tensor` must match the rank of `to_tensor`.")
<add>
<add> if len(from_shape) == 3:
<add> batch_size = from_shape[0]
<add> from_seq_length = from_shape[1]
<add> to_seq_length = to_shape[1]
<add> elif len(from_shape) == 2:
<add> if (batch_size is None or from_seq_length is None or to_seq_length is None):
<add> raise ValueError(
<add> "When passing in rank 2 tensors to attention_layer, the values "
<add> "for `batch_size`, `from_seq_length`, and `to_seq_length` "
<add> "must all be specified.")
<add>
<add> # Scalar dimensions referenced here:
<add> # B = batch size (number of sequences)
<add> # F = `from_tensor` sequence length
<add> # T = `to_tensor` sequence length
<add> # N = `num_attention_heads`
<add> # H = `size_per_head`
<add>
<add> from_tensor_2d = reshape_to_matrix(from_tensor)
<add> to_tensor_2d = reshape_to_matrix(to_tensor)
<add>
<add> # `query_layer` = [B*F, N*H]
<add> query_layer = tf.layers.dense(
<add> from_tensor_2d,
<add> num_attention_heads * size_per_head,
<add> activation=query_act,
<add> name="query",
<add> kernel_initializer=create_initializer(initializer_range))
<add>
<add> # `key_layer` = [B*T, N*H]
<add> key_layer = tf.layers.dense(
<add> to_tensor_2d,
<add> num_attention_heads * size_per_head,
<add> activation=key_act,
<add> name="key",
<add> kernel_initializer=create_initializer(initializer_range))
<add>
<add> # `value_layer` = [B*T, N*H]
<add> value_layer = tf.layers.dense(
<add> to_tensor_2d,
<add> num_attention_heads * size_per_head,
<add> activation=value_act,
<add> name="value",
<add> kernel_initializer=create_initializer(initializer_range))
<add>
<add> # `query_layer` = [B, N, F, H]
<add> query_layer = transpose_for_scores(query_layer, batch_size,
<add> num_attention_heads, from_seq_length,
<add> size_per_head)
<add>
<add> # `key_layer` = [B, N, T, H]
<add> key_layer = transpose_for_scores(key_layer, batch_size, num_attention_heads,
<add> to_seq_length, size_per_head)
<add>
<add> # Take the dot product between "query" and "key" to get the raw
<add> # attention scores.
<add> # `attention_scores` = [B, N, F, T]
<add> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
<add> attention_scores = tf.multiply(attention_scores,
<add> 1.0 / math.sqrt(float(size_per_head)))
<add>
<add> if attention_mask is not None:
<add> # `attention_mask` = [B, 1, F, T]
<add> attention_mask = tf.expand_dims(attention_mask, axis=[1])
<add>
<add> # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
<add> # masked positions, this operation will create a tensor which is 0.0 for
<add> # positions we want to attend and -10000.0 for masked positions.
<add> adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0
<add>
<add> # Since we are adding it to the raw scores before the softmax, this is
<add> # effectively the same as removing these entirely.
<add> attention_scores += adder
<add>
<add> # Normalize the attention scores to probabilities.
<add> # `attention_probs` = [B, N, F, T]
<add> attention_probs = tf.nn.softmax(attention_scores)
<add>
<add> # This is actually dropping out entire tokens to attend to, which might
<add> # seem a bit unusual, but is taken from the original Transformer paper.
<add> attention_probs = dropout(attention_probs, attention_probs_dropout_prob)
<add>
<add> # `value_layer` = [B, T, N, H]
<add> value_layer = tf.reshape(
<add> value_layer,
<add> [batch_size, to_seq_length, num_attention_heads, size_per_head])
<add>
<add> # `value_layer` = [B, N, T, H]
<add> value_layer = tf.transpose(value_layer, [0, 2, 1, 3])
<add>
<add> # `context_layer` = [B, N, F, H]
<add> context_layer = tf.matmul(attention_probs, value_layer)
<add>
<add> # `context_layer` = [B, F, N, H]
<add> context_layer = tf.transpose(context_layer, [0, 2, 1, 3])
<add>
<add> if do_return_2d_tensor:
<add> # `context_layer` = [B*F, N*H]
<add> context_layer = tf.reshape(
<add> context_layer,
<add> [batch_size * from_seq_length, num_attention_heads * size_per_head])
<add> else:
<add> # `context_layer` = [B, F, N*H]
<add> context_layer = tf.reshape(
<add> context_layer,
<add> [batch_size, from_seq_length, num_attention_heads * size_per_head])
<add>
<add> return context_layer
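The core of the function above — scaled dot-product scores, a -10000 additive mask, softmax, then a probability-weighted sum of values — can be sketched for a single head in NumPy (an illustration under toy shapes, not the multi-head TF implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Single-head attention: q is [F, H], k and v are [T, H], mask is [T]."""
    size_per_head = q.shape[-1]
    scores = (q @ k.swapaxes(-1, -2)) / np.sqrt(size_per_head)   # [F, T]
    if mask is not None:
        # Masked keys get a large negative score, so softmax sends them to ~0.
        scores = scores + (1.0 - mask) * -10000.0
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs = probs / probs.sum(axis=-1, keepdims=True)            # softmax rows
    return probs @ v                                             # [F, H]

rng = np.random.default_rng(2)
q = rng.normal(size=(4, 8))     # [from_seq_length, size_per_head]
k = rng.normal(size=(5, 8))     # [to_seq_length, size_per_head]
v = rng.normal(size=(5, 8))
mask = np.array([1, 1, 1, 0, 0], dtype=np.float32)  # last two keys are padding

context = scaled_dot_product_attention(q, k, v, mask)
assert context.shape == (4, 8)
```

Because the masked scores sit ~10000 below the real ones, their softmax weight underflows to zero, so the padded value vectors cannot influence the output.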
<ide>
<ide>
<ide> def transformer_model(input_tensor,
<ide> def transformer_model(input_tensor,
<ide> attention_probs_dropout_prob=0.1,
<ide> initializer_range=0.02,
<ide> do_return_all_layers=False):
<del> """Multi-headed, multi-layer Transformer from "Attention is All You Need".
<del>
<del> This is almost an exact implementation of the original Transformer encoder.
<del>
<del> See the original paper:
<del> https://arxiv.org/abs/1706.03762
<del>
<del> Also see:
<del> https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py
<del>
<del> Args:
<del> input_tensor: float Tensor of shape [batch_size, seq_length, hidden_size].
<del> attention_mask: (optional) int32 Tensor of shape [batch_size, seq_length,
<del> seq_length], with 1 for positions that can be attended to and 0 in
<del> positions that should not be.
<del> hidden_size: int. Hidden size of the Transformer.
<del> num_hidden_layers: int. Number of layers (blocks) in the Transformer.
<del> num_attention_heads: int. Number of attention heads in the Transformer.
<del> intermediate_size: int. The size of the "intermediate" (a.k.a., feed
<del> forward) layer.
<del> intermediate_act_fn: function. The non-linear activation function to apply
<del> to the output of the intermediate/feed-forward layer.
<del> hidden_dropout_prob: float. Dropout probability for the hidden layers.
<del> attention_probs_dropout_prob: float. Dropout probability of the attention
<del> probabilities.
<del> initializer_range: float. Range of the initializer (stddev of truncated
<del> normal).
<del> do_return_all_layers: Whether to also return all layers or just the final
<del> layer.
<del>
<del> Returns:
<del> float Tensor of shape [batch_size, seq_length, hidden_size], the final
<del> hidden layer of the Transformer.
<del>
<del> Raises:
<del> ValueError: A Tensor shape or parameter is invalid.
<del> """
<del> if hidden_size % num_attention_heads != 0:
<del> raise ValueError(
<del> "The hidden size (%d) is not a multiple of the number of attention "
<del> "heads (%d)" % (hidden_size, num_attention_heads))
<del>
<del> attention_head_size = int(hidden_size / num_attention_heads)
<del> input_shape = get_shape_list(input_tensor, expected_rank=3)
<del> batch_size = input_shape[0]
<del> seq_length = input_shape[1]
<del> input_width = input_shape[2]
<del>
<del> # The Transformer performs sum residuals on all layers so the input needs
<del> # to be the same as the hidden size.
<del> if input_width != hidden_size:
<del> raise ValueError("The width of the input tensor (%d) != hidden size (%d)" %
<del> (input_width, hidden_size))
<del>
<del> # We keep the representation as a 2D tensor to avoid re-shaping it back and
<del> # forth from a 3D tensor to a 2D tensor. Re-shapes are normally free on
<del> # the GPU/CPU but may not be free on the TPU, so we want to minimize them to
<del> # help the optimizer.
<del> prev_output = reshape_to_matrix(input_tensor)
<del>
<del> all_layer_outputs = []
<del> for layer_idx in range(num_hidden_layers):
<del> with tf.variable_scope("layer_%d" % layer_idx):
<del> layer_input = prev_output
<del>
<del> with tf.variable_scope("attention"):
<del> attention_heads = []
<del> with tf.variable_scope("self"):
<del> attention_head = attention_layer(
<del> from_tensor=layer_input,
<del> to_tensor=layer_input,
<del> attention_mask=attention_mask,
<del> num_attention_heads=num_attention_heads,
<del> size_per_head=attention_head_size,
<del> attention_probs_dropout_prob=attention_probs_dropout_prob,
<del> initializer_range=initializer_range,
<del> do_return_2d_tensor=True,
<del> batch_size=batch_size,
<del> from_seq_length=seq_length,
<del> to_seq_length=seq_length)
<del> attention_heads.append(attention_head)
<del>
<del> attention_output = None
<del> if len(attention_heads) == 1:
<del> attention_output = attention_heads[0]
<del> else:
<del> # In the case where we have other sequences, we just concatenate
<del> # them to the self-attention head before the projection.
<del> attention_output = tf.concat(attention_heads, axis=-1)
<del>
<del> # Run a linear projection of `hidden_size` then add a residual
<del> # with `layer_input`.
<del> with tf.variable_scope("output"):
<del> attention_output = tf.layers.dense(
<del> attention_output,
<del> hidden_size,
<del> kernel_initializer=create_initializer(initializer_range))
<del> attention_output = dropout(attention_output, hidden_dropout_prob)
<del> attention_output = layer_norm(attention_output + layer_input)
<del>
<del> # The activation is only applied to the "intermediate" hidden layer.
<del> with tf.variable_scope("intermediate"):
<del> intermediate_output = tf.layers.dense(
<del> attention_output,
<del> intermediate_size,
<del> activation=intermediate_act_fn,
<del> kernel_initializer=create_initializer(initializer_range))
<del>
<del> # Down-project back to `hidden_size` then add the residual.
<del> with tf.variable_scope("output"):
<del> layer_output = tf.layers.dense(
<del> intermediate_output,
<del> hidden_size,
<del> kernel_initializer=create_initializer(initializer_range))
<del> layer_output = dropout(layer_output, hidden_dropout_prob)
<del> layer_output = layer_norm(layer_output + attention_output)
<del> prev_output = layer_output
<del> all_layer_outputs.append(layer_output)
<del>
<del> if do_return_all_layers:
<del> final_outputs = []
<del> for layer_output in all_layer_outputs:
<del> final_output = reshape_from_matrix(layer_output, input_shape)
<del> final_outputs.append(final_output)
<del> return final_outputs
<del> else:
<del> final_output = reshape_from_matrix(prev_output, input_shape)
<del> return final_output
<add> """Multi-headed, multi-layer Transformer from "Attention is All You Need".
<add>
<add> This is almost an exact implementation of the original Transformer encoder.
<add>
<add> See the original paper:
<add> https://arxiv.org/abs/1706.03762
<add>
<add> Also see:
<add> https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py
<add>
<add> Args:
<add> input_tensor: float Tensor of shape [batch_size, seq_length, hidden_size].
<add> attention_mask: (optional) int32 Tensor of shape [batch_size, seq_length,
<add> seq_length], with 1 for positions that can be attended to and 0 in
<add> positions that should not be.
<add> hidden_size: int. Hidden size of the Transformer.
<add> num_hidden_layers: int. Number of layers (blocks) in the Transformer.
<add> num_attention_heads: int. Number of attention heads in the Transformer.
<add> intermediate_size: int. The size of the "intermediate" (a.k.a., feed
<add> forward) layer.
<add> intermediate_act_fn: function. The non-linear activation function to apply
<add> to the output of the intermediate/feed-forward layer.
<add> hidden_dropout_prob: float. Dropout probability for the hidden layers.
<add> attention_probs_dropout_prob: float. Dropout probability of the attention
<add> probabilities.
<add> initializer_range: float. Range of the initializer (stddev of truncated
<add> normal).
<add> do_return_all_layers: Whether to also return all layers or just the final
<add> layer.
<add>
<add> Returns:
<add> float Tensor of shape [batch_size, seq_length, hidden_size], the final
<add> hidden layer of the Transformer.
<add>
<add> Raises:
<add> ValueError: A Tensor shape or parameter is invalid.
<add> """
<add> if hidden_size % num_attention_heads != 0:
<add> raise ValueError(
<add> "The hidden size (%d) is not a multiple of the number of attention "
<add> "heads (%d)" % (hidden_size, num_attention_heads))
<add>
<add> attention_head_size = int(hidden_size / num_attention_heads)
<add> input_shape = get_shape_list(input_tensor, expected_rank=3)
<add> batch_size = input_shape[0]
<add> seq_length = input_shape[1]
<add> input_width = input_shape[2]
<add>
<add> # The Transformer adds residual (sum) connections at every layer, so the
<add> # input width needs to match the hidden size.
<add> if input_width != hidden_size:
<add> raise ValueError("The width of the input tensor (%d) != hidden size (%d)" %
<add> (input_width, hidden_size))
<add>
<add> # We keep the representation as a 2D tensor to avoid re-shaping it back and
<add> # forth from a 3D tensor to a 2D tensor. Re-shapes are normally free on
<add> # the GPU/CPU but may not be free on the TPU, so we want to minimize them to
<add> # help the optimizer.
<add> prev_output = reshape_to_matrix(input_tensor)
<add>
<add> all_layer_outputs = []
<add> for layer_idx in range(num_hidden_layers):
<add> with tf.variable_scope("layer_%d" % layer_idx):
<add> layer_input = prev_output
<add>
<add> with tf.variable_scope("attention"):
<add> attention_heads = []
<add> with tf.variable_scope("self"):
<add> attention_head = attention_layer(
<add> from_tensor=layer_input,
<add> to_tensor=layer_input,
<add> attention_mask=attention_mask,
<add> num_attention_heads=num_attention_heads,
<add> size_per_head=attention_head_size,
<add> attention_probs_dropout_prob=attention_probs_dropout_prob,
<add> initializer_range=initializer_range,
<add> do_return_2d_tensor=True,
<add> batch_size=batch_size,
<add> from_seq_length=seq_length,
<add> to_seq_length=seq_length)
<add> attention_heads.append(attention_head)
<add>
<add> attention_output = None
<add> if len(attention_heads) == 1:
<add> attention_output = attention_heads[0]
<add> else:
<add> # In the case where we have other sequences, we just concatenate
<add> # them to the self-attention head before the projection.
<add> attention_output = tf.concat(attention_heads, axis=-1)
<add>
<add> # Run a linear projection of `hidden_size` then add a residual
<add> # with `layer_input`.
<add> with tf.variable_scope("output"):
<add> attention_output = tf.layers.dense(
<add> attention_output,
<add> hidden_size,
<add> kernel_initializer=create_initializer(initializer_range))
<add> attention_output = dropout(attention_output, hidden_dropout_prob)
<add> attention_output = layer_norm(attention_output + layer_input)
<add>
<add> # The activation is only applied to the "intermediate" hidden layer.
<add> with tf.variable_scope("intermediate"):
<add> intermediate_output = tf.layers.dense(
<add> attention_output,
<add> intermediate_size,
<add> activation=intermediate_act_fn,
<add> kernel_initializer=create_initializer(initializer_range))
<add>
<add> # Down-project back to `hidden_size` then add the residual.
<add> with tf.variable_scope("output"):
<add> layer_output = tf.layers.dense(
<add> intermediate_output,
<add> hidden_size,
<add> kernel_initializer=create_initializer(initializer_range))
<add> layer_output = dropout(layer_output, hidden_dropout_prob)
<add> layer_output = layer_norm(layer_output + attention_output)
<add> prev_output = layer_output
<add> all_layer_outputs.append(layer_output)
<add>
<add> if do_return_all_layers:
<add> final_outputs = []
<add> for layer_output in all_layer_outputs:
<add> final_output = reshape_from_matrix(layer_output, input_shape)
<add> final_outputs.append(final_output)
<add> return final_outputs
<add> else:
<add> final_output = reshape_from_matrix(prev_output, input_shape)
<add> return final_output
<ide>
<ide>
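The encoder above builds each block around `attention_layer`, which is not shown in this hunk. As a rough single-head illustration of what that layer computes (the function name `scaled_dot_product_attention` and the toy shapes are ours, not part of the BERT code), including the same additive-mask trick BERT uses:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """Single-head attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v: [seq_length, size_per_head]; mask: [seq_length, seq_length] of 0/1.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # [seq_length, seq_length]
    if mask is not None:
        # Same trick as BERT: add a large negative value to masked positions
        # so they effectively vanish after the softmax.
        scores += (1.0 - mask) * -10000.0
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ v                               # [seq_length, size_per_head]

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(q, q, q)
print(out.shape)  # (4, 8)
```

The real `attention_layer` additionally projects Q/K/V per head, runs `num_attention_heads` of these in parallel, and concatenates the results.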
<ide> def get_shape_list(tensor, expected_rank=None, name=None):
<del> """Returns a list of the shape of tensor, preferring static dimensions.
<del>
<del> Args:
<del> tensor: A tf.Tensor object to find the shape of.
<del> expected_rank: (optional) int. The expected rank of `tensor`. If this is
<del> specified and the `tensor` has a different rank, and exception will be
<del> thrown.
<del> name: Optional name of the tensor for the error message.
<del>
<del> Returns:
<del> A list of dimensions of the shape of tensor. All static dimensions will
<del> be returned as python integers, and dynamic dimensions will be returned
<del> as tf.Tensor scalars.
<del> """
<del> if name is None:
<del> name = tensor.name
<del>
<del> if expected_rank is not None:
<del> assert_rank(tensor, expected_rank, name)
<del>
<del> shape = tensor.shape.as_list()
<del>
<del> non_static_indexes = []
<del> for (index, dim) in enumerate(shape):
<del> if dim is None:
<del> non_static_indexes.append(index)
<del>
<del> if not non_static_indexes:
<del> return shape
<add> """Returns a list of the shape of tensor, preferring static dimensions.
<ide>
<del> dyn_shape = tf.shape(tensor)
<del> for index in non_static_indexes:
<del> shape[index] = dyn_shape[index]
<del> return shape
<add> Args:
<add> tensor: A tf.Tensor object to find the shape of.
<add> expected_rank: (optional) int. The expected rank of `tensor`. If this is
<add> specified and the `tensor` has a different rank, an exception will be
<add> thrown.
<add> name: Optional name of the tensor for the error message.
<ide>
<add> Returns:
<add> A list of dimensions of the shape of tensor. All static dimensions will
<add> be returned as python integers, and dynamic dimensions will be returned
<add> as tf.Tensor scalars.
<add> """
<add> if name is None:
<add> name = tensor.name
<ide>
<del>def reshape_to_matrix(input_tensor):
<del> """Reshapes a >= rank 2 tensor to a rank 2 tensor (i.e., a matrix)."""
<del> ndims = input_tensor.shape.ndims
<del> if ndims < 2:
<del> raise ValueError("Input tensor must have at least rank 2. Shape = %s" %
<del> (input_tensor.shape))
<del> if ndims == 2:
<del> return input_tensor
<add> if expected_rank is not None:
<add> assert_rank(tensor, expected_rank, name)
<ide>
<del> width = input_tensor.shape[-1]
<del> output_tensor = tf.reshape(input_tensor, [-1, width])
<del> return output_tensor
<add> shape = tensor.shape.as_list()
<ide>
<add> non_static_indexes = []
<add> for (index, dim) in enumerate(shape):
<add> if dim is None:
<add> non_static_indexes.append(index)
<ide>
<del>def reshape_from_matrix(output_tensor, orig_shape_list):
<del> """Reshapes a rank 2 tensor back to its original rank >= 2 tensor."""
<del> if len(orig_shape_list) == 2:
<add> if not non_static_indexes:
<add> return shape
<add>
<add> dyn_shape = tf.shape(tensor)
<add> for index in non_static_indexes:
<add> shape[index] = dyn_shape[index]
<add> return shape
<add>
<add>
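`get_shape_list` mixes static and dynamic dimensions: statically known dims come back as Python ints, unknown ones as run-time tensors. The fill-in logic can be sketched without TensorFlow (the function and argument names here are ours):

```python
def resolve_shape(static_shape, runtime_shape):
    """Prefer static (graph-time) dims; fall back to runtime values for None.

    static_shape: list of ints or None, like tensor.shape.as_list().
    runtime_shape: list of ints observed at run time, like tf.shape(tensor).
    """
    if len(static_shape) != len(runtime_shape):
        raise ValueError("rank mismatch: %d vs %d"
                         % (len(static_shape), len(runtime_shape)))
    # Keep every known dimension; substitute the runtime value for None.
    return [r if s is None else s
            for s, r in zip(static_shape, runtime_shape)]

print(resolve_shape([None, 7, 32], [13, 7, 32]))  # [13, 7, 32]
```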
<add>def reshape_to_matrix(input_tensor):
<add> """Reshapes a >= rank 2 tensor to a rank 2 tensor (i.e., a matrix)."""
<add> ndims = input_tensor.shape.ndims
<add> if ndims < 2:
<add> raise ValueError("Input tensor must have at least rank 2. Shape = %s" %
<add> (input_tensor.shape))
<add> if ndims == 2:
<add> return input_tensor
<add>
<add> width = input_tensor.shape[-1]
<add> output_tensor = tf.reshape(input_tensor, [-1, width])
<ide> return output_tensor
<ide>
<del> output_shape = get_shape_list(output_tensor)
<ide>
<del> orig_dims = orig_shape_list[0:-1]
<del> width = output_shape[-1]
<add>def reshape_from_matrix(output_tensor, orig_shape_list):
<add> """Reshapes a rank 2 tensor back to its original rank >= 2 tensor."""
<add> if len(orig_shape_list) == 2:
<add> return output_tensor
<add>
<add> output_shape = get_shape_list(output_tensor)
<add>
<add> orig_dims = orig_shape_list[0:-1]
<add> width = output_shape[-1]
<ide>
<del> return tf.reshape(output_tensor, orig_dims + [width])
<add> return tf.reshape(output_tensor, orig_dims + [width])
<ide>
<ide>
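The 3D-to-2D round trip performed by `reshape_to_matrix` and `reshape_from_matrix` is easy to check in NumPy (the toy shapes are ours):

```python
import numpy as np

batch_size, seq_length, hidden_size = 2, 3, 4
x = np.arange(batch_size * seq_length * hidden_size, dtype=np.float32)
x = x.reshape(batch_size, seq_length, hidden_size)

# reshape_to_matrix: collapse all leading dims into one.
matrix = x.reshape(-1, hidden_size)              # [batch*seq, hidden]
# ... per-token ops (dense layers, etc.) would run here on the 2D tensor ...
# reshape_from_matrix: restore the original leading dims.
restored = matrix.reshape(batch_size, seq_length, -1)

print(matrix.shape, restored.shape)  # (6, 4) (2, 3, 4)
```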
<ide> def assert_rank(tensor, expected_rank, name=None):
<del> """Raises an exception if the tensor rank is not of the expected rank.
<del>
<del> Args:
<del> tensor: A tf.Tensor to check the rank of.
<del> expected_rank: Python integer or list of integers, expected rank.
<del> name: Optional name of the tensor for the error message.
<del>
<del> Raises:
<del> ValueError: If the expected shape doesn"t match the actual shape.
<del> """
<del> if name is None:
<del> name = tensor.name
<del>
<del> expected_rank_dict = {}
<del> if isinstance(expected_rank, six.integer_types):
<del> expected_rank_dict[expected_rank] = True
<del> else:
<del> for x in expected_rank:
<del> expected_rank_dict[x] = True
<del>
<del> actual_rank = tensor.shape.ndims
<del> if actual_rank not in expected_rank_dict:
<del> scope_name = tf.get_variable_scope().name
<del> raise ValueError(
<del> "For the tensor `%s` in scope `%s`, the actual rank "
<del> "`%d` (shape = %s) is not equal to the expected rank `%s`" %
<del> (name, scope_name, actual_rank, str(tensor.shape), str(expected_rank)))
<add> """Raises an exception if the tensor rank is not of the expected rank.
<add>
<add> Args:
<add> tensor: A tf.Tensor to check the rank of.
<add> expected_rank: Python integer or list of integers, expected rank.
<add> name: Optional name of the tensor for the error message.
<add>
<add> Raises:
<add> ValueError: If the expected shape doesn't match the actual shape.
<add> """
<add> if name is None:
<add> name = tensor.name
<add>
<add> expected_rank_dict = {}
<add> if isinstance(expected_rank, six.integer_types):
<add> expected_rank_dict[expected_rank] = True
<add> else:
<add> for x in expected_rank:
<add> expected_rank_dict[x] = True
<add>
<add> actual_rank = tensor.shape.ndims
<add> if actual_rank not in expected_rank_dict:
<add> scope_name = tf.get_variable_scope().name
<add> raise ValueError(
<add> "For the tensor `%s` in scope `%s`, the actual rank "
<add> "`%d` (shape = %s) is not equal to the expected rank `%s`" %
<add> (name, scope_name, actual_rank, str(tensor.shape), str(expected_rank)))
<ide><path>modeling_test.py
<ide>
<ide>
<ide> class BertModelTest(tf.test.TestCase):
<del>
<del> class BertModelTester(object):
<del>
<del> def __init__(self,
<del> parent,
<del> batch_size=13,
<del> seq_length=7,
<del> is_training=True,
<del> use_input_mask=True,
<del> use_token_type_ids=True,
<del> vocab_size=99,
<del> hidden_size=32,
<del> num_hidden_layers=5,
<del> num_attention_heads=4,
<del> intermediate_size=37,
<del> hidden_act="gelu",
<del> hidden_dropout_prob=0.1,
<del> attention_probs_dropout_prob=0.1,
<del> max_position_embeddings=512,
<del> type_vocab_size=16,
<del> initializer_range=0.02,
<del> scope=None):
<del> self.parent = parent
<del> self.batch_size = batch_size
<del> self.seq_length = seq_length
<del> self.is_training = is_training
<del> self.use_input_mask = use_input_mask
<del> self.use_token_type_ids = use_token_type_ids
<del> self.vocab_size = vocab_size
<del> self.hidden_size = hidden_size
<del> self.num_hidden_layers = num_hidden_layers
<del> self.num_attention_heads = num_attention_heads
<del> self.intermediate_size = intermediate_size
<del> self.hidden_act = hidden_act
<del> self.hidden_dropout_prob = hidden_dropout_prob
<del> self.attention_probs_dropout_prob = attention_probs_dropout_prob
<del> self.max_position_embeddings = max_position_embeddings
<del> self.type_vocab_size = type_vocab_size
<del> self.initializer_range = initializer_range
<del> self.scope = scope
<del>
<del> def create_model(self):
<del> input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length],
<del> self.vocab_size)
<del>
<del> input_mask = None
<del> if self.use_input_mask:
<del> input_mask = BertModelTest.ids_tensor(
<del> [self.batch_size, self.seq_length], vocab_size=2)
<del>
<del> token_type_ids = None
<del> if self.use_token_type_ids:
<del> token_type_ids = BertModelTest.ids_tensor(
<del> [self.batch_size, self.seq_length], self.type_vocab_size)
<del>
<del> config = modeling.BertConfig(
<del> vocab_size=self.vocab_size,
<del> hidden_size=self.hidden_size,
<del> num_hidden_layers=self.num_hidden_layers,
<del> num_attention_heads=self.num_attention_heads,
<del> intermediate_size=self.intermediate_size,
<del> hidden_act=self.hidden_act,
<del> hidden_dropout_prob=self.hidden_dropout_prob,
<del> attention_probs_dropout_prob=self.attention_probs_dropout_prob,
<del> max_position_embeddings=self.max_position_embeddings,
<del> type_vocab_size=self.type_vocab_size,
<del> initializer_range=self.initializer_range)
<del>
<del> model = modeling.BertModel(
<del> config=config,
<del> is_training=self.is_training,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> token_type_ids=token_type_ids,
<del> scope=self.scope)
<del>
<del> outputs = {
<del> "embedding_output": model.get_embedding_output(),
<del> "sequence_output": model.get_sequence_output(),
<del> "pooled_output": model.get_pooled_output(),
<del> "all_encoder_layers": model.get_all_encoder_layers(),
<del> }
<del> return outputs
<del>
<del> def check_output(self, result):
<del> self.parent.assertAllEqual(
<del> result["embedding_output"].shape,
<del> [self.batch_size, self.seq_length, self.hidden_size])
<del>
<del> self.parent.assertAllEqual(
<del> result["sequence_output"].shape,
<del> [self.batch_size, self.seq_length, self.hidden_size])
<del>
<del> self.parent.assertAllEqual(result["pooled_output"].shape,
<del> [self.batch_size, self.hidden_size])
<del>
<del> def test_default(self):
<del> self.run_tester(BertModelTest.BertModelTester(self))
<del>
<del> def test_config_to_json_string(self):
<del> config = modeling.BertConfig(vocab_size=99, hidden_size=37)
<del> obj = json.loads(config.to_json_string())
<del> self.assertEqual(obj["vocab_size"], 99)
<del> self.assertEqual(obj["hidden_size"], 37)
<del>
<del> def run_tester(self, tester):
<del> with self.test_session() as sess:
<del> ops = tester.create_model()
<del> init_op = tf.group(tf.global_variables_initializer(),
<del> tf.local_variables_initializer())
<del> sess.run(init_op)
<del> output_result = sess.run(ops)
<del> tester.check_output(output_result)
<del>
<del> self.assert_all_tensors_reachable(sess, [init_op, ops])
<del>
<del> @classmethod
<del> def ids_tensor(cls, shape, vocab_size, rng=None, name=None):
<del> """Creates a random int32 tensor of the shape within the vocab size."""
<del> if rng is None:
<del> rng = random.Random()
<del>
<del> total_dims = 1
<del> for dim in shape:
<del> total_dims *= dim
<del>
<del> values = []
<del> for _ in range(total_dims):
<del> values.append(rng.randint(0, vocab_size - 1))
<del>
<del> return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name)
<del>
<del> def assert_all_tensors_reachable(self, sess, outputs):
<del> """Checks that all the tensors in the graph are reachable from outputs."""
<del> graph = sess.graph
<del>
<del> ignore_strings = [
<del> "^.*/dilation_rate$",
<del> "^.*/Tensordot/concat$",
<del> "^.*/Tensordot/concat/axis$",
<del> "^testing/.*$",
<del> ]
<del>
<del> ignore_regexes = [re.compile(x) for x in ignore_strings]
<del>
<del> unreachable = self.get_unreachable_ops(graph, outputs)
<del> filtered_unreachable = []
<del> for x in unreachable:
<del> do_ignore = False
<del> for r in ignore_regexes:
<del> m = r.match(x.name)
<del> if m is not None:
<del> do_ignore = True
<del> if do_ignore:
<del> continue
<del> filtered_unreachable.append(x)
<del> unreachable = filtered_unreachable
<del>
<del> self.assertEqual(
<del> len(unreachable), 0, "The following ops are unreachable: %s" %
<del> (" ".join([x.name for x in unreachable])))
<del>
<del> @classmethod
<del> def get_unreachable_ops(cls, graph, outputs):
<del> """Finds all of the tensors in graph that are unreachable from outputs."""
<del> outputs = cls.flatten_recursive(outputs)
<del> output_to_op = collections.defaultdict(list)
<del> op_to_all = collections.defaultdict(list)
<del> assign_out_to_in = collections.defaultdict(list)
<del>
<del> for op in graph.get_operations():
<del> for x in op.inputs:
<del> op_to_all[op.name].append(x.name)
<del> for y in op.outputs:
<del> output_to_op[y.name].append(op.name)
<del> op_to_all[op.name].append(y.name)
<del> if str(op.type) == "Assign":
<del> for y in op.outputs:
<del> for x in op.inputs:
<del> assign_out_to_in[y.name].append(x.name)
<del>
<del> assign_groups = collections.defaultdict(list)
<del> for out_name in assign_out_to_in.keys():
<del> name_group = assign_out_to_in[out_name]
<del> for n1 in name_group:
<del> assign_groups[n1].append(out_name)
<del> for n2 in name_group:
<del> if n1 != n2:
<del> assign_groups[n1].append(n2)
<del>
<del> seen_tensors = {}
<del> stack = [x.name for x in outputs]
<del> while stack:
<del> name = stack.pop()
<del> if name in seen_tensors:
<del> continue
<del> seen_tensors[name] = True
<del>
<del> if name in output_to_op:
<del> for op_name in output_to_op[name]:
<del> if op_name in op_to_all:
<del> for input_name in op_to_all[op_name]:
<del> if input_name not in stack:
<del> stack.append(input_name)
<del>
<del> expanded_names = []
<del> if name in assign_groups:
<del> for assign_name in assign_groups[name]:
<del> expanded_names.append(assign_name)
<del>
<del> for expanded_name in expanded_names:
<del> if expanded_name not in stack:
<del> stack.append(expanded_name)
<del>
<del> unreachable_ops = []
<del> for op in graph.get_operations():
<del> is_unreachable = False
<del> all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs]
<del> for name in all_names:
<del> if name not in seen_tensors:
<del> is_unreachable = True
<del> if is_unreachable:
<del> unreachable_ops.append(op)
<del> return unreachable_ops
<del>
<del> @classmethod
<del> def flatten_recursive(cls, item):
<del> """Flattens (potentially nested) a tuple/dictionary/list to a list."""
<del> output = []
<del> if isinstance(item, list):
<del> output.extend(item)
<del> elif isinstance(item, tuple):
<del> output.extend(list(item))
<del> elif isinstance(item, dict):
<del> for (_, v) in six.iteritems(item):
<del> output.append(v)
<del> else:
<del> return [item]
<del>
<del> flat_output = []
<del> for x in output:
<del> flat_output.extend(cls.flatten_recursive(x))
<del> return flat_output
<add> class BertModelTester(object):
<add>
<add> def __init__(self,
<add> parent,
<add> batch_size=13,
<add> seq_length=7,
<add> is_training=True,
<add> use_input_mask=True,
<add> use_token_type_ids=True,
<add> vocab_size=99,
<add> hidden_size=32,
<add> num_hidden_layers=5,
<add> num_attention_heads=4,
<add> intermediate_size=37,
<add> hidden_act="gelu",
<add> hidden_dropout_prob=0.1,
<add> attention_probs_dropout_prob=0.1,
<add> max_position_embeddings=512,
<add> type_vocab_size=16,
<add> initializer_range=0.02,
<add> scope=None):
<add> self.parent = parent
<add> self.batch_size = batch_size
<add> self.seq_length = seq_length
<add> self.is_training = is_training
<add> self.use_input_mask = use_input_mask
<add> self.use_token_type_ids = use_token_type_ids
<add> self.vocab_size = vocab_size
<add> self.hidden_size = hidden_size
<add> self.num_hidden_layers = num_hidden_layers
<add> self.num_attention_heads = num_attention_heads
<add> self.intermediate_size = intermediate_size
<add> self.hidden_act = hidden_act
<add> self.hidden_dropout_prob = hidden_dropout_prob
<add> self.attention_probs_dropout_prob = attention_probs_dropout_prob
<add> self.max_position_embeddings = max_position_embeddings
<add> self.type_vocab_size = type_vocab_size
<add> self.initializer_range = initializer_range
<add> self.scope = scope
<add>
<add> def create_model(self):
<add> input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length],
<add> self.vocab_size)
<add>
<add> input_mask = None
<add> if self.use_input_mask:
<add> input_mask = BertModelTest.ids_tensor(
<add> [self.batch_size, self.seq_length], vocab_size=2)
<add>
<add> token_type_ids = None
<add> if self.use_token_type_ids:
<add> token_type_ids = BertModelTest.ids_tensor(
<add> [self.batch_size, self.seq_length], self.type_vocab_size)
<add>
<add> config = modeling.BertConfig(
<add> vocab_size=self.vocab_size,
<add> hidden_size=self.hidden_size,
<add> num_hidden_layers=self.num_hidden_layers,
<add> num_attention_heads=self.num_attention_heads,
<add> intermediate_size=self.intermediate_size,
<add> hidden_act=self.hidden_act,
<add> hidden_dropout_prob=self.hidden_dropout_prob,
<add> attention_probs_dropout_prob=self.attention_probs_dropout_prob,
<add> max_position_embeddings=self.max_position_embeddings,
<add> type_vocab_size=self.type_vocab_size,
<add> initializer_range=self.initializer_range)
<add>
<add> model = modeling.BertModel(
<add> config=config,
<add> is_training=self.is_training,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> token_type_ids=token_type_ids,
<add> scope=self.scope)
<add>
<add> outputs = {
<add> "embedding_output": model.get_embedding_output(),
<add> "sequence_output": model.get_sequence_output(),
<add> "pooled_output": model.get_pooled_output(),
<add> "all_encoder_layers": model.get_all_encoder_layers(),
<add> }
<add> return outputs
<add>
<add> def check_output(self, result):
<add> self.parent.assertAllEqual(
<add> result["embedding_output"].shape,
<add> [self.batch_size, self.seq_length, self.hidden_size])
<add>
<add> self.parent.assertAllEqual(
<add> result["sequence_output"].shape,
<add> [self.batch_size, self.seq_length, self.hidden_size])
<add>
<add> self.parent.assertAllEqual(result["pooled_output"].shape,
<add> [self.batch_size, self.hidden_size])
<add>
<add> def test_default(self):
<add> self.run_tester(BertModelTest.BertModelTester(self))
<add>
<add> def test_config_to_json_string(self):
<add> config = modeling.BertConfig(vocab_size=99, hidden_size=37)
<add> obj = json.loads(config.to_json_string())
<add> self.assertEqual(obj["vocab_size"], 99)
<add> self.assertEqual(obj["hidden_size"], 37)
<add>
<add> def run_tester(self, tester):
<add> with self.test_session() as sess:
<add> ops = tester.create_model()
<add> init_op = tf.group(tf.global_variables_initializer(),
<add> tf.local_variables_initializer())
<add> sess.run(init_op)
<add> output_result = sess.run(ops)
<add> tester.check_output(output_result)
<add>
<add> self.assert_all_tensors_reachable(sess, [init_op, ops])
<add>
<add> @classmethod
<add> def ids_tensor(cls, shape, vocab_size, rng=None, name=None):
<add> """Creates a random int32 tensor of the shape within the vocab size."""
<add> if rng is None:
<add> rng = random.Random()
<add>
<add> total_dims = 1
<add> for dim in shape:
<add> total_dims *= dim
<add>
<add> values = []
<add> for _ in range(total_dims):
<add> values.append(rng.randint(0, vocab_size - 1))
<add>
<add> return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name)
<add>
<add> def assert_all_tensors_reachable(self, sess, outputs):
<add> """Checks that all the tensors in the graph are reachable from outputs."""
<add> graph = sess.graph
<add>
<add> ignore_strings = [
<add> "^.*/dilation_rate$",
<add> "^.*/Tensordot/concat$",
<add> "^.*/Tensordot/concat/axis$",
<add> "^testing/.*$",
<add> ]
<add>
<add> ignore_regexes = [re.compile(x) for x in ignore_strings]
<add>
<add> unreachable = self.get_unreachable_ops(graph, outputs)
<add> filtered_unreachable = []
<add> for x in unreachable:
<add> do_ignore = False
<add> for r in ignore_regexes:
<add> m = r.match(x.name)
<add> if m is not None:
<add> do_ignore = True
<add> if do_ignore:
<add> continue
<add> filtered_unreachable.append(x)
<add> unreachable = filtered_unreachable
<add>
<add> self.assertEqual(
<add> len(unreachable), 0, "The following ops are unreachable: %s" %
<add> (" ".join([x.name for x in unreachable])))
<add>
<add> @classmethod
<add> def get_unreachable_ops(cls, graph, outputs):
<add> """Finds all of the tensors in graph that are unreachable from outputs."""
<add> outputs = cls.flatten_recursive(outputs)
<add> output_to_op = collections.defaultdict(list)
<add> op_to_all = collections.defaultdict(list)
<add> assign_out_to_in = collections.defaultdict(list)
<add>
<add> for op in graph.get_operations():
<add> for x in op.inputs:
<add> op_to_all[op.name].append(x.name)
<add> for y in op.outputs:
<add> output_to_op[y.name].append(op.name)
<add> op_to_all[op.name].append(y.name)
<add> if str(op.type) == "Assign":
<add> for y in op.outputs:
<add> for x in op.inputs:
<add> assign_out_to_in[y.name].append(x.name)
<add>
<add> assign_groups = collections.defaultdict(list)
<add> for out_name in assign_out_to_in.keys():
<add> name_group = assign_out_to_in[out_name]
<add> for n1 in name_group:
<add> assign_groups[n1].append(out_name)
<add> for n2 in name_group:
<add> if n1 != n2:
<add> assign_groups[n1].append(n2)
<add>
<add> seen_tensors = {}
<add> stack = [x.name for x in outputs]
<add> while stack:
<add> name = stack.pop()
<add> if name in seen_tensors:
<add> continue
<add> seen_tensors[name] = True
<add>
<add> if name in output_to_op:
<add> for op_name in output_to_op[name]:
<add> if op_name in op_to_all:
<add> for input_name in op_to_all[op_name]:
<add> if input_name not in stack:
<add> stack.append(input_name)
<add>
<add> expanded_names = []
<add> if name in assign_groups:
<add> for assign_name in assign_groups[name]:
<add> expanded_names.append(assign_name)
<add>
<add> for expanded_name in expanded_names:
<add> if expanded_name not in stack:
<add> stack.append(expanded_name)
<add>
<add> unreachable_ops = []
<add> for op in graph.get_operations():
<add> is_unreachable = False
<add> all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs]
<add> for name in all_names:
<add> if name not in seen_tensors:
<add> is_unreachable = True
<add> if is_unreachable:
<add> unreachable_ops.append(op)
<add> return unreachable_ops
<add>
<add> @classmethod
<add> def flatten_recursive(cls, item):
<add> """Flattens (potentially nested) a tuple/dictionary/list to a list."""
<add> output = []
<add> if isinstance(item, list):
<add> output.extend(item)
<add> elif isinstance(item, tuple):
<add> output.extend(list(item))
<add> elif isinstance(item, dict):
<add> for (_, v) in six.iteritems(item):
<add> output.append(v)
<add> else:
<add> return [item]
<add>
<add> flat_output = []
<add> for x in output:
<add> flat_output.extend(cls.flatten_recursive(x))
<add> return flat_output
<ide>
<ide>
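`flatten_recursive` above is plain Python. A condensed standalone version of the same logic (slightly restructured from the class method, so treat it as a sketch) behaves like this:

```python
def flatten_recursive(item):
    """Flattens a (potentially nested) tuple/dict/list into a flat list."""
    if isinstance(item, (list, tuple)):
        out = list(item)
    elif isinstance(item, dict):
        # Dict values only, matching the original's iteritems() usage.
        out = list(item.values())
    else:
        return [item]
    flat = []
    for x in out:
        flat.extend(flatten_recursive(x))
    return flat

print(flatten_recursive({"a": [1, (2, 3)], "b": 4}))  # [1, 2, 3, 4]
```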
<ide> if __name__ == "__main__":
<del> tf.test.main()
<add> tf.test.main()
<ide><path>optimization.py
<ide>
<ide>
<ide> def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu):
<del> """Creates an optimizer training op."""
<del> global_step = tf.train.get_or_create_global_step()
<add> """Creates an optimizer training op."""
<add> global_step = tf.train.get_or_create_global_step()
<ide>
<del> learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32)
<add> learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32)
<ide>
<del> # Implements linear decay of the learning rate.
<del> learning_rate = tf.train.polynomial_decay(
<del> learning_rate,
<del> global_step,
<del> num_train_steps,
<del> end_learning_rate=0.0,
<del> power=1.0,
<del> cycle=False)
<add> # Implements linear decay of the learning rate.
<add> learning_rate = tf.train.polynomial_decay(
<add> learning_rate,
<add> global_step,
<add> num_train_steps,
<add> end_learning_rate=0.0,
<add> power=1.0,
<add> cycle=False)
<ide>
<del> # Implements linear warmup. I.e., if global_step < num_warmup_steps, the
<del> # learning rate will be `global_step/num_warmup_steps * init_lr`.
<del> if num_warmup_steps:
<del> global_steps_int = tf.cast(global_step, tf.int32)
<del> warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
<add> # Implements linear warmup. I.e., if global_step < num_warmup_steps, the
<add> # learning rate will be `global_step/num_warmup_steps * init_lr`.
<add> if num_warmup_steps:
<add> global_steps_int = tf.cast(global_step, tf.int32)
<add> warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
<ide>
<del> global_steps_float = tf.cast(global_steps_int, tf.float32)
<del> warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
<add> global_steps_float = tf.cast(global_steps_int, tf.float32)
<add> warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
<ide>
<del> warmup_percent_done = global_steps_float / warmup_steps_float
<del> warmup_learning_rate = init_lr * warmup_percent_done
<add> warmup_percent_done = global_steps_float / warmup_steps_float
<add> warmup_learning_rate = init_lr * warmup_percent_done
<ide>
<del> is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
<del> learning_rate = (
<del> (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)
<add> is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
<add> learning_rate = (
<add> (1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)
<ide>
<del> # It is recommended that you use this optimizer for fine tuning, since this
<del> # is how the model was trained (note that the Adam m/v variables are NOT
<del> # loaded from init_checkpoint.)
<del> optimizer = AdamWeightDecayOptimizer(
<del> learning_rate=learning_rate,
<del> weight_decay_rate=0.01,
<del> beta_1=0.9,
<del> beta_2=0.999,
<del> epsilon=1e-6,
<del> exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
<add> # It is recommended that you use this optimizer for fine-tuning, since this
<add> # is how the model was trained (note that the Adam m/v variables are NOT
<add> # loaded from init_checkpoint).
<add> optimizer = AdamWeightDecayOptimizer(
<add> learning_rate=learning_rate,
<add> weight_decay_rate=0.01,
<add> beta_1=0.9,
<add> beta_2=0.999,
<add> epsilon=1e-6,
<add> exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
<ide>
<del> if use_tpu:
<del> optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
<add> if use_tpu:
<add> optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
<ide>
<del> tvars = tf.trainable_variables()
<del> grads = tf.gradients(loss, tvars)
<add> tvars = tf.trainable_variables()
<add> grads = tf.gradients(loss, tvars)
<ide>
<del> # This is how the model was pre-trained.
<del> (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
<add> # This is how the model was pre-trained.
<add> (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
<ide>
<del> train_op = optimizer.apply_gradients(
<del> zip(grads, tvars), global_step=global_step)
<add> train_op = optimizer.apply_gradients(
<add> zip(grads, tvars), global_step=global_step)
<ide>
<del> new_global_step = global_step + 1
<del> train_op = tf.group(train_op, [global_step.assign(new_global_step)])
<del> return train_op
<add> new_global_step = global_step + 1
<add> train_op = tf.group(train_op, [global_step.assign(new_global_step)])
<add> return train_op
<ide>
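The warmup blend computed in `create_optimizer` above — a linear ramp from zero to `init_lr` over `num_warmup_steps`, handed off to the decayed rate afterwards — can be sketched as a plain function. This is illustrative commentary on the patch, not part of it; the function name is made up:

```python
def blended_lr(step, init_lr, num_warmup_steps, decayed_lr):
    """Mirror of the is_warmup blend above: linear warmup, then the decayed rate."""
    warmup_lr = init_lr * float(step) / float(num_warmup_steps)
    # is_warmup is 1.0 during warmup and 0.0 afterwards, exactly as the
    # tf.cast(global_steps_int < warmup_steps_int, tf.float32) term above.
    is_warmup = 1.0 if step < num_warmup_steps else 0.0
    return (1.0 - is_warmup) * decayed_lr + is_warmup * warmup_lr
```

During warmup the decayed rate is ignored entirely; after warmup the ramp term drops out.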
<ide>
<ide> class AdamWeightDecayOptimizer(tf.train.Optimizer):
<del> """A basic Adam optimizer that includes "correct" L2 weight decay."""
<del>
<del> def __init__(self,
<del> learning_rate,
<del> weight_decay_rate=0.0,
<del> beta_1=0.9,
<del> beta_2=0.999,
<del> epsilon=1e-6,
<del> exclude_from_weight_decay=None,
<del> name="AdamWeightDecayOptimizer"):
<del> """Constructs a AdamWeightDecayOptimizer."""
<del> super(AdamWeightDecayOptimizer, self).__init__(False, name)
<del>
<del> self.learning_rate = learning_rate
<del> self.weight_decay_rate = weight_decay_rate
<del> self.beta_1 = beta_1
<del> self.beta_2 = beta_2
<del> self.epsilon = epsilon
<del> self.exclude_from_weight_decay = exclude_from_weight_decay
<del>
<del> def apply_gradients(self, grads_and_vars, global_step=None, name=None):
<del> """See base class."""
<del> assignments = []
<del> for (grad, param) in grads_and_vars:
<del> if grad is None or param is None:
<del> continue
<del>
<del> param_name = self._get_variable_name(param.name)
<del>
<del> m = tf.get_variable(
<del> name=param_name + "/adam_m",
<del> shape=param.shape.as_list(),
<del> dtype=tf.float32,
<del> trainable=False,
<del> initializer=tf.zeros_initializer())
<del> v = tf.get_variable(
<del> name=param_name + "/adam_v",
<del> shape=param.shape.as_list(),
<del> dtype=tf.float32,
<del> trainable=False,
<del> initializer=tf.zeros_initializer())
<del>
<del> # Standard Adam update.
<del> next_m = (
<del> tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
<del> next_v = (
<del> tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
<del> tf.square(grad)))
<del>
<del> update = next_m / (tf.sqrt(next_v) + self.epsilon)
<del>
<del> # Just adding the square of the weights to the loss function is *not*
<del> # the correct way of using L2 regularization/weight decay with Adam,
<del> # since that will interact with the m and v parameters in strange ways.
<del> #
<del> # Instead we want ot decay the weights in a manner that doesn't interact
<del> # with the m/v parameters. This is equivalent to adding the square
<del> # of the weights to the loss with plain (non-momentum) SGD.
<del> if self._do_use_weight_decay(param_name):
<del> update += self.weight_decay_rate * param
<del>
<del> update_with_lr = self.learning_rate * update
<del>
<del> next_param = param - update_with_lr
<del>
<del> assignments.extend(
<del> [param.assign(next_param),
<del> m.assign(next_m),
<del> v.assign(next_v)])
<del> return tf.group(*assignments, name=name)
<del>
<del> def _do_use_weight_decay(self, param_name):
<del> """Whether to use L2 weight decay for `param_name`."""
<del> if not self.weight_decay_rate:
<del> return False
<del> if self.exclude_from_weight_decay:
<del> for r in self.exclude_from_weight_decay:
<del> if re.search(r, param_name) is not None:
<del> return False
<del> return True
<del>
<del> def _get_variable_name(self, param_name):
<del> """Get the variable name from the tensor name."""
<del> m = re.match("^(.*):\\d+$", param_name)
<del> if m is not None:
<del> param_name = m.group(1)
<del> return param_name
<add> """A basic Adam optimizer that includes "correct" L2 weight decay."""
<add>
<add> def __init__(self,
<add> learning_rate,
<add> weight_decay_rate=0.0,
<add> beta_1=0.9,
<add> beta_2=0.999,
<add> epsilon=1e-6,
<add> exclude_from_weight_decay=None,
<add> name="AdamWeightDecayOptimizer"):
<add>         """Constructs an AdamWeightDecayOptimizer."""
<add> super(AdamWeightDecayOptimizer, self).__init__(False, name)
<add>
<add> self.learning_rate = learning_rate
<add> self.weight_decay_rate = weight_decay_rate
<add> self.beta_1 = beta_1
<add> self.beta_2 = beta_2
<add> self.epsilon = epsilon
<add> self.exclude_from_weight_decay = exclude_from_weight_decay
<add>
<add> def apply_gradients(self, grads_and_vars, global_step=None, name=None):
<add> """See base class."""
<add> assignments = []
<add> for (grad, param) in grads_and_vars:
<add> if grad is None or param is None:
<add> continue
<add>
<add> param_name = self._get_variable_name(param.name)
<add>
<add> m = tf.get_variable(
<add> name=param_name + "/adam_m",
<add> shape=param.shape.as_list(),
<add> dtype=tf.float32,
<add> trainable=False,
<add> initializer=tf.zeros_initializer())
<add> v = tf.get_variable(
<add> name=param_name + "/adam_v",
<add> shape=param.shape.as_list(),
<add> dtype=tf.float32,
<add> trainable=False,
<add> initializer=tf.zeros_initializer())
<add>
<add> # Standard Adam update.
<add> next_m = (
<add> tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
<add> next_v = (
<add> tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
<add> tf.square(grad)))
<add>
<add> update = next_m / (tf.sqrt(next_v) + self.epsilon)
<add>
<add> # Just adding the square of the weights to the loss function is *not*
<add> # the correct way of using L2 regularization/weight decay with Adam,
<add> # since that will interact with the m and v parameters in strange ways.
<add> #
<add>             # Instead we want to decay the weights in a manner that doesn't interact
<add> # with the m/v parameters. This is equivalent to adding the square
<add> # of the weights to the loss with plain (non-momentum) SGD.
<add> if self._do_use_weight_decay(param_name):
<add> update += self.weight_decay_rate * param
<add>
<add> update_with_lr = self.learning_rate * update
<add>
<add> next_param = param - update_with_lr
<add>
<add> assignments.extend(
<add> [param.assign(next_param),
<add> m.assign(next_m),
<add> v.assign(next_v)])
<add> return tf.group(*assignments, name=name)
<add>
<add> def _do_use_weight_decay(self, param_name):
<add> """Whether to use L2 weight decay for `param_name`."""
<add> if not self.weight_decay_rate:
<add> return False
<add> if self.exclude_from_weight_decay:
<add> for r in self.exclude_from_weight_decay:
<add> if re.search(r, param_name) is not None:
<add> return False
<add> return True
<add>
<add> def _get_variable_name(self, param_name):
<add> """Get the variable name from the tensor name."""
<add> m = re.match("^(.*):\\d+$", param_name)
<add> if m is not None:
<add> param_name = m.group(1)
<add> return param_name
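As a cross-check of the decoupled weight decay implemented above, here is a minimal NumPy sketch of one `AdamWeightDecayOptimizer` step — standard Adam moments with no bias correction, and the decay applied to the weights rather than folded into the gradient. The function name and defaults are illustrative, not part of the patch:

```python
import numpy as np

def adamw_step(param, grad, m, v, lr=0.2, weight_decay_rate=0.0,
               beta_1=0.9, beta_2=0.999, epsilon=1e-6):
    """One step of the optimizer above, as plain array math."""
    # Standard Adam moment updates (note: no bias correction, as in the patch).
    m = beta_1 * m + (1.0 - beta_1) * grad
    v = beta_2 * v + (1.0 - beta_2) * grad ** 2
    update = m / (np.sqrt(v) + epsilon)
    # Decoupled weight decay: shrink the weights directly instead of
    # adding an L2 term to the loss, so it doesn't interact with m/v.
    update += weight_decay_rate * param
    return param - lr * update, m, v
```

Running this on the same toy regression as `optimization_test.py` (fit `w` to a constant target) reproduces the convergence the unit test asserts.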
<ide><path>optimization_test.py
<ide>
<ide> class OptimizationTest(tf.test.TestCase):
<ide>
<del> def test_adam(self):
<del> with self.test_session() as sess:
<del> w = tf.get_variable(
<del> "w",
<del> shape=[3],
<del> initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
<del> x = tf.constant([0.4, 0.2, -0.5])
<del> loss = tf.reduce_mean(tf.square(x - w))
<del> tvars = tf.trainable_variables()
<del> grads = tf.gradients(loss, tvars)
<del> global_step = tf.train.get_or_create_global_step()
<del> optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
<del> train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
<del> init_op = tf.group(tf.global_variables_initializer(),
<del> tf.local_variables_initializer())
<del> sess.run(init_op)
<del> for _ in range(100):
<del> sess.run(train_op)
<del> w_np = sess.run(w)
<del> self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2)
<add> def test_adam(self):
<add> with self.test_session() as sess:
<add> w = tf.get_variable(
<add> "w",
<add> shape=[3],
<add> initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
<add> x = tf.constant([0.4, 0.2, -0.5])
<add> loss = tf.reduce_mean(tf.square(x - w))
<add> tvars = tf.trainable_variables()
<add> grads = tf.gradients(loss, tvars)
<add> global_step = tf.train.get_or_create_global_step()
<add> optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
<add> train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
<add> init_op = tf.group(tf.global_variables_initializer(),
<add> tf.local_variables_initializer())
<add> sess.run(init_op)
<add> for _ in range(100):
<add> sess.run(train_op)
<add> w_np = sess.run(w)
<add> self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2)
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> tf.test.main()
<add> tf.test.main()
<ide><path>run_classifier.py
<ide>
<ide>
<ide> class InputExample(object):
<del> """A single training/test example for simple sequence classification."""
<del>
<del> def __init__(self, guid, text_a, text_b=None, label=None):
<del> """Constructs a InputExample.
<del>
<del> Args:
<del> guid: Unique id for the example.
<del> text_a: string. The untokenized text of the first sequence. For single
<del> sequence tasks, only this sequence must be specified.
<del> text_b: (Optional) string. The untokenized text of the second sequence.
<del> Only must be specified for sequence pair tasks.
<del> label: (Optional) string. The label of the example. This should be
<del> specified for train and dev examples, but not for test examples.
<del> """
<del> self.guid = guid
<del> self.text_a = text_a
<del> self.text_b = text_b
<del> self.label = label
<add> """A single training/test example for simple sequence classification."""
<add>
<add> def __init__(self, guid, text_a, text_b=None, label=None):
<add>         """Constructs an InputExample.
<add>
<add> Args:
<add> guid: Unique id for the example.
<add> text_a: string. The untokenized text of the first sequence. For single
<add> sequence tasks, only this sequence must be specified.
<add> text_b: (Optional) string. The untokenized text of the second sequence.
<add>                 Must only be specified for sequence pair tasks.
<add> label: (Optional) string. The label of the example. This should be
<add> specified for train and dev examples, but not for test examples.
<add> """
<add> self.guid = guid
<add> self.text_a = text_a
<add> self.text_b = text_b
<add> self.label = label
<ide>
<ide>
<ide> class InputFeatures(object):
<del> """A single set of features of data."""
<add> """A single set of features of data."""
<ide>
<del> def __init__(self, input_ids, input_mask, segment_ids, label_id):
<del> self.input_ids = input_ids
<del> self.input_mask = input_mask
<del> self.segment_ids = segment_ids
<del> self.label_id = label_id
<add> def __init__(self, input_ids, input_mask, segment_ids, label_id):
<add> self.input_ids = input_ids
<add> self.input_mask = input_mask
<add> self.segment_ids = segment_ids
<add> self.label_id = label_id
<ide>
<ide>
<ide> class DataProcessor(object):
<del> """Base class for data converters for sequence classification data sets."""
<add> """Base class for data converters for sequence classification data sets."""
<ide>
<del> def get_train_examples(self, data_dir):
<del> """Gets a collection of `InputExample`s for the train set."""
<del> raise NotImplementedError()
<add> def get_train_examples(self, data_dir):
<add> """Gets a collection of `InputExample`s for the train set."""
<add> raise NotImplementedError()
<ide>
<del> def get_dev_examples(self, data_dir):
<del> """Gets a collection of `InputExample`s for the dev set."""
<del> raise NotImplementedError()
<add> def get_dev_examples(self, data_dir):
<add> """Gets a collection of `InputExample`s for the dev set."""
<add> raise NotImplementedError()
<ide>
<del> def get_labels(self):
<del> """Gets the list of labels for this data set."""
<del> raise NotImplementedError()
<add> def get_labels(self):
<add> """Gets the list of labels for this data set."""
<add> raise NotImplementedError()
<ide>
<del> @classmethod
<del> def _read_tsv(cls, input_file, quotechar=None):
<del> """Reads a tab separated value file."""
<del> with tf.gfile.Open(input_file, "r") as f:
<del> reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
<del> lines = []
<del> for line in reader:
<del> lines.append(line)
<del> return lines
<add> @classmethod
<add> def _read_tsv(cls, input_file, quotechar=None):
<add> """Reads a tab separated value file."""
<add> with tf.gfile.Open(input_file, "r") as f:
<add> reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
<add> lines = []
<add> for line in reader:
<add> lines.append(line)
<add> return lines
<ide>
<ide>
<ide> class MnliProcessor(DataProcessor):
<del> """Processor for the MultiNLI data set (GLUE version)."""
<del>
<del> def get_train_examples(self, data_dir):
<del> """See base class."""
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<del>
<del> def get_dev_examples(self, data_dir):
<del> """See base class."""
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")),
<del> "dev_matched")
<del>
<del> def get_labels(self):
<del> """See base class."""
<del> return ["contradiction", "entailment", "neutral"]
<del>
<del> def _create_examples(self, lines, set_type):
<del> """Creates examples for the training and dev sets."""
<del> examples = []
<del> for (i, line) in enumerate(lines):
<del> if i == 0:
<del> continue
<del> guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
<del> text_a = tokenization.convert_to_unicode(line[8])
<del> text_b = tokenization.convert_to_unicode(line[9])
<del> label = tokenization.convert_to_unicode(line[-1])
<del> examples.append(
<del> InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
<del> return examples
<add> """Processor for the MultiNLI data set (GLUE version)."""
<add>
<add> def get_train_examples(self, data_dir):
<add> """See base class."""
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<add>
<add> def get_dev_examples(self, data_dir):
<add> """See base class."""
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")),
<add> "dev_matched")
<add>
<add> def get_labels(self):
<add> """See base class."""
<add> return ["contradiction", "entailment", "neutral"]
<add>
<add> def _create_examples(self, lines, set_type):
<add> """Creates examples for the training and dev sets."""
<add> examples = []
<add> for (i, line) in enumerate(lines):
<add> if i == 0:
<add> continue
<add> guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
<add> text_a = tokenization.convert_to_unicode(line[8])
<add> text_b = tokenization.convert_to_unicode(line[9])
<add> label = tokenization.convert_to_unicode(line[-1])
<add> examples.append(
<add> InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
<add> return examples
<ide>
<ide>
<ide> class MrpcProcessor(DataProcessor):
<del> """Processor for the MRPC data set (GLUE version)."""
<del>
<del> def get_train_examples(self, data_dir):
<del> """See base class."""
<del> print("LOOKING AT {}".format(os.path.join(data_dir, "train.tsv")))
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<del>
<del> def get_dev_examples(self, data_dir):
<del> """See base class."""
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
<del>
<del> def get_labels(self):
<del> """See base class."""
<del> return ["0", "1"]
<del>
<del> def _create_examples(self, lines, set_type):
<del> """Creates examples for the training and dev sets."""
<del> examples = []
<del> for (i, line) in enumerate(lines):
<del> if i == 0:
<del> continue
<del> guid = "%s-%s" % (set_type, i)
<del> text_a = tokenization.convert_to_unicode(line[3])
<del> text_b = tokenization.convert_to_unicode(line[4])
<del> label = tokenization.convert_to_unicode(line[0])
<del> examples.append(
<del> InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
<del> return examples
<add> """Processor for the MRPC data set (GLUE version)."""
<add>
<add> def get_train_examples(self, data_dir):
<add> """See base class."""
<add> print("LOOKING AT {}".format(os.path.join(data_dir, "train.tsv")))
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<add>
<add> def get_dev_examples(self, data_dir):
<add> """See base class."""
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
<add>
<add> def get_labels(self):
<add> """See base class."""
<add> return ["0", "1"]
<add>
<add> def _create_examples(self, lines, set_type):
<add> """Creates examples for the training and dev sets."""
<add> examples = []
<add> for (i, line) in enumerate(lines):
<add> if i == 0:
<add> continue
<add> guid = "%s-%s" % (set_type, i)
<add> text_a = tokenization.convert_to_unicode(line[3])
<add> text_b = tokenization.convert_to_unicode(line[4])
<add> label = tokenization.convert_to_unicode(line[0])
<add> examples.append(
<add> InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
<add> return examples
<ide>
<ide>
<ide> class ColaProcessor(DataProcessor):
<del> """Processor for the CoLA data set (GLUE version)."""
<del>
<del> def get_train_examples(self, data_dir):
<del> """See base class."""
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<del>
<del> def get_dev_examples(self, data_dir):
<del> """See base class."""
<del> return self._create_examples(
<del> self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
<del>
<del> def get_labels(self):
<del> """See base class."""
<del> return ["0", "1"]
<del>
<del> def _create_examples(self, lines, set_type):
<del> """Creates examples for the training and dev sets."""
<del> examples = []
<del> for (i, line) in enumerate(lines):
<del> guid = "%s-%s" % (set_type, i)
<del> text_a = tokenization.convert_to_unicode(line[3])
<del> label = tokenization.convert_to_unicode(line[1])
<del> examples.append(
<del> InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
<del> return examples
<add> """Processor for the CoLA data set (GLUE version)."""
<add>
<add> def get_train_examples(self, data_dir):
<add> """See base class."""
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
<add>
<add> def get_dev_examples(self, data_dir):
<add> """See base class."""
<add> return self._create_examples(
<add> self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
<add>
<add> def get_labels(self):
<add> """See base class."""
<add> return ["0", "1"]
<add>
<add> def _create_examples(self, lines, set_type):
<add> """Creates examples for the training and dev sets."""
<add> examples = []
<add> for (i, line) in enumerate(lines):
<add> guid = "%s-%s" % (set_type, i)
<add> text_a = tokenization.convert_to_unicode(line[3])
<add> label = tokenization.convert_to_unicode(line[1])
<add> examples.append(
<add> InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
<add> return examples
<ide>
<ide>
<ide> def convert_examples_to_features(examples, label_list, max_seq_length,
<ide> tokenizer):
<del> """Loads a data file into a list of `InputBatch`s."""
<del>
<del> label_map = {}
<del> for (i, label) in enumerate(label_list):
<del> label_map[label] = i
<del>
<del> features = []
<del> for (ex_index, example) in enumerate(examples):
<del> tokens_a = tokenizer.tokenize(example.text_a)
<del>
<del> tokens_b = None
<del> if example.text_b:
<del> tokens_b = tokenizer.tokenize(example.text_b)
<del>
<del> if tokens_b:
<del> # Modifies `tokens_a` and `tokens_b` in place so that the total
<del> # length is less than the specified length.
<del> # Account for [CLS], [SEP], [SEP] with "- 3"
<del> _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
<del> else:
<del> # Account for [CLS] and [SEP] with "- 2"
<del> if len(tokens_a) > max_seq_length - 2:
<del> tokens_a = tokens_a[0:(max_seq_length - 2)]
<del>
<del> # The convention in BERT is:
<del> # (a) For sequence pairs:
<del> # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
<del> # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
<del> # (b) For single sequences:
<del> # tokens: [CLS] the dog is hairy . [SEP]
<del> # type_ids: 0 0 0 0 0 0 0
<del> #
<del> # Where "type_ids" are used to indicate whether this is the first
<del> # sequence or the second sequence. The embedding vectors for `type=0` and
<del> # `type=1` were learned during pre-training and are added to the wordpiece
<del> # embedding vector (and position vector). This is not *strictly* necessary
<del> # since the [SEP] token unambigiously separates the sequences, but it makes
<del> # it easier for the model to learn the concept of sequences.
<del> #
<del> # For classification tasks, the first vector (corresponding to [CLS]) is
<del> # used as as the "sentence vector". Note that this only makes sense because
<del> # the entire model is fine-tuned.
<del> tokens = []
<del> segment_ids = []
<del> tokens.append("[CLS]")
<del> segment_ids.append(0)
<del> for token in tokens_a:
<del> tokens.append(token)
<del> segment_ids.append(0)
<del> tokens.append("[SEP]")
<del> segment_ids.append(0)
<del>
<del> if tokens_b:
<del> for token in tokens_b:
<del> tokens.append(token)
<del> segment_ids.append(1)
<del> tokens.append("[SEP]")
<del> segment_ids.append(1)
<del>
<del> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<del>
<del> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<del> # tokens are attended to.
<del> input_mask = [1] * len(input_ids)
<del>
<del> # Zero-pad up to the sequence length.
<del> while len(input_ids) < max_seq_length:
<del> input_ids.append(0)
<del> input_mask.append(0)
<del> segment_ids.append(0)
<del>
<del> assert len(input_ids) == max_seq_length
<del> assert len(input_mask) == max_seq_length
<del> assert len(segment_ids) == max_seq_length
<del>
<del> label_id = label_map[example.label]
<del> if ex_index < 5:
<del> tf.logging.info("*** Example ***")
<del> tf.logging.info("guid: %s" % (example.guid))
<del> tf.logging.info("tokens: %s" % " ".join(
<del> [tokenization.printable_text(x) for x in tokens]))
<del> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<del> tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
<del> tf.logging.info(
<del> "segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
<del> tf.logging.info("label: %s (id = %d)" % (example.label, label_id))
<del>
<del> features.append(
<del> InputFeatures(
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> segment_ids=segment_ids,
<del> label_id=label_id))
<del> return features
<add> """Loads a data file into a list of `InputBatch`s."""
<add>
<add> label_map = {}
<add> for (i, label) in enumerate(label_list):
<add> label_map[label] = i
<add>
<add> features = []
<add> for (ex_index, example) in enumerate(examples):
<add> tokens_a = tokenizer.tokenize(example.text_a)
<add>
<add> tokens_b = None
<add> if example.text_b:
<add> tokens_b = tokenizer.tokenize(example.text_b)
<add>
<add> if tokens_b:
<add> # Modifies `tokens_a` and `tokens_b` in place so that the total
<add> # length is less than the specified length.
<add> # Account for [CLS], [SEP], [SEP] with "- 3"
<add> _truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
<add> else:
<add> # Account for [CLS] and [SEP] with "- 2"
<add> if len(tokens_a) > max_seq_length - 2:
<add> tokens_a = tokens_a[0:(max_seq_length - 2)]
<add>
<add> # The convention in BERT is:
<add> # (a) For sequence pairs:
<add> # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
<add> # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
<add> # (b) For single sequences:
<add> # tokens: [CLS] the dog is hairy . [SEP]
<add> # type_ids: 0 0 0 0 0 0 0
<add> #
<add> # Where "type_ids" are used to indicate whether this is the first
<add> # sequence or the second sequence. The embedding vectors for `type=0` and
<add> # `type=1` were learned during pre-training and are added to the wordpiece
<add> # embedding vector (and position vector). This is not *strictly* necessary
<add>         # since the [SEP] token unambiguously separates the sequences, but it makes
<add> # it easier for the model to learn the concept of sequences.
<add> #
<add> # For classification tasks, the first vector (corresponding to [CLS]) is
<add>         # used as the "sentence vector". Note that this only makes sense because
<add> # the entire model is fine-tuned.
<add> tokens = []
<add> segment_ids = []
<add> tokens.append("[CLS]")
<add> segment_ids.append(0)
<add> for token in tokens_a:
<add> tokens.append(token)
<add> segment_ids.append(0)
<add> tokens.append("[SEP]")
<add> segment_ids.append(0)
<add>
<add> if tokens_b:
<add> for token in tokens_b:
<add> tokens.append(token)
<add> segment_ids.append(1)
<add> tokens.append("[SEP]")
<add> segment_ids.append(1)
<add>
<add> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<add>
<add> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<add> # tokens are attended to.
<add> input_mask = [1] * len(input_ids)
<add>
<add> # Zero-pad up to the sequence length.
<add> while len(input_ids) < max_seq_length:
<add> input_ids.append(0)
<add> input_mask.append(0)
<add> segment_ids.append(0)
<add>
<add> assert len(input_ids) == max_seq_length
<add> assert len(input_mask) == max_seq_length
<add> assert len(segment_ids) == max_seq_length
<add>
<add> label_id = label_map[example.label]
<add> if ex_index < 5:
<add> tf.logging.info("*** Example ***")
<add> tf.logging.info("guid: %s" % (example.guid))
<add> tf.logging.info("tokens: %s" % " ".join(
<add> [tokenization.printable_text(x) for x in tokens]))
<add> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<add> tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
<add> tf.logging.info(
<add> "segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
<add> tf.logging.info("label: %s (id = %d)" % (example.label, label_id))
<add>
<add> features.append(
<add> InputFeatures(
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> segment_ids=segment_ids,
<add> label_id=label_id))
<add> return features
<ide>
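The `[CLS] A [SEP] B [SEP]` layout, segment ids, and zero-padding described in the comments of `convert_examples_to_features` can be sketched as a small standalone helper (illustrative only — the real code also converts tokens to ids via the tokenizer and pads `input_ids` the same way):

```python
def build_inputs(tokens_a, tokens_b=None, max_seq_length=10):
    """Assemble tokens, segment ids, and the attention mask as above."""
    tokens = ["[CLS]"] + list(tokens_a) + ["[SEP]"]
    segment_ids = [0] * len(tokens)          # first sequence -> type 0
    if tokens_b:
        tokens += list(tokens_b) + ["[SEP]"]
        segment_ids += [1] * (len(tokens_b) + 1)  # second sequence -> type 1
    input_mask = [1] * len(tokens)           # 1 for real tokens
    pad = max_seq_length - len(tokens)
    input_mask += [0] * pad                  # 0 for padding tokens
    segment_ids += [0] * pad
    return tokens, segment_ids, input_mask
```

For a pair like `is this jack` / `no it`, the mask marks the 8 real positions and the segment ids flip to 1 after the first `[SEP]`.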
<ide>
<ide> def _truncate_seq_pair(tokens_a, tokens_b, max_length):
<del> """Truncates a sequence pair in place to the maximum length."""
<del>
<del> # This is a simple heuristic which will always truncate the longer sequence
<del> # one token at a time. This makes more sense than truncating an equal percent
<del> # of tokens from each, since if one sequence is very short then each token
<del> # that's truncated likely contains more information than a longer sequence.
<del> while True:
<del> total_length = len(tokens_a) + len(tokens_b)
<del> if total_length <= max_length:
<del> break
<del> if len(tokens_a) > len(tokens_b):
<del> tokens_a.pop()
<del> else:
<del> tokens_b.pop()
<add> """Truncates a sequence pair in place to the maximum length."""
<add>
<add> # This is a simple heuristic which will always truncate the longer sequence
<add> # one token at a time. This makes more sense than truncating an equal percent
<add> # of tokens from each, since if one sequence is very short then each token
<add> # that's truncated likely contains more information than a longer sequence.
<add> while True:
<add> total_length = len(tokens_a) + len(tokens_b)
<add> if total_length <= max_length:
<add> break
<add> if len(tokens_a) > len(tokens_b):
<add> tokens_a.pop()
<add> else:
<add> tokens_b.pop()
<ide>
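The truncation heuristic above — repeatedly dropping one token from whichever sequence is currently longer — is easy to exercise in isolation (same logic, standalone sketch):

```python
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    """Pop one token at a time from the longer sequence until the pair fits."""
    while len(tokens_a) + len(tokens_b) > max_length:
        longer = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        longer.pop()
```

A long/short pair loses tokens only from the long side, preserving the short sequence's information.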
<ide>
<ide> def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
<ide> labels, num_labels, use_one_hot_embeddings):
<del> """Creates a classification model."""
<del> model = modeling.BertModel(
<del> config=bert_config,
<del> is_training=is_training,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> token_type_ids=segment_ids,
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<del>
<del> # In the demo, we are doing a simple classification task on the entire
<del> # segment.
<del> #
<del> # If you want to use the token-level output, use model.get_sequence_output()
<del> # instead.
<del> output_layer = model.get_pooled_output()
<add> """Creates a classification model."""
<add> model = modeling.BertModel(
<add> config=bert_config,
<add> is_training=is_training,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> token_type_ids=segment_ids,
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<add>
<add> # In the demo, we are doing a simple classification task on the entire
<add> # segment.
<add> #
<add> # If you want to use the token-level output, use model.get_sequence_output()
<add> # instead.
<add> output_layer = model.get_pooled_output()
<ide>
<del> hidden_size = output_layer.shape[-1].value
<add> hidden_size = output_layer.shape[-1].value
<ide>
<del> output_weights = tf.get_variable(
<del> "output_weights", [num_labels, hidden_size],
<del> initializer=tf.truncated_normal_initializer(stddev=0.02))
<add> output_weights = tf.get_variable(
<add> "output_weights", [num_labels, hidden_size],
<add> initializer=tf.truncated_normal_initializer(stddev=0.02))
<ide>
<del> output_bias = tf.get_variable(
<del> "output_bias", [num_labels], initializer=tf.zeros_initializer())
<add> output_bias = tf.get_variable(
<add> "output_bias", [num_labels], initializer=tf.zeros_initializer())
<ide>
<del> with tf.variable_scope("loss"):
<del> if is_training:
<del> # I.e., 0.1 dropout
<del> output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
<add> with tf.variable_scope("loss"):
<add> if is_training:
<add> # I.e., 0.1 dropout
<add> output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
<ide>
<del> logits = tf.matmul(output_layer, output_weights, transpose_b=True)
<del> logits = tf.nn.bias_add(logits, output_bias)
<del> log_probs = tf.nn.log_softmax(logits, axis=-1)
<add> logits = tf.matmul(output_layer, output_weights, transpose_b=True)
<add> logits = tf.nn.bias_add(logits, output_bias)
<add> log_probs = tf.nn.log_softmax(logits, axis=-1)
<ide>
<del> one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
<add> one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
<ide>
<del> per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
<del> loss = tf.reduce_mean(per_example_loss)
<add> per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
<add> loss = tf.reduce_mean(per_example_loss)
<ide>
<del> return (loss, per_example_loss, logits)
<add> return (loss, per_example_loss, logits)
<ide>
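The loss scope in `create_model` — log-softmax over the logits, then the negative log-likelihood of the one-hot label — has this NumPy equivalent (a sketch for illustration; the name is made up):

```python
import numpy as np

def classifier_loss(logits, label_ids, num_labels):
    """Mean NLL from log-softmax + one-hot labels, as in create_model above."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    one_hot = np.eye(num_labels)[label_ids]
    per_example = -(one_hot * log_probs).sum(axis=-1)
    return per_example.mean(), per_example
```

With uniform logits over two classes, each example's loss is exactly `log(2)`, which is a handy sanity check.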
<ide>
<ide> def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
<ide> num_train_steps, num_warmup_steps, use_tpu,
<ide> use_one_hot_embeddings):
<del> """Returns `model_fn` closure for TPUEstimator."""
<del>
<del> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<del> """The `model_fn` for TPUEstimator."""
<del>
<del> tf.logging.info("*** Features ***")
<del> for name in sorted(features.keys()):
<del> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<del>
<del> input_ids = features["input_ids"]
<del> input_mask = features["input_mask"]
<del> segment_ids = features["segment_ids"]
<del> label_ids = features["label_ids"]
<del>
<del> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<del>
<del> (total_loss, per_example_loss, logits) = create_model(
<del> bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
<del> num_labels, use_one_hot_embeddings)
<del>
<del> tvars = tf.trainable_variables()
<del>
<del> scaffold_fn = None
<del> if init_checkpoint:
<del> (assignment_map,
<del> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<del> tvars, init_checkpoint)
<del> if use_tpu:
<del>
<del> def tpu_scaffold():
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del> return tf.train.Scaffold()
<del>
<del> scaffold_fn = tpu_scaffold
<del> else:
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del>
<del> tf.logging.info("**** Trainable Variables ****")
<del> for var in tvars:
<del> init_string = ""
<del> if var.name in initialized_variable_names:
<del> init_string = ", *INIT_FROM_CKPT*"
<del> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<del> init_string)
<del>
<del> output_spec = None
<del> if mode == tf.estimator.ModeKeys.TRAIN:
<del>
<del> train_op = optimization.create_optimizer(
<del> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<del>
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode,
<del> loss=total_loss,
<del> train_op=train_op,
<del> scaffold_fn=scaffold_fn)
<del> elif mode == tf.estimator.ModeKeys.EVAL:
<del>
<del> def metric_fn(per_example_loss, label_ids, logits):
<del> predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
<del> accuracy = tf.metrics.accuracy(label_ids, predictions)
<del> loss = tf.metrics.mean(per_example_loss)
<del> return {
<del> "eval_accuracy": accuracy,
<del> "eval_loss": loss,
<del> }
<del>
<del> eval_metrics = (metric_fn, [per_example_loss, label_ids, logits])
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode,
<del> loss=total_loss,
<del> eval_metrics=eval_metrics,
<del> scaffold_fn=scaffold_fn)
<del> else:
<del> raise ValueError("Only TRAIN and EVAL modes are supported: %s" % (mode))
<del>
<del> return output_spec
<del>
<del> return model_fn
<add> """Returns `model_fn` closure for TPUEstimator."""
<add>
<add> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<add> """The `model_fn` for TPUEstimator."""
<add>
<add> tf.logging.info("*** Features ***")
<add> for name in sorted(features.keys()):
<add> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<add>
<add> input_ids = features["input_ids"]
<add> input_mask = features["input_mask"]
<add> segment_ids = features["segment_ids"]
<add> label_ids = features["label_ids"]
<add>
<add> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<add>
<add> (total_loss, per_example_loss, logits) = create_model(
<add> bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
<add> num_labels, use_one_hot_embeddings)
<add>
<add> tvars = tf.trainable_variables()
<add>
<add>      initialized_variable_names = {}
<add>      scaffold_fn = None
<add> if init_checkpoint:
<add> (assignment_map,
<add> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<add> tvars, init_checkpoint)
<add> if use_tpu:
<add>
<add> def tpu_scaffold():
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add> return tf.train.Scaffold()
<add>
<add> scaffold_fn = tpu_scaffold
<add> else:
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add>
<add> tf.logging.info("**** Trainable Variables ****")
<add> for var in tvars:
<add> init_string = ""
<add> if var.name in initialized_variable_names:
<add> init_string = ", *INIT_FROM_CKPT*"
<add> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<add> init_string)
<add>
<add> output_spec = None
<add> if mode == tf.estimator.ModeKeys.TRAIN:
<add>
<add> train_op = optimization.create_optimizer(
<add> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<add>
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode,
<add> loss=total_loss,
<add> train_op=train_op,
<add> scaffold_fn=scaffold_fn)
<add> elif mode == tf.estimator.ModeKeys.EVAL:
<add>
<add> def metric_fn(per_example_loss, label_ids, logits):
<add> predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
<add> accuracy = tf.metrics.accuracy(label_ids, predictions)
<add> loss = tf.metrics.mean(per_example_loss)
<add> return {
<add> "eval_accuracy": accuracy,
<add> "eval_loss": loss,
<add> }
<add>
<add> eval_metrics = (metric_fn, [per_example_loss, label_ids, logits])
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode,
<add> loss=total_loss,
<add> eval_metrics=eval_metrics,
<add> scaffold_fn=scaffold_fn)
<add> else:
<add> raise ValueError("Only TRAIN and EVAL modes are supported: %s" % (mode))
<add>
<add> return output_spec
<add>
<add> return model_fn
<ide>
<ide>
<ide> def input_fn_builder(features, seq_length, is_training, drop_remainder):
<del> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<del>
<del> all_input_ids = []
<del> all_input_mask = []
<del> all_segment_ids = []
<del> all_label_ids = []
<del>
<del> for feature in features:
<del> all_input_ids.append(feature.input_ids)
<del> all_input_mask.append(feature.input_mask)
<del> all_segment_ids.append(feature.segment_ids)
<del> all_label_ids.append(feature.label_id)
<del>
<del> def input_fn(params):
<del> """The actual input function."""
<del> batch_size = params["batch_size"]
<del>
<del> num_examples = len(features)
<del>
<del> # This is for demo purposes and does NOT scale to large data sets. We do
<del> # not use Dataset.from_generator() because that uses tf.py_func which is
<del> # not TPU compatible. The right way to load data is with TFRecordReader.
<del> d = tf.data.Dataset.from_tensor_slices({
<del> "input_ids":
<del> tf.constant(
<del> all_input_ids, shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "input_mask":
<del> tf.constant(
<del> all_input_mask,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "segment_ids":
<del> tf.constant(
<del> all_segment_ids,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "label_ids":
<del> tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
<del> })
<del>
<del> if is_training:
<del> d = d.repeat()
<del> d = d.shuffle(buffer_size=100)
<del>
<del> d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
<del> return d
<del>
<del> return input_fn
<add> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<add>
<add> all_input_ids = []
<add> all_input_mask = []
<add> all_segment_ids = []
<add> all_label_ids = []
<add>
<add> for feature in features:
<add> all_input_ids.append(feature.input_ids)
<add> all_input_mask.append(feature.input_mask)
<add> all_segment_ids.append(feature.segment_ids)
<add> all_label_ids.append(feature.label_id)
<add>
<add> def input_fn(params):
<add> """The actual input function."""
<add> batch_size = params["batch_size"]
<add>
<add> num_examples = len(features)
<add>
<add> # This is for demo purposes and does NOT scale to large data sets. We do
<add> # not use Dataset.from_generator() because that uses tf.py_func which is
<add> # not TPU compatible. The right way to load data is with TFRecordReader.
<add> d = tf.data.Dataset.from_tensor_slices({
<add> "input_ids":
<add> tf.constant(
<add> all_input_ids, shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "input_mask":
<add> tf.constant(
<add> all_input_mask,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "segment_ids":
<add> tf.constant(
<add> all_segment_ids,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "label_ids":
<add> tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
<add> })
<add>
<add> if is_training:
<add> d = d.repeat()
<add> d = d.shuffle(buffer_size=100)
<add>
<add> d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
<add> return d
<add>
<add> return input_fn
<ide>
<ide>
<ide> def main(_):
<del> tf.logging.set_verbosity(tf.logging.INFO)
<del>
<del> processors = {
<del> "cola": ColaProcessor,
<del> "mnli": MnliProcessor,
<del> "mrpc": MrpcProcessor,
<del> }
<del>
<del> if not FLAGS.do_train and not FLAGS.do_eval:
<del> raise ValueError("At least one of `do_train` or `do_eval` must be True.")
<del>
<del> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<del>
<del> if FLAGS.max_seq_length > bert_config.max_position_embeddings:
<del> raise ValueError(
<del> "Cannot use sequence length %d because the BERT model "
<del> "was only trained up to sequence length %d" %
<del> (FLAGS.max_seq_length, bert_config.max_position_embeddings))
<del>
<del> tf.gfile.MakeDirs(FLAGS.output_dir)
<del>
<del> task_name = FLAGS.task_name.lower()
<del>
<del> if task_name not in processors:
<del> raise ValueError("Task not found: %s" % (task_name))
<del>
<del> processor = processors[task_name]()
<del>
<del> label_list = processor.get_labels()
<del>
<del> tokenizer = tokenization.FullTokenizer(
<del> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<del>
<del> tpu_cluster_resolver = None
<del> if FLAGS.use_tpu and FLAGS.tpu_name:
<del> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<del> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<del>
<del> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<del> run_config = tf.contrib.tpu.RunConfig(
<del> cluster=tpu_cluster_resolver,
<del> master=FLAGS.master,
<del> model_dir=FLAGS.output_dir,
<del> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<del> tpu_config=tf.contrib.tpu.TPUConfig(
<del> iterations_per_loop=FLAGS.iterations_per_loop,
<del> num_shards=FLAGS.num_tpu_cores,
<del> per_host_input_for_training=is_per_host))
<del>
<del> train_examples = None
<del> num_train_steps = None
<del> num_warmup_steps = None
<del> if FLAGS.do_train:
<del> train_examples = processor.get_train_examples(FLAGS.data_dir)
<del> num_train_steps = int(
<del> len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
<del> num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
<del>
<del> model_fn = model_fn_builder(
<del> bert_config=bert_config,
<del> num_labels=len(label_list),
<del> init_checkpoint=FLAGS.init_checkpoint,
<del> learning_rate=FLAGS.learning_rate,
<del> num_train_steps=num_train_steps,
<del> num_warmup_steps=num_warmup_steps,
<del> use_tpu=FLAGS.use_tpu,
<del> use_one_hot_embeddings=FLAGS.use_tpu)
<del>
<del> # If TPU is not available, this will fall back to normal Estimator on CPU
<del> # or GPU.
<del> estimator = tf.contrib.tpu.TPUEstimator(
<del> use_tpu=FLAGS.use_tpu,
<del> model_fn=model_fn,
<del> config=run_config,
<del> train_batch_size=FLAGS.train_batch_size,
<del> eval_batch_size=FLAGS.eval_batch_size)
<del>
<del> if FLAGS.do_train:
<del> train_features = convert_examples_to_features(
<del> train_examples, label_list, FLAGS.max_seq_length, tokenizer)
<del> tf.logging.info("***** Running training *****")
<del> tf.logging.info(" Num examples = %d", len(train_examples))
<del> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<del> tf.logging.info(" Num steps = %d", num_train_steps)
<del> train_input_fn = input_fn_builder(
<del> features=train_features,
<del> seq_length=FLAGS.max_seq_length,
<del> is_training=True,
<del> drop_remainder=True)
<del> estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
<del>
<del> if FLAGS.do_eval:
<del> eval_examples = processor.get_dev_examples(FLAGS.data_dir)
<del> eval_features = convert_examples_to_features(
<del> eval_examples, label_list, FLAGS.max_seq_length, tokenizer)
<del>
<del> tf.logging.info("***** Running evaluation *****")
<del> tf.logging.info(" Num examples = %d", len(eval_examples))
<del> tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
<del>
<del> # This tells the estimator to run through the entire set.
<del> eval_steps = None
<del> # However, if running eval on the TPU, you will need to specify the
<del> # number of steps.
<del> if FLAGS.use_tpu:
<del> # Eval will be slightly WRONG on the TPU because it will truncate
<del> # the last batch.
<del> eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size)
<del>
<del> eval_drop_remainder = True if FLAGS.use_tpu else False
<del> eval_input_fn = input_fn_builder(
<del> features=eval_features,
<del> seq_length=FLAGS.max_seq_length,
<del> is_training=False,
<del> drop_remainder=eval_drop_remainder)
<del>
<del> result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
<del>
<del> output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
<del> with tf.gfile.GFile(output_eval_file, "w") as writer:
<del> tf.logging.info("***** Eval results *****")
<del> for key in sorted(result.keys()):
<del> tf.logging.info(" %s = %s", key, str(result[key]))
<del> writer.write("%s = %s\n" % (key, str(result[key])))
<add> tf.logging.set_verbosity(tf.logging.INFO)
<add>
<add> processors = {
<add> "cola": ColaProcessor,
<add> "mnli": MnliProcessor,
<add> "mrpc": MrpcProcessor,
<add> }
<add>
<add> if not FLAGS.do_train and not FLAGS.do_eval:
<add> raise ValueError("At least one of `do_train` or `do_eval` must be True.")
<add>
<add> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<add>
<add> if FLAGS.max_seq_length > bert_config.max_position_embeddings:
<add> raise ValueError(
<add> "Cannot use sequence length %d because the BERT model "
<add> "was only trained up to sequence length %d" %
<add> (FLAGS.max_seq_length, bert_config.max_position_embeddings))
<add>
<add> tf.gfile.MakeDirs(FLAGS.output_dir)
<add>
<add> task_name = FLAGS.task_name.lower()
<add>
<add> if task_name not in processors:
<add> raise ValueError("Task not found: %s" % (task_name))
<add>
<add> processor = processors[task_name]()
<add>
<add> label_list = processor.get_labels()
<add>
<add> tokenizer = tokenization.FullTokenizer(
<add> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<add>
<add> tpu_cluster_resolver = None
<add> if FLAGS.use_tpu and FLAGS.tpu_name:
<add> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<add> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<add>
<add> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<add> run_config = tf.contrib.tpu.RunConfig(
<add> cluster=tpu_cluster_resolver,
<add> master=FLAGS.master,
<add> model_dir=FLAGS.output_dir,
<add> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<add> tpu_config=tf.contrib.tpu.TPUConfig(
<add> iterations_per_loop=FLAGS.iterations_per_loop,
<add> num_shards=FLAGS.num_tpu_cores,
<add> per_host_input_for_training=is_per_host))
<add>
<add> train_examples = None
<add> num_train_steps = None
<add> num_warmup_steps = None
<add> if FLAGS.do_train:
<add> train_examples = processor.get_train_examples(FLAGS.data_dir)
<add> num_train_steps = int(
<add> len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
<add> num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
<add>
<add> model_fn = model_fn_builder(
<add> bert_config=bert_config,
<add> num_labels=len(label_list),
<add> init_checkpoint=FLAGS.init_checkpoint,
<add> learning_rate=FLAGS.learning_rate,
<add> num_train_steps=num_train_steps,
<add> num_warmup_steps=num_warmup_steps,
<add> use_tpu=FLAGS.use_tpu,
<add> use_one_hot_embeddings=FLAGS.use_tpu)
<add>
<add> # If TPU is not available, this will fall back to normal Estimator on CPU
<add> # or GPU.
<add> estimator = tf.contrib.tpu.TPUEstimator(
<add> use_tpu=FLAGS.use_tpu,
<add> model_fn=model_fn,
<add> config=run_config,
<add> train_batch_size=FLAGS.train_batch_size,
<add> eval_batch_size=FLAGS.eval_batch_size)
<add>
<add> if FLAGS.do_train:
<add> train_features = convert_examples_to_features(
<add> train_examples, label_list, FLAGS.max_seq_length, tokenizer)
<add> tf.logging.info("***** Running training *****")
<add> tf.logging.info(" Num examples = %d", len(train_examples))
<add> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<add> tf.logging.info(" Num steps = %d", num_train_steps)
<add> train_input_fn = input_fn_builder(
<add> features=train_features,
<add> seq_length=FLAGS.max_seq_length,
<add> is_training=True,
<add> drop_remainder=True)
<add> estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
<add>
<add> if FLAGS.do_eval:
<add> eval_examples = processor.get_dev_examples(FLAGS.data_dir)
<add> eval_features = convert_examples_to_features(
<add> eval_examples, label_list, FLAGS.max_seq_length, tokenizer)
<add>
<add> tf.logging.info("***** Running evaluation *****")
<add> tf.logging.info(" Num examples = %d", len(eval_examples))
<add> tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
<add>
<add> # This tells the estimator to run through the entire set.
<add> eval_steps = None
<add> # However, if running eval on the TPU, you will need to specify the
<add> # number of steps.
<add> if FLAGS.use_tpu:
<add> # Eval will be slightly WRONG on the TPU because it will truncate
<add> # the last batch.
<add> eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size)
<add>
<add> eval_drop_remainder = True if FLAGS.use_tpu else False
<add> eval_input_fn = input_fn_builder(
<add> features=eval_features,
<add> seq_length=FLAGS.max_seq_length,
<add> is_training=False,
<add> drop_remainder=eval_drop_remainder)
<add>
<add> result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
<add>
<add> output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
<add> with tf.gfile.GFile(output_eval_file, "w") as writer:
<add> tf.logging.info("***** Eval results *****")
<add> for key in sorted(result.keys()):
<add> tf.logging.info(" %s = %s", key, str(result[key]))
<add> writer.write("%s = %s\n" % (key, str(result[key])))
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> flags.mark_flag_as_required("data_dir")
<del> flags.mark_flag_as_required("task_name")
<del> flags.mark_flag_as_required("vocab_file")
<del> flags.mark_flag_as_required("bert_config_file")
<del> flags.mark_flag_as_required("output_dir")
<del> tf.app.run()
<add> flags.mark_flag_as_required("data_dir")
<add> flags.mark_flag_as_required("task_name")
<add> flags.mark_flag_as_required("vocab_file")
<add> flags.mark_flag_as_required("bert_config_file")
<add> flags.mark_flag_as_required("output_dir")
<add> tf.app.run()
<ide><path>run_pretraining.py
<ide> def model_fn_builder(bert_config, init_checkpoint, learning_rate,
<ide> num_train_steps, num_warmup_steps, use_tpu,
<ide> use_one_hot_embeddings):
<del> """Returns `model_fn` closure for TPUEstimator."""
<del>
<del> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<del> """The `model_fn` for TPUEstimator."""
<del>
<del> tf.logging.info("*** Features ***")
<del> for name in sorted(features.keys()):
<del> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<del>
<del> input_ids = features["input_ids"]
<del> input_mask = features["input_mask"]
<del> segment_ids = features["segment_ids"]
<del> masked_lm_positions = features["masked_lm_positions"]
<del> masked_lm_ids = features["masked_lm_ids"]
<del> masked_lm_weights = features["masked_lm_weights"]
<del> next_sentence_labels = features["next_sentence_labels"]
<del>
<del> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<del>
<del> model = modeling.BertModel(
<del> config=bert_config,
<del> is_training=is_training,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> token_type_ids=segment_ids,
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<del>
<del> (masked_lm_loss,
<del> masked_lm_example_loss, masked_lm_log_probs) = get_masked_lm_output(
<del> bert_config, model.get_sequence_output(), model.get_embedding_table(),
<del> masked_lm_positions, masked_lm_ids, masked_lm_weights)
<del>
<del> (next_sentence_loss, next_sentence_example_loss,
<del> next_sentence_log_probs) = get_next_sentence_output(
<del> bert_config, model.get_pooled_output(), next_sentence_labels)
<del>
<del> total_loss = masked_lm_loss + next_sentence_loss
<del>
<del> tvars = tf.trainable_variables()
<del>
<del> initialized_variable_names = {}
<del> scaffold_fn = None
<del> if init_checkpoint:
<del> (assignment_map,
<del> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<del> tvars, init_checkpoint)
<del> if use_tpu:
<del>
<del> def tpu_scaffold():
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del> return tf.train.Scaffold()
<del>
<del> scaffold_fn = tpu_scaffold
<del> else:
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del>
<del> tf.logging.info("**** Trainable Variables ****")
<del> for var in tvars:
<del> init_string = ""
<del> if var.name in initialized_variable_names:
<del> init_string = ", *INIT_FROM_CKPT*"
<del> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<del> init_string)
<del>
<del> output_spec = None
<del> if mode == tf.estimator.ModeKeys.TRAIN:
<del> train_op = optimization.create_optimizer(
<del> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<del>
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode,
<del> loss=total_loss,
<del> train_op=train_op,
<del> scaffold_fn=scaffold_fn)
<del> elif mode == tf.estimator.ModeKeys.EVAL:
<del>
<del> def metric_fn(masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
<del> masked_lm_weights, next_sentence_example_loss,
<del> next_sentence_log_probs, next_sentence_labels):
<del> """Computes the loss and accuracy of the model."""
<del> masked_lm_log_probs = tf.reshape(masked_lm_log_probs,
<del> [-1, masked_lm_log_probs.shape[-1]])
<del> masked_lm_predictions = tf.argmax(
<del> masked_lm_log_probs, axis=-1, output_type=tf.int32)
<del> masked_lm_example_loss = tf.reshape(masked_lm_example_loss, [-1])
<del> masked_lm_ids = tf.reshape(masked_lm_ids, [-1])
<del> masked_lm_weights = tf.reshape(masked_lm_weights, [-1])
<del> masked_lm_accuracy = tf.metrics.accuracy(
<del> labels=masked_lm_ids,
<del> predictions=masked_lm_predictions,
<del> weights=masked_lm_weights)
<del> masked_lm_mean_loss = tf.metrics.mean(
<del> values=masked_lm_example_loss, weights=masked_lm_weights)
<del>
<del> next_sentence_log_probs = tf.reshape(
<del> next_sentence_log_probs, [-1, next_sentence_log_probs.shape[-1]])
<del> next_sentence_predictions = tf.argmax(
<del> next_sentence_log_probs, axis=-1, output_type=tf.int32)
<del> next_sentence_labels = tf.reshape(next_sentence_labels, [-1])
<del> next_sentence_accuracy = tf.metrics.accuracy(
<del> labels=next_sentence_labels, predictions=next_sentence_predictions)
<del> next_sentence_mean_loss = tf.metrics.mean(
<del> values=next_sentence_example_loss)
<del>
<del> return {
<del> "masked_lm_accuracy": masked_lm_accuracy,
<del> "masked_lm_loss": masked_lm_mean_loss,
<del> "next_sentence_accuracy": next_sentence_accuracy,
<del> "next_sentence_loss": next_sentence_mean_loss,
<del> }
<del>
<del> eval_metrics = (metric_fn, [
<del> masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
<del> masked_lm_weights, next_sentence_example_loss,
<del> next_sentence_log_probs, next_sentence_labels
<del> ])
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode,
<del> loss=total_loss,
<del> eval_metrics=eval_metrics,
<del> scaffold_fn=scaffold_fn)
<del> else:
<del> raise ValueError("Only TRAIN and EVAL modes are supported: %s" % (mode))
<del>
<del> return output_spec
<del>
<del> return model_fn
<add> """Returns `model_fn` closure for TPUEstimator."""
<add>
<add> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<add> """The `model_fn` for TPUEstimator."""
<add>
<add> tf.logging.info("*** Features ***")
<add> for name in sorted(features.keys()):
<add> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<add>
<add> input_ids = features["input_ids"]
<add> input_mask = features["input_mask"]
<add> segment_ids = features["segment_ids"]
<add> masked_lm_positions = features["masked_lm_positions"]
<add> masked_lm_ids = features["masked_lm_ids"]
<add> masked_lm_weights = features["masked_lm_weights"]
<add> next_sentence_labels = features["next_sentence_labels"]
<add>
<add> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<add>
<add> model = modeling.BertModel(
<add> config=bert_config,
<add> is_training=is_training,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> token_type_ids=segment_ids,
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<add>
<add> (masked_lm_loss,
<add> masked_lm_example_loss, masked_lm_log_probs) = get_masked_lm_output(
<add> bert_config, model.get_sequence_output(), model.get_embedding_table(),
<add> masked_lm_positions, masked_lm_ids, masked_lm_weights)
<add>
<add> (next_sentence_loss, next_sentence_example_loss,
<add> next_sentence_log_probs) = get_next_sentence_output(
<add> bert_config, model.get_pooled_output(), next_sentence_labels)
<add>
<add> total_loss = masked_lm_loss + next_sentence_loss
<add>
<add> tvars = tf.trainable_variables()
<add>
<add> initialized_variable_names = {}
<add> scaffold_fn = None
<add> if init_checkpoint:
<add> (assignment_map,
<add> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<add> tvars, init_checkpoint)
<add> if use_tpu:
<add>
<add> def tpu_scaffold():
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add> return tf.train.Scaffold()
<add>
<add> scaffold_fn = tpu_scaffold
<add> else:
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add>
<add> tf.logging.info("**** Trainable Variables ****")
<add> for var in tvars:
<add> init_string = ""
<add> if var.name in initialized_variable_names:
<add> init_string = ", *INIT_FROM_CKPT*"
<add> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<add> init_string)
<add>
<add> output_spec = None
<add> if mode == tf.estimator.ModeKeys.TRAIN:
<add> train_op = optimization.create_optimizer(
<add> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<add>
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode,
<add> loss=total_loss,
<add> train_op=train_op,
<add> scaffold_fn=scaffold_fn)
<add> elif mode == tf.estimator.ModeKeys.EVAL:
<add>
<add> def metric_fn(masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
<add> masked_lm_weights, next_sentence_example_loss,
<add> next_sentence_log_probs, next_sentence_labels):
<add> """Computes the loss and accuracy of the model."""
<add> masked_lm_log_probs = tf.reshape(masked_lm_log_probs,
<add> [-1, masked_lm_log_probs.shape[-1]])
<add> masked_lm_predictions = tf.argmax(
<add> masked_lm_log_probs, axis=-1, output_type=tf.int32)
<add> masked_lm_example_loss = tf.reshape(masked_lm_example_loss, [-1])
<add> masked_lm_ids = tf.reshape(masked_lm_ids, [-1])
<add> masked_lm_weights = tf.reshape(masked_lm_weights, [-1])
<add> masked_lm_accuracy = tf.metrics.accuracy(
<add> labels=masked_lm_ids,
<add> predictions=masked_lm_predictions,
<add> weights=masked_lm_weights)
<add> masked_lm_mean_loss = tf.metrics.mean(
<add> values=masked_lm_example_loss, weights=masked_lm_weights)
<add>
<add> next_sentence_log_probs = tf.reshape(
<add> next_sentence_log_probs, [-1, next_sentence_log_probs.shape[-1]])
<add> next_sentence_predictions = tf.argmax(
<add> next_sentence_log_probs, axis=-1, output_type=tf.int32)
<add> next_sentence_labels = tf.reshape(next_sentence_labels, [-1])
<add> next_sentence_accuracy = tf.metrics.accuracy(
<add> labels=next_sentence_labels, predictions=next_sentence_predictions)
<add> next_sentence_mean_loss = tf.metrics.mean(
<add> values=next_sentence_example_loss)
<add>
<add> return {
<add> "masked_lm_accuracy": masked_lm_accuracy,
<add> "masked_lm_loss": masked_lm_mean_loss,
<add> "next_sentence_accuracy": next_sentence_accuracy,
<add> "next_sentence_loss": next_sentence_mean_loss,
<add> }
<add>
<add> eval_metrics = (metric_fn, [
<add> masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
<add> masked_lm_weights, next_sentence_example_loss,
<add> next_sentence_log_probs, next_sentence_labels
<add> ])
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode,
<add> loss=total_loss,
<add> eval_metrics=eval_metrics,
<add> scaffold_fn=scaffold_fn)
<add> else:
<add> raise ValueError("Only TRAIN and EVAL modes are supported: %s" % (mode))
<add>
<add> return output_spec
<add>
<add> return model_fn
<ide>
<ide>
<ide> def get_masked_lm_output(bert_config, input_tensor, output_weights, positions,
<ide> label_ids, label_weights):
<del> """Get loss and log probs for the masked LM."""
<del> input_tensor = gather_indexes(input_tensor, positions)
<del>
<del> with tf.variable_scope("cls/predictions"):
<del> # We apply one more non-linear transformation before the output layer.
<del> # This matrix is not used after pre-training.
<del> with tf.variable_scope("transform"):
<del> input_tensor = tf.layers.dense(
<del> input_tensor,
<del> units=bert_config.hidden_size,
<del> activation=modeling.get_activation(bert_config.hidden_act),
<del> kernel_initializer=modeling.create_initializer(
<del> bert_config.initializer_range))
<del> input_tensor = modeling.layer_norm(input_tensor)
<del>
<del> # The output weights are the same as the input embeddings, but there is
<del> # an output-only bias for each token.
<del> output_bias = tf.get_variable(
<del> "output_bias",
<del> shape=[bert_config.vocab_size],
<del> initializer=tf.zeros_initializer())
<del> logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
<del> logits = tf.nn.bias_add(logits, output_bias)
<del> log_probs = tf.nn.log_softmax(logits, axis=-1)
<del>
<del> label_ids = tf.reshape(label_ids, [-1])
<del> label_weights = tf.reshape(label_weights, [-1])
<del>
<del> one_hot_labels = tf.one_hot(
<del> label_ids, depth=bert_config.vocab_size, dtype=tf.float32)
<del>
<del> # The `positions` tensor might be zero-padded (if the sequence is too
<del> # short to have the maximum number of predictions). The `label_weights`
<del> # tensor has a value of 1.0 for every real prediction and 0.0 for the
<del> # padding predictions.
<del> per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1])
<del> numerator = tf.reduce_sum(label_weights * per_example_loss)
<del> denominator = tf.reduce_sum(label_weights) + 1e-5
<del> loss = numerator / denominator
<del>
<del> return (loss, per_example_loss, log_probs)
<add> """Get loss and log probs for the masked LM."""
<add> input_tensor = gather_indexes(input_tensor, positions)
<add>
<add> with tf.variable_scope("cls/predictions"):
<add> # We apply one more non-linear transformation before the output layer.
<add> # This matrix is not used after pre-training.
<add> with tf.variable_scope("transform"):
<add> input_tensor = tf.layers.dense(
<add> input_tensor,
<add> units=bert_config.hidden_size,
<add> activation=modeling.get_activation(bert_config.hidden_act),
<add> kernel_initializer=modeling.create_initializer(
<add> bert_config.initializer_range))
<add> input_tensor = modeling.layer_norm(input_tensor)
<add>
<add> # The output weights are the same as the input embeddings, but there is
<add> # an output-only bias for each token.
<add> output_bias = tf.get_variable(
<add> "output_bias",
<add> shape=[bert_config.vocab_size],
<add> initializer=tf.zeros_initializer())
<add> logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
<add> logits = tf.nn.bias_add(logits, output_bias)
<add> log_probs = tf.nn.log_softmax(logits, axis=-1)
<add>
<add> label_ids = tf.reshape(label_ids, [-1])
<add> label_weights = tf.reshape(label_weights, [-1])
<add>
<add> one_hot_labels = tf.one_hot(
<add> label_ids, depth=bert_config.vocab_size, dtype=tf.float32)
<add>
<add> # The `positions` tensor might be zero-padded (if the sequence is too
<add> # short to have the maximum number of predictions). The `label_weights`
<add> # tensor has a value of 1.0 for every real prediction and 0.0 for the
<add> # padding predictions.
<add> per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=[-1])
<add> numerator = tf.reduce_sum(label_weights * per_example_loss)
<add> denominator = tf.reduce_sum(label_weights) + 1e-5
<add> loss = numerator / denominator
<add>
<add> return (loss, per_example_loss, log_probs)
<ide>
<ide>
<ide> def get_next_sentence_output(bert_config, input_tensor, labels):
<del> """Get loss and log probs for the next sentence prediction."""
<del>
<del> # Simple binary classification. Note that 0 is "next sentence" and 1 is
<del> # "random sentence". This weight matrix is not used after pre-training.
<del> with tf.variable_scope("cls/seq_relationship"):
<del> output_weights = tf.get_variable(
<del> "output_weights",
<del> shape=[2, bert_config.hidden_size],
<del> initializer=modeling.create_initializer(bert_config.initializer_range))
<del> output_bias = tf.get_variable(
<del> "output_bias", shape=[2], initializer=tf.zeros_initializer())
<del>
<del> logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
<del> logits = tf.nn.bias_add(logits, output_bias)
<del> log_probs = tf.nn.log_softmax(logits, axis=-1)
<del> labels = tf.reshape(labels, [-1])
<del> one_hot_labels = tf.one_hot(labels, depth=2, dtype=tf.float32)
<del> per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
<del> loss = tf.reduce_mean(per_example_loss)
<del> return (loss, per_example_loss, log_probs)
<add> """Get loss and log probs for the next sentence prediction."""
<add>
<add> # Simple binary classification. Note that 0 is "next sentence" and 1 is
<add> # "random sentence". This weight matrix is not used after pre-training.
<add> with tf.variable_scope("cls/seq_relationship"):
<add> output_weights = tf.get_variable(
<add> "output_weights",
<add> shape=[2, bert_config.hidden_size],
<add> initializer=modeling.create_initializer(bert_config.initializer_range))
<add> output_bias = tf.get_variable(
<add> "output_bias", shape=[2], initializer=tf.zeros_initializer())
<add>
<add> logits = tf.matmul(input_tensor, output_weights, transpose_b=True)
<add> logits = tf.nn.bias_add(logits, output_bias)
<add> log_probs = tf.nn.log_softmax(logits, axis=-1)
<add> labels = tf.reshape(labels, [-1])
<add> one_hot_labels = tf.one_hot(labels, depth=2, dtype=tf.float32)
<add> per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
<add> loss = tf.reduce_mean(per_example_loss)
<add> return (loss, per_example_loss, log_probs)
<ide>
<ide>
<ide> def gather_indexes(sequence_tensor, positions):
<del> """Gathers the vectors at the specific positions over a minibatch."""
<del> sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3)
<del> batch_size = sequence_shape[0]
<del> seq_length = sequence_shape[1]
<del> width = sequence_shape[2]
<add> """Gathers the vectors at the specific positions over a minibatch."""
<add> sequence_shape = modeling.get_shape_list(sequence_tensor, expected_rank=3)
<add> batch_size = sequence_shape[0]
<add> seq_length = sequence_shape[1]
<add> width = sequence_shape[2]
<ide>
<del> flat_offsets = tf.reshape(
<del> tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])
<del> flat_positions = tf.reshape(positions + flat_offsets, [-1])
<del> flat_sequence_tensor = tf.reshape(sequence_tensor,
<del> [batch_size * seq_length, width])
<del> output_tensor = tf.gather(flat_sequence_tensor, flat_positions)
<del> return output_tensor
<add> flat_offsets = tf.reshape(
<add> tf.range(0, batch_size, dtype=tf.int32) * seq_length, [-1, 1])
<add> flat_positions = tf.reshape(positions + flat_offsets, [-1])
<add> flat_sequence_tensor = tf.reshape(sequence_tensor,
<add> [batch_size * seq_length, width])
<add> output_tensor = tf.gather(flat_sequence_tensor, flat_positions)
<add> return output_tensor
<ide>
<ide>
<ide> def input_fn_builder(input_files,
<ide> max_seq_length,
<ide> max_predictions_per_seq,
<ide> is_training,
<ide> num_cpu_threads=4):
<del> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<del>
<del> def input_fn(params):
<del> """The actual input function."""
<del> batch_size = params["batch_size"]
<del>
<del> name_to_features = {
<del> "input_ids":
<del> tf.FixedLenFeature([max_seq_length], tf.int64),
<del> "input_mask":
<del> tf.FixedLenFeature([max_seq_length], tf.int64),
<del> "segment_ids":
<del> tf.FixedLenFeature([max_seq_length], tf.int64),
<del> "masked_lm_positions":
<del> tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
<del> "masked_lm_ids":
<del> tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
<del> "masked_lm_weights":
<del> tf.FixedLenFeature([max_predictions_per_seq], tf.float32),
<del> "next_sentence_labels":
<del> tf.FixedLenFeature([1], tf.int64),
<del> }
<del>
<del> # For training, we want a lot of parallel reading and shuffling.
<del> # For eval, we want no shuffling and parallel reading doesn't matter.
<del> if is_training:
<del> d = tf.data.Dataset.from_tensor_slices(tf.constant(input_files))
<del> d = d.repeat()
<del> d = d.shuffle(buffer_size=len(input_files))
<del>
<del> # `cycle_length` is the number of parallel files that get read.
<del> cycle_length = min(num_cpu_threads, len(input_files))
<del>
<del> # `sloppy` mode means that the interleaving is not exact. This adds
<del> # even more randomness to the training pipeline.
<del> d = d.apply(
<del> tf.contrib.data.parallel_interleave(
<del> tf.data.TFRecordDataset,
<del> sloppy=is_training,
<del> cycle_length=cycle_length))
<del> d = d.shuffle(buffer_size=100)
<del> else:
<del> d = tf.data.TFRecordDataset(input_files)
<del> # Since we evaluate for a fixed number of steps we don't want to encounter
<del> # out-of-range exceptions.
<del> d = d.repeat()
<del>
<del> # We must `drop_remainder` on training because the TPU requires fixed
<del> # size dimensions. For eval, we assume we are evaling on the CPU or GPU
<del> # and we *don"t* want to drop the remainder, otherwise we wont cover
<del> # every sample.
<del> d = d.apply(
<del> tf.contrib.data.map_and_batch(
<del> lambda record: _decode_record(record, name_to_features),
<del> batch_size=batch_size,
<del> num_parallel_batches=num_cpu_threads,
<del> drop_remainder=True))
<del> return d
<del>
<del> return input_fn
<add> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<add>
<add> def input_fn(params):
<add> """The actual input function."""
<add> batch_size = params["batch_size"]
<add>
<add> name_to_features = {
<add> "input_ids":
<add> tf.FixedLenFeature([max_seq_length], tf.int64),
<add> "input_mask":
<add> tf.FixedLenFeature([max_seq_length], tf.int64),
<add> "segment_ids":
<add> tf.FixedLenFeature([max_seq_length], tf.int64),
<add> "masked_lm_positions":
<add> tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
<add> "masked_lm_ids":
<add> tf.FixedLenFeature([max_predictions_per_seq], tf.int64),
<add> "masked_lm_weights":
<add> tf.FixedLenFeature([max_predictions_per_seq], tf.float32),
<add> "next_sentence_labels":
<add> tf.FixedLenFeature([1], tf.int64),
<add> }
<add>
<add> # For training, we want a lot of parallel reading and shuffling.
<add> # For eval, we want no shuffling and parallel reading doesn't matter.
<add> if is_training:
<add> d = tf.data.Dataset.from_tensor_slices(tf.constant(input_files))
<add> d = d.repeat()
<add> d = d.shuffle(buffer_size=len(input_files))
<add>
<add> # `cycle_length` is the number of parallel files that get read.
<add> cycle_length = min(num_cpu_threads, len(input_files))
<add>
<add> # `sloppy` mode means that the interleaving is not exact. This adds
<add> # even more randomness to the training pipeline.
<add> d = d.apply(
<add> tf.contrib.data.parallel_interleave(
<add> tf.data.TFRecordDataset,
<add> sloppy=is_training,
<add> cycle_length=cycle_length))
<add> d = d.shuffle(buffer_size=100)
<add> else:
<add> d = tf.data.TFRecordDataset(input_files)
<add> # Since we evaluate for a fixed number of steps we don't want to encounter
<add> # out-of-range exceptions.
<add> d = d.repeat()
<add>
<add> # We must `drop_remainder` on training because the TPU requires fixed
<add>    # size dimensions. For eval, we assume we are evaluating on the CPU or GPU
<add>    # and we *don't* want to drop the remainder, otherwise we won't cover
<add> # every sample.
<add> d = d.apply(
<add> tf.contrib.data.map_and_batch(
<add> lambda record: _decode_record(record, name_to_features),
<add> batch_size=batch_size,
<add> num_parallel_batches=num_cpu_threads,
<add> drop_remainder=True))
<add> return d
<add>
<add> return input_fn
<ide>
<ide>
<ide> def _decode_record(record, name_to_features):
<del> """Decodes a record to a TensorFlow example."""
<del> example = tf.parse_single_example(record, name_to_features)
<add> """Decodes a record to a TensorFlow example."""
<add> example = tf.parse_single_example(record, name_to_features)
<ide>
<del> # tf.Example only supports tf.int64, but the TPU only supports tf.int32.
<del> # So cast all int64 to int32.
<del> for name in list(example.keys()):
<del> t = example[name]
<del> if t.dtype == tf.int64:
<del> t = tf.to_int32(t)
<del> example[name] = t
<add> # tf.Example only supports tf.int64, but the TPU only supports tf.int32.
<add> # So cast all int64 to int32.
<add> for name in list(example.keys()):
<add> t = example[name]
<add> if t.dtype == tf.int64:
<add> t = tf.to_int32(t)
<add> example[name] = t
<ide>
<del> return example
<add> return example
<ide>
<ide>
<ide> def main(_):
<del> tf.logging.set_verbosity(tf.logging.INFO)
<del>
<del> if not FLAGS.do_train and not FLAGS.do_eval:
<del> raise ValueError("At least one of `do_train` or `do_eval` must be True.")
<del>
<del> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<del>
<del> tf.gfile.MakeDirs(FLAGS.output_dir)
<del>
<del> input_files = []
<del> for input_pattern in FLAGS.input_file.split(","):
<del> input_files.extend(tf.gfile.Glob(input_pattern))
<del>
<del> tf.logging.info("*** Input Files ***")
<del> for input_file in input_files:
<del> tf.logging.info(" %s" % input_file)
<del>
<del> tpu_cluster_resolver = None
<del> if FLAGS.use_tpu and FLAGS.tpu_name:
<del> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<del> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<del>
<del> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<del> run_config = tf.contrib.tpu.RunConfig(
<del> cluster=tpu_cluster_resolver,
<del> master=FLAGS.master,
<del> model_dir=FLAGS.output_dir,
<del> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<del> tpu_config=tf.contrib.tpu.TPUConfig(
<del> iterations_per_loop=FLAGS.iterations_per_loop,
<del> num_shards=FLAGS.num_tpu_cores,
<del> per_host_input_for_training=is_per_host))
<del>
<del> model_fn = model_fn_builder(
<del> bert_config=bert_config,
<del> init_checkpoint=FLAGS.init_checkpoint,
<del> learning_rate=FLAGS.learning_rate,
<del> num_train_steps=FLAGS.num_train_steps,
<del> num_warmup_steps=FLAGS.num_warmup_steps,
<del> use_tpu=FLAGS.use_tpu,
<del> use_one_hot_embeddings=FLAGS.use_tpu)
<del>
<del> # If TPU is not available, this will fall back to normal Estimator on CPU
<del> # or GPU.
<del> estimator = tf.contrib.tpu.TPUEstimator(
<del> use_tpu=FLAGS.use_tpu,
<del> model_fn=model_fn,
<del> config=run_config,
<del> train_batch_size=FLAGS.train_batch_size,
<del> eval_batch_size=FLAGS.eval_batch_size)
<del>
<del> if FLAGS.do_train:
<del> tf.logging.info("***** Running training *****")
<del> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<del> train_input_fn = input_fn_builder(
<del> input_files=input_files,
<del> max_seq_length=FLAGS.max_seq_length,
<del> max_predictions_per_seq=FLAGS.max_predictions_per_seq,
<del> is_training=True)
<del> estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps)
<del>
<del> if FLAGS.do_eval:
<del> tf.logging.info("***** Running evaluation *****")
<del> tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
<del>
<del> eval_input_fn = input_fn_builder(
<del> input_files=input_files,
<del> max_seq_length=FLAGS.max_seq_length,
<del> max_predictions_per_seq=FLAGS.max_predictions_per_seq,
<del> is_training=False)
<del>
<del> result = estimator.evaluate(
<del> input_fn=eval_input_fn, steps=FLAGS.max_eval_steps)
<del>
<del> output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
<del> with tf.gfile.GFile(output_eval_file, "w") as writer:
<del> tf.logging.info("***** Eval results *****")
<del> for key in sorted(result.keys()):
<del> tf.logging.info(" %s = %s", key, str(result[key]))
<del> writer.write("%s = %s\n" % (key, str(result[key])))
<add> tf.logging.set_verbosity(tf.logging.INFO)
<add>
<add> if not FLAGS.do_train and not FLAGS.do_eval:
<add> raise ValueError("At least one of `do_train` or `do_eval` must be True.")
<add>
<add> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<add>
<add> tf.gfile.MakeDirs(FLAGS.output_dir)
<add>
<add> input_files = []
<add> for input_pattern in FLAGS.input_file.split(","):
<add> input_files.extend(tf.gfile.Glob(input_pattern))
<add>
<add> tf.logging.info("*** Input Files ***")
<add> for input_file in input_files:
<add> tf.logging.info(" %s" % input_file)
<add>
<add> tpu_cluster_resolver = None
<add> if FLAGS.use_tpu and FLAGS.tpu_name:
<add> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<add> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<add>
<add> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<add> run_config = tf.contrib.tpu.RunConfig(
<add> cluster=tpu_cluster_resolver,
<add> master=FLAGS.master,
<add> model_dir=FLAGS.output_dir,
<add> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<add> tpu_config=tf.contrib.tpu.TPUConfig(
<add> iterations_per_loop=FLAGS.iterations_per_loop,
<add> num_shards=FLAGS.num_tpu_cores,
<add> per_host_input_for_training=is_per_host))
<add>
<add> model_fn = model_fn_builder(
<add> bert_config=bert_config,
<add> init_checkpoint=FLAGS.init_checkpoint,
<add> learning_rate=FLAGS.learning_rate,
<add> num_train_steps=FLAGS.num_train_steps,
<add> num_warmup_steps=FLAGS.num_warmup_steps,
<add> use_tpu=FLAGS.use_tpu,
<add> use_one_hot_embeddings=FLAGS.use_tpu)
<add>
<add> # If TPU is not available, this will fall back to normal Estimator on CPU
<add> # or GPU.
<add> estimator = tf.contrib.tpu.TPUEstimator(
<add> use_tpu=FLAGS.use_tpu,
<add> model_fn=model_fn,
<add> config=run_config,
<add> train_batch_size=FLAGS.train_batch_size,
<add> eval_batch_size=FLAGS.eval_batch_size)
<add>
<add> if FLAGS.do_train:
<add> tf.logging.info("***** Running training *****")
<add> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<add> train_input_fn = input_fn_builder(
<add> input_files=input_files,
<add> max_seq_length=FLAGS.max_seq_length,
<add> max_predictions_per_seq=FLAGS.max_predictions_per_seq,
<add> is_training=True)
<add> estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps)
<add>
<add> if FLAGS.do_eval:
<add> tf.logging.info("***** Running evaluation *****")
<add> tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
<add>
<add> eval_input_fn = input_fn_builder(
<add> input_files=input_files,
<add> max_seq_length=FLAGS.max_seq_length,
<add> max_predictions_per_seq=FLAGS.max_predictions_per_seq,
<add> is_training=False)
<add>
<add> result = estimator.evaluate(
<add> input_fn=eval_input_fn, steps=FLAGS.max_eval_steps)
<add>
<add> output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
<add> with tf.gfile.GFile(output_eval_file, "w") as writer:
<add> tf.logging.info("***** Eval results *****")
<add> for key in sorted(result.keys()):
<add> tf.logging.info(" %s = %s", key, str(result[key]))
<add> writer.write("%s = %s\n" % (key, str(result[key])))
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> flags.mark_flag_as_required("input_file")
<del> flags.mark_flag_as_required("bert_config_file")
<del> flags.mark_flag_as_required("output_dir")
<del> tf.app.run()
<add> flags.mark_flag_as_required("input_file")
<add> flags.mark_flag_as_required("bert_config_file")
<add> flags.mark_flag_as_required("output_dir")
<add> tf.app.run()
<ide><path>run_squad.py
<ide>
<ide>
<ide> class SquadExample(object):
<del> """A single training/test example for simple sequence classification."""
<del>
<del> def __init__(self,
<del> qas_id,
<del> question_text,
<del> doc_tokens,
<del> orig_answer_text=None,
<del> start_position=None,
<del> end_position=None):
<del> self.qas_id = qas_id
<del> self.question_text = question_text
<del> self.doc_tokens = doc_tokens
<del> self.orig_answer_text = orig_answer_text
<del> self.start_position = start_position
<del> self.end_position = end_position
<del>
<del> def __str__(self):
<del> return self.__repr__()
<del>
<del> def __repr__(self):
<del> s = ""
<del> s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
<del> s += ", question_text: %s" % (
<del> tokenization.printable_text(self.question_text))
<del> s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
<del> if self.start_position:
<del> s += ", start_position: %d" % (self.start_position)
<del> if self.start_position:
<del> s += ", end_position: %d" % (self.end_position)
<del> return s
<add> """A single training/test example for simple sequence classification."""
<add>
<add> def __init__(self,
<add> qas_id,
<add> question_text,
<add> doc_tokens,
<add> orig_answer_text=None,
<add> start_position=None,
<add> end_position=None):
<add> self.qas_id = qas_id
<add> self.question_text = question_text
<add> self.doc_tokens = doc_tokens
<add> self.orig_answer_text = orig_answer_text
<add> self.start_position = start_position
<add> self.end_position = end_position
<add>
<add> def __str__(self):
<add> return self.__repr__()
<add>
<add> def __repr__(self):
<add> s = ""
<add> s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
<add> s += ", question_text: %s" % (
<add> tokenization.printable_text(self.question_text))
<add> s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
<add> if self.start_position:
<add> s += ", start_position: %d" % (self.start_position)
<add>    if self.end_position:
<add> s += ", end_position: %d" % (self.end_position)
<add> return s
<ide>
<ide>
<ide> class InputFeatures(object):
<del> """A single set of features of data."""
<del>
<del> def __init__(self,
<del> unique_id,
<del> example_index,
<del> doc_span_index,
<del> tokens,
<del> token_to_orig_map,
<del> token_is_max_context,
<del> input_ids,
<del> input_mask,
<del> segment_ids,
<del> start_position=None,
<del> end_position=None):
<del> self.unique_id = unique_id
<del> self.example_index = example_index
<del> self.doc_span_index = doc_span_index
<del> self.tokens = tokens
<del> self.token_to_orig_map = token_to_orig_map
<del> self.token_is_max_context = token_is_max_context
<del> self.input_ids = input_ids
<del> self.input_mask = input_mask
<del> self.segment_ids = segment_ids
<del> self.start_position = start_position
<del> self.end_position = end_position
<add> """A single set of features of data."""
<add>
<add> def __init__(self,
<add> unique_id,
<add> example_index,
<add> doc_span_index,
<add> tokens,
<add> token_to_orig_map,
<add> token_is_max_context,
<add> input_ids,
<add> input_mask,
<add> segment_ids,
<add> start_position=None,
<add> end_position=None):
<add> self.unique_id = unique_id
<add> self.example_index = example_index
<add> self.doc_span_index = doc_span_index
<add> self.tokens = tokens
<add> self.token_to_orig_map = token_to_orig_map
<add> self.token_is_max_context = token_is_max_context
<add> self.input_ids = input_ids
<add> self.input_mask = input_mask
<add> self.segment_ids = segment_ids
<add> self.start_position = start_position
<add> self.end_position = end_position
<ide>
<ide>
<ide> def read_squad_examples(input_file, is_training):
<del> """Read a SQuAD json file into a list of SquadExample."""
<del> with tf.gfile.Open(input_file, "r") as reader:
<del> input_data = json.load(reader)["data"]
<del>
<del> def is_whitespace(c):
<del> if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
<del> return True
<del> return False
<del>
<del> examples = []
<del> for entry in input_data:
<del> for paragraph in entry["paragraphs"]:
<del> paragraph_text = paragraph["context"]
<del> doc_tokens = []
<del> char_to_word_offset = []
<del> prev_is_whitespace = True
<del> for c in paragraph_text:
<del> if is_whitespace(c):
<del> prev_is_whitespace = True
<del> else:
<del> if prev_is_whitespace:
<del> doc_tokens.append(c)
<del> else:
<del> doc_tokens[-1] += c
<del> prev_is_whitespace = False
<del> char_to_word_offset.append(len(doc_tokens) - 1)
<del>
<del> for qa in paragraph["qas"]:
<del> qas_id = qa["id"]
<del> question_text = qa["question"]
<del> start_position = None
<del> end_position = None
<del> orig_answer_text = None
<del> if is_training:
<del> if len(qa["answers"]) != 1:
<del> raise ValueError(
<del> "For training, each question should have exactly 1 answer.")
<del> answer = qa["answers"][0]
<del> orig_answer_text = answer["text"]
<del> answer_offset = answer["answer_start"]
<del> answer_length = len(orig_answer_text)
<del> start_position = char_to_word_offset[answer_offset]
<del> end_position = char_to_word_offset[answer_offset + answer_length - 1]
<del> # Only add answers where the text can be exactly recovered from the
<del> # document. If this CAN'T happen it's likely due to weird Unicode
<del> # stuff so we will just skip the example.
<del> #
<del> # Note that this means for training mode, every example is NOT
<del> # guaranteed to be preserved.
<del> actual_text = " ".join(doc_tokens[start_position:(end_position + 1)])
<del> cleaned_answer_text = " ".join(
<del> tokenization.whitespace_tokenize(orig_answer_text))
<del> if actual_text.find(cleaned_answer_text) == -1:
<del> tf.logging.warning("Could not find answer: '%s' vs. '%s'",
<del> actual_text, cleaned_answer_text)
<del> continue
<del>
<del> example = SquadExample(
<del> qas_id=qas_id,
<del> question_text=question_text,
<del> doc_tokens=doc_tokens,
<del> orig_answer_text=orig_answer_text,
<del> start_position=start_position,
<del> end_position=end_position)
<del> examples.append(example)
<del> return examples
<add> """Read a SQuAD json file into a list of SquadExample."""
<add> with tf.gfile.Open(input_file, "r") as reader:
<add> input_data = json.load(reader)["data"]
<add>
<add> def is_whitespace(c):
<add> if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
<add> return True
<add> return False
<add>
<add> examples = []
<add> for entry in input_data:
<add> for paragraph in entry["paragraphs"]:
<add> paragraph_text = paragraph["context"]
<add> doc_tokens = []
<add> char_to_word_offset = []
<add> prev_is_whitespace = True
<add> for c in paragraph_text:
<add> if is_whitespace(c):
<add> prev_is_whitespace = True
<add> else:
<add> if prev_is_whitespace:
<add> doc_tokens.append(c)
<add> else:
<add> doc_tokens[-1] += c
<add> prev_is_whitespace = False
<add> char_to_word_offset.append(len(doc_tokens) - 1)
<add>
<add> for qa in paragraph["qas"]:
<add> qas_id = qa["id"]
<add> question_text = qa["question"]
<add> start_position = None
<add> end_position = None
<add> orig_answer_text = None
<add> if is_training:
<add> if len(qa["answers"]) != 1:
<add> raise ValueError(
<add> "For training, each question should have exactly 1 answer.")
<add> answer = qa["answers"][0]
<add> orig_answer_text = answer["text"]
<add> answer_offset = answer["answer_start"]
<add> answer_length = len(orig_answer_text)
<add> start_position = char_to_word_offset[answer_offset]
<add> end_position = char_to_word_offset[answer_offset + answer_length - 1]
<add> # Only add answers where the text can be exactly recovered from the
<add> # document. If this CAN'T happen it's likely due to weird Unicode
<add> # stuff so we will just skip the example.
<add> #
<add> # Note that this means for training mode, every example is NOT
<add> # guaranteed to be preserved.
<add> actual_text = " ".join(doc_tokens[start_position:(end_position + 1)])
<add> cleaned_answer_text = " ".join(
<add> tokenization.whitespace_tokenize(orig_answer_text))
<add> if actual_text.find(cleaned_answer_text) == -1:
<add> tf.logging.warning("Could not find answer: '%s' vs. '%s'",
<add> actual_text, cleaned_answer_text)
<add> continue
<add>
<add> example = SquadExample(
<add> qas_id=qas_id,
<add> question_text=question_text,
<add> doc_tokens=doc_tokens,
<add> orig_answer_text=orig_answer_text,
<add> start_position=start_position,
<add> end_position=end_position)
<add> examples.append(example)
<add> return examples
<ide>
<ide>
<ide> def convert_examples_to_features(examples, tokenizer, max_seq_length,
<ide> doc_stride, max_query_length, is_training):
<del> """Loads a data file into a list of `InputBatch`s."""
<del>
<del> unique_id = 1000000000
<del>
<del> features = []
<del> for (example_index, example) in enumerate(examples):
<del> query_tokens = tokenizer.tokenize(example.question_text)
<del>
<del> if len(query_tokens) > max_query_length:
<del> query_tokens = query_tokens[0:max_query_length]
<del>
<del> tok_to_orig_index = []
<del> orig_to_tok_index = []
<del> all_doc_tokens = []
<del> for (i, token) in enumerate(example.doc_tokens):
<del> orig_to_tok_index.append(len(all_doc_tokens))
<del> sub_tokens = tokenizer.tokenize(token)
<del> for sub_token in sub_tokens:
<del> tok_to_orig_index.append(i)
<del> all_doc_tokens.append(sub_token)
<del>
<del> tok_start_position = None
<del> tok_end_position = None
<del> if is_training:
<del> tok_start_position = orig_to_tok_index[example.start_position]
<del> if example.end_position < len(example.doc_tokens) - 1:
<del> tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
<del> else:
<del> tok_end_position = len(all_doc_tokens) - 1
<del> (tok_start_position, tok_end_position) = _improve_answer_span(
<del> all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
<del> example.orig_answer_text)
<del>
<del> # The -3 accounts for [CLS], [SEP] and [SEP]
<del> max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
<del>
<del> # We can have documents that are longer than the maximum sequence length.
<del> # To deal with this we do a sliding window approach, where we take chunks
<del> # of the up to our max length with a stride of `doc_stride`.
<del> _DocSpan = collections.namedtuple( # pylint: disable=invalid-name
<del> "DocSpan", ["start", "length"])
<del> doc_spans = []
<del> start_offset = 0
<del> while start_offset < len(all_doc_tokens):
<del> length = len(all_doc_tokens) - start_offset
<del> if length > max_tokens_for_doc:
<del> length = max_tokens_for_doc
<del> doc_spans.append(_DocSpan(start=start_offset, length=length))
<del> if start_offset + length == len(all_doc_tokens):
<del> break
<del> start_offset += min(length, doc_stride)
<del>
<del> for (doc_span_index, doc_span) in enumerate(doc_spans):
<del> tokens = []
<del> token_to_orig_map = {}
<del> token_is_max_context = {}
<del> segment_ids = []
<del> tokens.append("[CLS]")
<del> segment_ids.append(0)
<del> for token in query_tokens:
<del> tokens.append(token)
<del> segment_ids.append(0)
<del> tokens.append("[SEP]")
<del> segment_ids.append(0)
<del>
<del> for i in range(doc_span.length):
<del> split_token_index = doc_span.start + i
<del> token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
<del>
<del> is_max_context = _check_is_max_context(doc_spans, doc_span_index,
<del> split_token_index)
<del> token_is_max_context[len(tokens)] = is_max_context
<del> tokens.append(all_doc_tokens[split_token_index])
<del> segment_ids.append(1)
<del> tokens.append("[SEP]")
<del> segment_ids.append(1)
<del>
<del> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<del>
<del> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<del> # tokens are attended to.
<del> input_mask = [1] * len(input_ids)
<del>
<del> # Zero-pad up to the sequence length.
<del> while len(input_ids) < max_seq_length:
<del> input_ids.append(0)
<del> input_mask.append(0)
<del> segment_ids.append(0)
<del>
<del> assert len(input_ids) == max_seq_length
<del> assert len(input_mask) == max_seq_length
<del> assert len(segment_ids) == max_seq_length
<del>
<del> start_position = None
<del> end_position = None
<del> if is_training:
<del> # For training, if our document chunk does not contain an annotation
<del> # we throw it out, since there is nothing to predict.
<del> doc_start = doc_span.start
<del> doc_end = doc_span.start + doc_span.length - 1
<del> if (example.start_position < doc_start or
<del> example.end_position < doc_start or
<del> example.start_position > doc_end or example.end_position > doc_end):
<del> continue
<del>
<del> doc_offset = len(query_tokens) + 2
<del> start_position = tok_start_position - doc_start + doc_offset
<del> end_position = tok_end_position - doc_start + doc_offset
<del>
<del> if example_index < 20:
<del> tf.logging.info("*** Example ***")
<del> tf.logging.info("unique_id: %s" % (unique_id))
<del> tf.logging.info("example_index: %s" % (example_index))
<del> tf.logging.info("doc_span_index: %s" % (doc_span_index))
<del> tf.logging.info("tokens: %s" % " ".join(
<del> [tokenization.printable_text(x) for x in tokens]))
<del> tf.logging.info("token_to_orig_map: %s" % " ".join(
<del> ["%d:%d" % (x, y) for (x, y) in six.iteritems(token_to_orig_map)]))
<del> tf.logging.info("token_is_max_context: %s" % " ".join([
<del> "%d:%s" % (x, y) for (x, y) in six.iteritems(token_is_max_context)
<del> ]))
<del> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<del> tf.logging.info(
<del> "input_mask: %s" % " ".join([str(x) for x in input_mask]))
<del> tf.logging.info(
<del> "segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
<add> """Loads a data file into a list of `InputBatch`s."""
<add>
<add> unique_id = 1000000000
<add>
<add> features = []
<add> for (example_index, example) in enumerate(examples):
<add> query_tokens = tokenizer.tokenize(example.question_text)
<add>
<add> if len(query_tokens) > max_query_length:
<add> query_tokens = query_tokens[0:max_query_length]
<add>
<add> tok_to_orig_index = []
<add> orig_to_tok_index = []
<add> all_doc_tokens = []
<add> for (i, token) in enumerate(example.doc_tokens):
<add> orig_to_tok_index.append(len(all_doc_tokens))
<add> sub_tokens = tokenizer.tokenize(token)
<add> for sub_token in sub_tokens:
<add> tok_to_orig_index.append(i)
<add> all_doc_tokens.append(sub_token)
<add>
<add> tok_start_position = None
<add> tok_end_position = None
<ide> if is_training:
<del> answer_text = " ".join(tokens[start_position:(end_position + 1)])
<del> tf.logging.info("start_position: %d" % (start_position))
<del> tf.logging.info("end_position: %d" % (end_position))
<del> tf.logging.info(
<del> "answer: %s" % (tokenization.printable_text(answer_text)))
<del>
<del> features.append(
<del> InputFeatures(
<del> unique_id=unique_id,
<del> example_index=example_index,
<del> doc_span_index=doc_span_index,
<del> tokens=tokens,
<del> token_to_orig_map=token_to_orig_map,
<del> token_is_max_context=token_is_max_context,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> segment_ids=segment_ids,
<del> start_position=start_position,
<del> end_position=end_position))
<del> unique_id += 1
<del>
<del> return features
<add> tok_start_position = orig_to_tok_index[example.start_position]
<add> if example.end_position < len(example.doc_tokens) - 1:
<add> tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
<add> else:
<add> tok_end_position = len(all_doc_tokens) - 1
<add> (tok_start_position, tok_end_position) = _improve_answer_span(
<add> all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
<add> example.orig_answer_text)
<add>
<add> # The -3 accounts for [CLS], [SEP] and [SEP]
<add> max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
<add>
<add> # We can have documents that are longer than the maximum sequence length.
<add> # To deal with this we do a sliding window approach, where we take chunks
<add> # of the up to our max length with a stride of `doc_stride`.
<add> _DocSpan = collections.namedtuple( # pylint: disable=invalid-name
<add> "DocSpan", ["start", "length"])
<add> doc_spans = []
<add> start_offset = 0
<add> while start_offset < len(all_doc_tokens):
<add> length = len(all_doc_tokens) - start_offset
<add> if length > max_tokens_for_doc:
<add> length = max_tokens_for_doc
<add> doc_spans.append(_DocSpan(start=start_offset, length=length))
<add> if start_offset + length == len(all_doc_tokens):
<add> break
<add> start_offset += min(length, doc_stride)
<add>
<add> for (doc_span_index, doc_span) in enumerate(doc_spans):
<add> tokens = []
<add> token_to_orig_map = {}
<add> token_is_max_context = {}
<add> segment_ids = []
<add> tokens.append("[CLS]")
<add> segment_ids.append(0)
<add> for token in query_tokens:
<add> tokens.append(token)
<add> segment_ids.append(0)
<add> tokens.append("[SEP]")
<add> segment_ids.append(0)
<add>
<add> for i in range(doc_span.length):
<add> split_token_index = doc_span.start + i
<add> token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
<add>
<add> is_max_context = _check_is_max_context(doc_spans, doc_span_index,
<add> split_token_index)
<add> token_is_max_context[len(tokens)] = is_max_context
<add> tokens.append(all_doc_tokens[split_token_index])
<add> segment_ids.append(1)
<add> tokens.append("[SEP]")
<add> segment_ids.append(1)
<add>
<add> input_ids = tokenizer.convert_tokens_to_ids(tokens)
<add>
<add> # The mask has 1 for real tokens and 0 for padding tokens. Only real
<add> # tokens are attended to.
<add> input_mask = [1] * len(input_ids)
<add>
<add> # Zero-pad up to the sequence length.
<add> while len(input_ids) < max_seq_length:
<add> input_ids.append(0)
<add> input_mask.append(0)
<add> segment_ids.append(0)
<add>
<add> assert len(input_ids) == max_seq_length
<add> assert len(input_mask) == max_seq_length
<add> assert len(segment_ids) == max_seq_length
<add>
<add> start_position = None
<add> end_position = None
<add> if is_training:
<add> # For training, if our document chunk does not contain an annotation
<add> # we throw it out, since there is nothing to predict.
<add> doc_start = doc_span.start
<add> doc_end = doc_span.start + doc_span.length - 1
<add> if (example.start_position < doc_start or
<add> example.end_position < doc_start or
<add> example.start_position > doc_end or example.end_position > doc_end):
<add> continue
<add>
<add> doc_offset = len(query_tokens) + 2
<add> start_position = tok_start_position - doc_start + doc_offset
<add> end_position = tok_end_position - doc_start + doc_offset
<add>
<add> if example_index < 20:
<add> tf.logging.info("*** Example ***")
<add> tf.logging.info("unique_id: %s" % (unique_id))
<add> tf.logging.info("example_index: %s" % (example_index))
<add> tf.logging.info("doc_span_index: %s" % (doc_span_index))
<add> tf.logging.info("tokens: %s" % " ".join(
<add> [tokenization.printable_text(x) for x in tokens]))
<add> tf.logging.info("token_to_orig_map: %s" % " ".join(
<add> ["%d:%d" % (x, y) for (x, y) in six.iteritems(token_to_orig_map)]))
<add> tf.logging.info("token_is_max_context: %s" % " ".join([
<add> "%d:%s" % (x, y) for (x, y) in six.iteritems(token_is_max_context)
<add> ]))
<add> tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
<add> tf.logging.info(
<add> "input_mask: %s" % " ".join([str(x) for x in input_mask]))
<add> tf.logging.info(
<add> "segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
<add> if is_training:
<add> answer_text = " ".join(tokens[start_position:(end_position + 1)])
<add> tf.logging.info("start_position: %d" % (start_position))
<add> tf.logging.info("end_position: %d" % (end_position))
<add> tf.logging.info(
<add> "answer: %s" % (tokenization.printable_text(answer_text)))
<add>
<add> features.append(
<add> InputFeatures(
<add> unique_id=unique_id,
<add> example_index=example_index,
<add> doc_span_index=doc_span_index,
<add> tokens=tokens,
<add> token_to_orig_map=token_to_orig_map,
<add> token_is_max_context=token_is_max_context,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> segment_ids=segment_ids,
<add> start_position=start_position,
<add> end_position=end_position))
<add> unique_id += 1
<add>
<add> return features
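The sliding-window chunking inside `convert_examples_to_features` can be exercised on its own. A condensed pure-Python sketch (the helper name `make_doc_spans` and the concrete token count, window size, and stride are made up for illustration; the loop logic mirrors the `_DocSpan` construction above):

```python
import collections

DocSpan = collections.namedtuple("DocSpan", ["start", "length"])

def make_doc_spans(num_doc_tokens, max_tokens_for_doc, doc_stride):
    """Chunk a document into overlapping fixed-size spans."""
    doc_spans = []
    start_offset = 0
    while start_offset < num_doc_tokens:
        length = min(num_doc_tokens - start_offset, max_tokens_for_doc)
        doc_spans.append(DocSpan(start=start_offset, length=length))
        if start_offset + length == num_doc_tokens:
            break
        start_offset += min(length, doc_stride)
    return doc_spans

# 10 tokens, window of 4, stride of 3 -> spans starting at 0, 3, 6.
spans = make_doc_spans(10, 4, 3)
```

Each span overlaps its neighbor by `max_tokens_for_doc - doc_stride` tokens, which is what makes the "max context" bookkeeping below necessary.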
<ide>
<ide>
<ide> def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer,
<ide> orig_answer_text):
<del> """Returns tokenized answer spans that better match the annotated answer."""
<del>
<del> # The SQuAD annotations are character based. We first project them to
<del> # whitespace-tokenized words. But then after WordPiece tokenization, we can
<del> # often find a "better match". For example:
<del> #
<del> # Question: What year was John Smith born?
<del> # Context: The leader was John Smith (1895-1943).
<del> # Answer: 1895
<del> #
<del> # The original whitespace-tokenized answer will be "(1895-1943).". However
<del> # after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
<del> # the exact answer, 1895.
<del> #
<del> # However, this is not always possible. Consider the following:
<del> #
<del> # Question: What country is the top exporter of electornics?
<del> # Context: The Japanese electronics industry is the lagest in the world.
<del> # Answer: Japan
<del> #
<del> # In this case, the annotator chose "Japan" as a character sub-span of
<del> # the word "Japanese". Since our WordPiece tokenizer does not split
<del> # "Japanese", we just use "Japanese" as the annotation. This is fairly rare
<del> # in SQuAD, but does happen.
<del> tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
<del>
<del> for new_start in range(input_start, input_end + 1):
<del> for new_end in range(input_end, new_start - 1, -1):
<del> text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
<del> if text_span == tok_answer_text:
<del> return (new_start, new_end)
<del>
<del> return (input_start, input_end)
<add> """Returns tokenized answer spans that better match the annotated answer."""
<add>
<add> # The SQuAD annotations are character based. We first project them to
<add> # whitespace-tokenized words. But then after WordPiece tokenization, we can
<add> # often find a "better match". For example:
<add> #
<add> # Question: What year was John Smith born?
<add> # Context: The leader was John Smith (1895-1943).
<add> # Answer: 1895
<add> #
<add> # The original whitespace-tokenized answer will be "(1895-1943).". However
<add> # after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
<add> # the exact answer, 1895.
<add> #
<add> # However, this is not always possible. Consider the following:
<add> #
<add> # Question: What country is the top exporter of electronics?
<add> # Context: The Japanese electronics industry is the largest in the world.
<add> # Answer: Japan
<add> #
<add> # In this case, the annotator chose "Japan" as a character sub-span of
<add> # the word "Japanese". Since our WordPiece tokenizer does not split
<add> # "Japanese", we just use "Japanese" as the annotation. This is fairly rare
<add> # in SQuAD, but does happen.
<add> tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
<add>
<add> for new_start in range(input_start, input_end + 1):
<add> for new_end in range(input_end, new_start - 1, -1):
<add> text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
<add> if text_span == tok_answer_text:
<add> return (new_start, new_end)
<add>
<add> return (input_start, input_end)
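The span-shrinking search above is easy to check on the "(1895-1943)." example from the comment. A sketch with the tokenizer factored out (we pass the already-tokenized answer text directly; the function body is otherwise the same search):

```python
def improve_answer_span(doc_tokens, input_start, input_end, tok_answer_text):
    """Shrink [input_start, input_end] to the smallest sub-span whose
    joined tokens exactly equal the tokenized answer text."""
    for new_start in range(input_start, input_end + 1):
        for new_end in range(input_end, new_start - 1, -1):
            text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
            if text_span == tok_answer_text:
                return (new_start, new_end)
    # No exact match: keep the original whitespace-token span.
    return (input_start, input_end)

# WordPiece-style splitting of "(1895-1943)." lets us match the bare year.
doc = ["(", "1895", "-", "1943", ")", "."]
span = improve_answer_span(doc, 0, 5, "1895")
```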
<ide>
<ide>
<ide> def _check_is_max_context(doc_spans, cur_span_index, position):
<del> """Check if this is the 'max context' doc span for the token."""
<del>
<del> # Because of the sliding window approach taken to scoring documents, a single
<del> # token can appear in multiple documents. E.g.
<del> # Doc: the man went to the store and bought a gallon of milk
<del> # Span A: the man went to the
<del> # Span B: to the store and bought
<del> # Span C: and bought a gallon of
<del> # ...
<del> #
<del> # Now the word 'bought' will have two scores from spans B and C. We only
<del> # want to consider the score with "maximum context", which we define as
<del> # the *minimum* of its left and right context (the *sum* of left and
<del> # right context will always be the same, of course).
<del> #
<del> # In the example the maximum context for 'bought' would be span C since
<del> # it has 1 left context and 3 right context, while span B has 4 left context
<del> # and 0 right context.
<del> best_score = None
<del> best_span_index = None
<del> for (span_index, doc_span) in enumerate(doc_spans):
<del> end = doc_span.start + doc_span.length - 1
<del> if position < doc_span.start:
<del> continue
<del> if position > end:
<del> continue
<del> num_left_context = position - doc_span.start
<del> num_right_context = end - position
<del> score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
<del> if best_score is None or score > best_score:
<del> best_score = score
<del> best_span_index = span_index
<del>
<del> return cur_span_index == best_span_index
<add> """Check if this is the 'max context' doc span for the token."""
<add>
<add> # Because of the sliding window approach taken to scoring documents, a single
<add> # token can appear in multiple documents. E.g.
<add> # Doc: the man went to the store and bought a gallon of milk
<add> # Span A: the man went to the
<add> # Span B: to the store and bought
<add> # Span C: and bought a gallon of
<add> # ...
<add> #
<add> # Now the word 'bought' will have two scores from spans B and C. We only
<add> # want to consider the score with "maximum context", which we define as
<add> # the *minimum* of its left and right context (the *sum* of left and
<add> # right context will always be the same, of course).
<add> #
<add> # In the example the maximum context for 'bought' would be span C since
<add> # it has 1 left context and 3 right context, while span B has 4 left context
<add> # and 0 right context.
<add> best_score = None
<add> best_span_index = None
<add> for (span_index, doc_span) in enumerate(doc_spans):
<add> end = doc_span.start + doc_span.length - 1
<add> if position < doc_span.start:
<add> continue
<add> if position > end:
<add> continue
<add> num_left_context = position - doc_span.start
<add> num_right_context = end - position
<add> score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
<add> if best_score is None or score > best_score:
<add> best_score = score
<add> best_span_index = span_index
<add>
<add> return cur_span_index == best_span_index
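The scoring rule above can be replayed on the "bought a gallon of milk" example from the comment. A standalone sketch (`max_context_span_index` is an illustrative name; spans are `(start, length)` tuples, and the spans below approximate spans A/B/C from the comment):

```python
def max_context_span_index(doc_spans, position):
    """Index of the span giving `position` the largest min(left, right)
    context, with the same small length tie-breaker as above."""
    best_score, best_index = None, None
    for span_index, (start, length) in enumerate(doc_spans):
        end = start + length - 1
        if position < start or position > end:
            continue
        score = min(position - start, end - position) + 0.01 * length
        if best_score is None or score > best_score:
            best_score, best_index = score, span_index
    return best_index

# Span A = tokens 0..4, Span B = tokens 3..7, Span C = tokens 6..10.
# 'bought' is token 7: 4 left / 0 right in B, but 1 left / 3 right in C.
spans = [(0, 5), (3, 5), (6, 5)]
best = max_context_span_index(spans, 7)
```

As the comment predicts, span C (index 2) wins for the token 'bought'.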
<ide>
<ide>
<ide> def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
<ide> use_one_hot_embeddings):
<del> """Creates a classification model."""
<del> model = modeling.BertModel(
<del> config=bert_config,
<del> is_training=is_training,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> token_type_ids=segment_ids,
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<add> """Creates a classification model."""
<add> model = modeling.BertModel(
<add> config=bert_config,
<add> is_training=is_training,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> token_type_ids=segment_ids,
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<ide>
<del> final_hidden = model.get_sequence_output()
<add> final_hidden = model.get_sequence_output()
<ide>
<del> final_hidden_shape = modeling.get_shape_list(final_hidden, expected_rank=3)
<del> batch_size = final_hidden_shape[0]
<del> seq_length = final_hidden_shape[1]
<del> hidden_size = final_hidden_shape[2]
<add> final_hidden_shape = modeling.get_shape_list(final_hidden, expected_rank=3)
<add> batch_size = final_hidden_shape[0]
<add> seq_length = final_hidden_shape[1]
<add> hidden_size = final_hidden_shape[2]
<ide>
<del> output_weights = tf.get_variable(
<del> "cls/squad/output_weights", [2, hidden_size],
<del> initializer=tf.truncated_normal_initializer(stddev=0.02))
<add> output_weights = tf.get_variable(
<add> "cls/squad/output_weights", [2, hidden_size],
<add> initializer=tf.truncated_normal_initializer(stddev=0.02))
<ide>
<del> output_bias = tf.get_variable(
<del> "cls/squad/output_bias", [2], initializer=tf.zeros_initializer())
<add> output_bias = tf.get_variable(
<add> "cls/squad/output_bias", [2], initializer=tf.zeros_initializer())
<ide>
<del> final_hidden_matrix = tf.reshape(final_hidden,
<del> [batch_size * seq_length, hidden_size])
<del> logits = tf.matmul(final_hidden_matrix, output_weights, transpose_b=True)
<del> logits = tf.nn.bias_add(logits, output_bias)
<add> final_hidden_matrix = tf.reshape(final_hidden,
<add> [batch_size * seq_length, hidden_size])
<add> logits = tf.matmul(final_hidden_matrix, output_weights, transpose_b=True)
<add> logits = tf.nn.bias_add(logits, output_bias)
<ide>
<del> logits = tf.reshape(logits, [batch_size, seq_length, 2])
<del> logits = tf.transpose(logits, [2, 0, 1])
<add> logits = tf.reshape(logits, [batch_size, seq_length, 2])
<add> logits = tf.transpose(logits, [2, 0, 1])
<ide>
<del> unstacked_logits = tf.unstack(logits, axis=0)
<add> unstacked_logits = tf.unstack(logits, axis=0)
<ide>
<del> (start_logits, end_logits) = (unstacked_logits[0], unstacked_logits[1])
<add> (start_logits, end_logits) = (unstacked_logits[0], unstacked_logits[1])
<ide>
<del> return (start_logits, end_logits)
<add> return (start_logits, end_logits)
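The reshape/transpose sequence at the end of `create_model` is easier to follow with concrete shapes. A NumPy sketch (the sizes are arbitrary, and NumPy stands in for the TF ops; zero tensors are enough to trace shapes):

```python
import numpy as np

batch_size, seq_length, hidden_size = 2, 8, 16

final_hidden = np.zeros((batch_size, seq_length, hidden_size))  # BERT sequence output
output_weights = np.zeros((2, hidden_size))                     # start/end projection
output_bias = np.zeros(2)

# Flatten to [batch * seq, hidden], project every token to 2 logits.
final_hidden_matrix = final_hidden.reshape(batch_size * seq_length, hidden_size)
logits = final_hidden_matrix @ output_weights.T + output_bias

# Back to [batch, seq, 2], move the 2-axis first, then split it.
logits = logits.reshape(batch_size, seq_length, 2).transpose(2, 0, 1)
start_logits, end_logits = logits[0], logits[1]
```

So each of `start_logits` and `end_logits` is `[batch_size, seq_length]`: one score per token for being the answer start (resp. end).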
<ide>
<ide>
<ide> def model_fn_builder(bert_config, init_checkpoint, learning_rate,
<ide> num_train_steps, num_warmup_steps, use_tpu,
<ide> use_one_hot_embeddings):
<del> """Returns `model_fn` closure for TPUEstimator."""
<add> """Returns `model_fn` closure for TPUEstimator."""
<add>
<add> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<add> """The `model_fn` for TPUEstimator."""
<add>
<add> tf.logging.info("*** Features ***")
<add> for name in sorted(features.keys()):
<add> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<add>
<add> unique_ids = features["unique_ids"]
<add> input_ids = features["input_ids"]
<add> input_mask = features["input_mask"]
<add> segment_ids = features["segment_ids"]
<add>
<add> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<add>
<add> (start_logits, end_logits) = create_model(
<add> bert_config=bert_config,
<add> is_training=is_training,
<add> input_ids=input_ids,
<add> input_mask=input_mask,
<add> segment_ids=segment_ids,
<add> use_one_hot_embeddings=use_one_hot_embeddings)
<add>
<add> tvars = tf.trainable_variables()
<add>
<add> initialized_variable_names = {}
<add> scaffold_fn = None
<add> if init_checkpoint:
<add> (assignment_map,
<add> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<add> tvars, init_checkpoint)
<add> if use_tpu:
<add>
<add> def tpu_scaffold():
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add> return tf.train.Scaffold()
<add>
<add> scaffold_fn = tpu_scaffold
<add> else:
<add> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<add>
<add> tf.logging.info("**** Trainable Variables ****")
<add> for var in tvars:
<add> init_string = ""
<add> if var.name in initialized_variable_names:
<add> init_string = ", *INIT_FROM_CKPT*"
<add> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<add> init_string)
<add>
<add> output_spec = None
<add> if mode == tf.estimator.ModeKeys.TRAIN:
<add> seq_length = modeling.get_shape_list(input_ids)[1]
<add>
<add> def compute_loss(logits, positions):
<add> one_hot_positions = tf.one_hot(
<add> positions, depth=seq_length, dtype=tf.float32)
<add> log_probs = tf.nn.log_softmax(logits, axis=-1)
<add> loss = -tf.reduce_mean(
<add> tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
<add> return loss
<add>
<add> start_positions = features["start_positions"]
<add> end_positions = features["end_positions"]
<add>
<add> start_loss = compute_loss(start_logits, start_positions)
<add> end_loss = compute_loss(end_logits, end_positions)
<add>
<add> total_loss = (start_loss + end_loss) / 2.0
<add>
<add> train_op = optimization.create_optimizer(
<add> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<add>
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode,
<add> loss=total_loss,
<add> train_op=train_op,
<add> scaffold_fn=scaffold_fn)
<add> elif mode == tf.estimator.ModeKeys.PREDICT:
<add> predictions = {
<add> "unique_ids": unique_ids,
<add> "start_logits": start_logits,
<add> "end_logits": end_logits,
<add> }
<add> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<add> mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
<add> else:
<add> raise ValueError(
<add> "Only TRAIN and PREDICT modes are supported: %s" % (mode))
<ide>
<del> def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
<del> """The `model_fn` for TPUEstimator."""
<add> return output_spec
<ide>
<del> tf.logging.info("*** Features ***")
<del> for name in sorted(features.keys()):
<del> tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
<add> return model_fn
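The `compute_loss` closure inside `model_fn` is one-hot cross entropy over sequence positions. A single-example sketch of the same arithmetic in plain Python (the standalone function name and the uniform-logits example are illustrative):

```python
import math

def compute_loss(logits, position):
    """Negative log-softmax probability of the gold position --
    the one-hot / log-softmax arithmetic above, for one example."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[position] - log_z)

# With uniform logits over 4 positions, the loss is log(4) nats.
loss = compute_loss([0.0, 0.0, 0.0, 0.0], 2)
```

The training loss above is then just the mean of this quantity for the start position plus the same for the end position, divided by two.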
<ide>
<del> unique_ids = features["unique_ids"]
<del> input_ids = features["input_ids"]
<del> input_mask = features["input_mask"]
<del> segment_ids = features["segment_ids"]
<ide>
<del> is_training = (mode == tf.estimator.ModeKeys.TRAIN)
<add>def input_fn_builder(features, seq_length, is_training, drop_remainder):
<add> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<add>
<add> all_unique_ids = []
<add> all_input_ids = []
<add> all_input_mask = []
<add> all_segment_ids = []
<add> all_start_positions = []
<add> all_end_positions = []
<add>
<add> for feature in features:
<add> all_unique_ids.append(feature.unique_id)
<add> all_input_ids.append(feature.input_ids)
<add> all_input_mask.append(feature.input_mask)
<add> all_segment_ids.append(feature.segment_ids)
<add> if is_training:
<add> all_start_positions.append(feature.start_position)
<add> all_end_positions.append(feature.end_position)
<add>
<add> def input_fn(params):
<add> """The actual input function."""
<add> batch_size = params["batch_size"]
<add>
<add> num_examples = len(features)
<add>
<add> # This is for demo purposes and does NOT scale to large data sets. We do
<add> # not use Dataset.from_generator() because that uses tf.py_func which is
<add> # not TPU compatible. The right way to load data is with TFRecordReader.
<add> feature_map = {
<add> "unique_ids":
<add> tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
<add> "input_ids":
<add> tf.constant(
<add> all_input_ids, shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "input_mask":
<add> tf.constant(
<add> all_input_mask,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> "segment_ids":
<add> tf.constant(
<add> all_segment_ids,
<add> shape=[num_examples, seq_length],
<add> dtype=tf.int32),
<add> }
<add> if is_training:
<add> feature_map["start_positions"] = tf.constant(
<add> all_start_positions, shape=[num_examples], dtype=tf.int32)
<add> feature_map["end_positions"] = tf.constant(
<add> all_end_positions, shape=[num_examples], dtype=tf.int32)
<ide>
<del> (start_logits, end_logits) = create_model(
<del> bert_config=bert_config,
<del> is_training=is_training,
<del> input_ids=input_ids,
<del> input_mask=input_mask,
<del> segment_ids=segment_ids,
<del> use_one_hot_embeddings=use_one_hot_embeddings)
<add> d = tf.data.Dataset.from_tensor_slices(feature_map)
<ide>
<del> tvars = tf.trainable_variables()
<del>
<del> initialized_variable_names = {}
<del> scaffold_fn = None
<del> if init_checkpoint:
<del> (assignment_map,
<del> initialized_variable_names) = modeling.get_assigment_map_from_checkpoint(
<del> tvars, init_checkpoint)
<del> if use_tpu:
<del>
<del> def tpu_scaffold():
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del> return tf.train.Scaffold()
<del>
<del> scaffold_fn = tpu_scaffold
<del> else:
<del> tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
<del>
<del> tf.logging.info("**** Trainable Variables ****")
<del> for var in tvars:
<del> init_string = ""
<del> if var.name in initialized_variable_names:
<del> init_string = ", *INIT_FROM_CKPT*"
<del> tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
<del> init_string)
<del>
<del> output_spec = None
<del> if mode == tf.estimator.ModeKeys.TRAIN:
<del> seq_length = modeling.get_shape_list(input_ids)[1]
<del>
<del> def compute_loss(logits, positions):
<del> one_hot_positions = tf.one_hot(
<del> positions, depth=seq_length, dtype=tf.float32)
<del> log_probs = tf.nn.log_softmax(logits, axis=-1)
<del> loss = -tf.reduce_mean(
<del> tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
<del> return loss
<del>
<del> start_positions = features["start_positions"]
<del> end_positions = features["end_positions"]
<del>
<del> start_loss = compute_loss(start_logits, start_positions)
<del> end_loss = compute_loss(end_logits, end_positions)
<del>
<del> total_loss = (start_loss + end_loss) / 2.0
<del>
<del> train_op = optimization.create_optimizer(
<del> total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
<del>
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode,
<del> loss=total_loss,
<del> train_op=train_op,
<del> scaffold_fn=scaffold_fn)
<del> elif mode == tf.estimator.ModeKeys.PREDICT:
<del> predictions = {
<del> "unique_ids": unique_ids,
<del> "start_logits": start_logits,
<del> "end_logits": end_logits,
<del> }
<del> output_spec = tf.contrib.tpu.TPUEstimatorSpec(
<del> mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
<del> else:
<del> raise ValueError(
<del> "Only TRAIN and PREDICT modes are supported: %s" % (mode))
<del>
<del> return output_spec
<del>
<del> return model_fn
<add> if is_training:
<add> d = d.repeat()
<add> d = d.shuffle(buffer_size=100)
<ide>
<add> d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
<add> return d
<ide>
<del>def input_fn_builder(features, seq_length, is_training, drop_remainder):
<del> """Creates an `input_fn` closure to be passed to TPUEstimator."""
<del>
<del> all_unique_ids = []
<del> all_input_ids = []
<del> all_input_mask = []
<del> all_segment_ids = []
<del> all_start_positions = []
<del> all_end_positions = []
<del>
<del> for feature in features:
<del> all_unique_ids.append(feature.unique_id)
<del> all_input_ids.append(feature.input_ids)
<del> all_input_mask.append(feature.input_mask)
<del> all_segment_ids.append(feature.segment_ids)
<del> if is_training:
<del> all_start_positions.append(feature.start_position)
<del> all_end_positions.append(feature.end_position)
<del>
<del> def input_fn(params):
<del> """The actual input function."""
<del> batch_size = params["batch_size"]
<del>
<del> num_examples = len(features)
<del>
<del> # This is for demo purposes and does NOT scale to large data sets. We do
<del> # not use Dataset.from_generator() because that uses tf.py_func which is
<del> # not TPU compatible. The right way to load data is with TFRecordReader.
<del> feature_map = {
<del> "unique_ids":
<del> tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
<del> "input_ids":
<del> tf.constant(
<del> all_input_ids, shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "input_mask":
<del> tf.constant(
<del> all_input_mask,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> "segment_ids":
<del> tf.constant(
<del> all_segment_ids,
<del> shape=[num_examples, seq_length],
<del> dtype=tf.int32),
<del> }
<del> if is_training:
<del> feature_map["start_positions"] = tf.constant(
<del> all_start_positions, shape=[num_examples], dtype=tf.int32)
<del> feature_map["end_positions"] = tf.constant(
<del> all_end_positions, shape=[num_examples], dtype=tf.int32)
<del>
<del> d = tf.data.Dataset.from_tensor_slices(feature_map)
<del>
<del> if is_training:
<del> d = d.repeat()
<del> d = d.shuffle(buffer_size=100)
<del>
<del> d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
<del> return d
<del>
<del> return input_fn
<add> return input_fn
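The one `tf.data` option in `input_fn` worth calling out is `drop_remainder`: TPUs need static batch shapes, so a short final batch is discarded. A pure-Python sketch of that behavior (the `batch` helper is illustrative, not a `tf.data` API):

```python
def batch(examples, batch_size, drop_remainder):
    """Group examples into fixed-size batches; with drop_remainder=True
    a short final batch is discarded, as TPUs require static shapes."""
    batches = [examples[i:i + batch_size]
               for i in range(0, len(examples), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

full = batch(list(range(10)), 4, drop_remainder=False)  # keeps the short tail
tpu = batch(list(range(10)), 4, drop_remainder=True)    # drops it
```

This is why prediction on TPU pads the feature list up to a batch multiple elsewhere in the script, so no real example falls into a dropped remainder.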
<ide>
<ide>
<ide> RawResult = collections.namedtuple("RawResult",
<ide> def input_fn(params):
<ide> def write_predictions(all_examples, all_features, all_results, n_best_size,
<ide> max_answer_length, do_lower_case, output_prediction_file,
<ide> output_nbest_file):
<del> """Write final predictions to the json file."""
<del> tf.logging.info("Writing predictions to: %s" % (output_prediction_file))
<del> tf.logging.info("Writing nbest to: %s" % (output_nbest_file))
<del>
<del> example_index_to_features = collections.defaultdict(list)
<del> for feature in all_features:
<del> example_index_to_features[feature.example_index].append(feature)
<del>
<del> unique_id_to_result = {}
<del> for result in all_results:
<del> unique_id_to_result[result.unique_id] = result
<del>
<del> _PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
<del> "PrelimPrediction",
<del> ["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
<del>
<del> all_predictions = collections.OrderedDict()
<del> all_nbest_json = collections.OrderedDict()
<del> for (example_index, example) in enumerate(all_examples):
<del> features = example_index_to_features[example_index]
<del>
<del> prelim_predictions = []
<del> for (feature_index, feature) in enumerate(features):
<del> result = unique_id_to_result[feature.unique_id]
<del>
<del> start_indexes = _get_best_indexes(result.start_logits, n_best_size)
<del> end_indexes = _get_best_indexes(result.end_logits, n_best_size)
<del> for start_index in start_indexes:
<del> for end_index in end_indexes:
<del> # We could hypothetically create invalid predictions, e.g., predict
<del> # that the start of the span is in the question. We throw out all
<del> # invalid predictions.
<del> if start_index >= len(feature.tokens):
<del> continue
<del> if end_index >= len(feature.tokens):
<del> continue
<del> if start_index not in feature.token_to_orig_map:
<del> continue
<del> if end_index not in feature.token_to_orig_map:
<del> continue
<del> if not feature.token_is_max_context.get(start_index, False):
<del> continue
<del> if end_index < start_index:
<del> continue
<del> length = end_index - start_index + 1
<del> if length > max_answer_length:
<del> continue
<del> prelim_predictions.append(
<del> _PrelimPrediction(
<del> feature_index=feature_index,
<del> start_index=start_index,
<del> end_index=end_index,
<del> start_logit=result.start_logits[start_index],
<del> end_logit=result.end_logits[end_index]))
<del>
<del> prelim_predictions = sorted(
<del> prelim_predictions,
<del> key=lambda x: (x.start_logit + x.end_logit),
<del> reverse=True)
<del>
<del> _NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
<del> "NbestPrediction", ["text", "start_logit", "end_logit"])
<del>
<del> seen_predictions = {}
<del> nbest = []
<del> for pred in prelim_predictions:
<del> if len(nbest) >= n_best_size:
<del> break
<del> feature = features[pred.feature_index]
<del>
<del> tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
<del> orig_doc_start = feature.token_to_orig_map[pred.start_index]
<del> orig_doc_end = feature.token_to_orig_map[pred.end_index]
<del> orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
<del> tok_text = " ".join(tok_tokens)
<del>
<del> # De-tokenize WordPieces that have been split off.
<del> tok_text = tok_text.replace(" ##", "")
<del> tok_text = tok_text.replace("##", "")
<del>
<del> # Clean whitespace
<del> tok_text = tok_text.strip()
<del> tok_text = " ".join(tok_text.split())
<del> orig_text = " ".join(orig_tokens)
<del>
<del> final_text = get_final_text(tok_text, orig_text, do_lower_case)
<del> if final_text in seen_predictions:
<del> continue
<del>
<del> seen_predictions[final_text] = True
<del> nbest.append(
<del> _NbestPrediction(
<del> text=final_text,
<del> start_logit=pred.start_logit,
<del> end_logit=pred.end_logit))
<del>
<del> # In very rare edge cases we could have no valid predictions. So we
<del> # just create a nonce prediction in this case to avoid failure.
<del> if not nbest:
<del> nbest.append(
<del> _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
<del>
<del> assert len(nbest) >= 1
<del>
<del> total_scores = []
<del> for entry in nbest:
<del> total_scores.append(entry.start_logit + entry.end_logit)
<del>
<del> probs = _compute_softmax(total_scores)
<del>
<del> nbest_json = []
<del> for (i, entry) in enumerate(nbest):
<del> output = collections.OrderedDict()
<del> output["text"] = entry.text
<del> output["probability"] = probs[i]
<del> output["start_logit"] = entry.start_logit
<del> output["end_logit"] = entry.end_logit
<del> nbest_json.append(output)
<del>
<del> assert len(nbest_json) >= 1
<del>
<del> all_predictions[example.qas_id] = nbest_json[0]["text"]
<del> all_nbest_json[example.qas_id] = nbest_json
<del>
<del> with tf.gfile.GFile(output_prediction_file, "w") as writer:
<del> writer.write(json.dumps(all_predictions, indent=4) + "\n")
<del>
<del> with tf.gfile.GFile(output_nbest_file, "w") as writer:
<del> writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
<add> """Write final predictions to the json file."""
<add> tf.logging.info("Writing predictions to: %s" % (output_prediction_file))
<add> tf.logging.info("Writing nbest to: %s" % (output_nbest_file))
<add>
<add> example_index_to_features = collections.defaultdict(list)
<add> for feature in all_features:
<add> example_index_to_features[feature.example_index].append(feature)
<add>
<add> unique_id_to_result = {}
<add> for result in all_results:
<add> unique_id_to_result[result.unique_id] = result
<add>
<add> _PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
<add> "PrelimPrediction",
<add> ["feature_index", "start_index", "end_index", "start_logit", "end_logit"])
<add>
<add> all_predictions = collections.OrderedDict()
<add> all_nbest_json = collections.OrderedDict()
<add> for (example_index, example) in enumerate(all_examples):
<add> features = example_index_to_features[example_index]
<add>
<add> prelim_predictions = []
<add> for (feature_index, feature) in enumerate(features):
<add> result = unique_id_to_result[feature.unique_id]
<add>
<add> start_indexes = _get_best_indexes(result.start_logits, n_best_size)
<add> end_indexes = _get_best_indexes(result.end_logits, n_best_size)
<add> for start_index in start_indexes:
<add> for end_index in end_indexes:
<add> # We could hypothetically create invalid predictions, e.g., predict
<add> # that the start of the span is in the question. We throw out all
<add> # invalid predictions.
<add> if start_index >= len(feature.tokens):
<add> continue
<add> if end_index >= len(feature.tokens):
<add> continue
<add> if start_index not in feature.token_to_orig_map:
<add> continue
<add> if end_index not in feature.token_to_orig_map:
<add> continue
<add> if not feature.token_is_max_context.get(start_index, False):
<add> continue
<add> if end_index < start_index:
<add> continue
<add> length = end_index - start_index + 1
<add> if length > max_answer_length:
<add> continue
<add> prelim_predictions.append(
<add> _PrelimPrediction(
<add> feature_index=feature_index,
<add> start_index=start_index,
<add> end_index=end_index,
<add> start_logit=result.start_logits[start_index],
<add> end_logit=result.end_logits[end_index]))
<add>
<add> prelim_predictions = sorted(
<add> prelim_predictions,
<add> key=lambda x: (x.start_logit + x.end_logit),
<add> reverse=True)
<add>
<add> _NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
<add> "NbestPrediction", ["text", "start_logit", "end_logit"])
<add>
<add> seen_predictions = {}
<add> nbest = []
<add> for pred in prelim_predictions:
<add> if len(nbest) >= n_best_size:
<add> break
<add> feature = features[pred.feature_index]
<add>
<add> tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
<add> orig_doc_start = feature.token_to_orig_map[pred.start_index]
<add> orig_doc_end = feature.token_to_orig_map[pred.end_index]
<add> orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
<add> tok_text = " ".join(tok_tokens)
<add>
<add> # De-tokenize WordPieces that have been split off.
<add> tok_text = tok_text.replace(" ##", "")
<add> tok_text = tok_text.replace("##", "")
<add>
<add> # Clean whitespace
<add> tok_text = tok_text.strip()
<add> tok_text = " ".join(tok_text.split())
<add> orig_text = " ".join(orig_tokens)
<add>
<add> final_text = get_final_text(tok_text, orig_text, do_lower_case)
<add> if final_text in seen_predictions:
<add> continue
<add>
<add> seen_predictions[final_text] = True
<add> nbest.append(
<add> _NbestPrediction(
<add> text=final_text,
<add> start_logit=pred.start_logit,
<add> end_logit=pred.end_logit))
<add>
<add> # In very rare edge cases we could have no valid predictions. So we
<add> # just create a nonce prediction in this case to avoid failure.
<add> if not nbest:
<add> nbest.append(
<add> _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
<add>
<add> assert len(nbest) >= 1
<add>
<add> total_scores = []
<add> for entry in nbest:
<add> total_scores.append(entry.start_logit + entry.end_logit)
<add>
<add> probs = _compute_softmax(total_scores)
<add>
<add> nbest_json = []
<add> for (i, entry) in enumerate(nbest):
<add> output = collections.OrderedDict()
<add> output["text"] = entry.text
<add> output["probability"] = probs[i]
<add> output["start_logit"] = entry.start_logit
<add> output["end_logit"] = entry.end_logit
<add> nbest_json.append(output)
<add>
<add> assert len(nbest_json) >= 1
<add>
<add> all_predictions[example.qas_id] = nbest_json[0]["text"]
<add> all_nbest_json[example.qas_id] = nbest_json
<add>
<add> with tf.gfile.GFile(output_prediction_file, "w") as writer:
<add> writer.write(json.dumps(all_predictions, indent=4) + "\n")
<add>
<add> with tf.gfile.GFile(output_nbest_file, "w") as writer:
<add> writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
<ide>
<ide>
<ide> def get_final_text(pred_text, orig_text, do_lower_case):
<del> """Project the tokenized prediction back to the original text."""
<del>
<del> # When we created the data, we kept track of the alignment between original
<del> # (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
<del> # now `orig_text` contains the span of our original text corresponding to the
<del> # span that we predicted.
<del> #
<del> # However, `orig_text` may contain extra characters that we don't want in
<del> # our prediction.
<del> #
<del> # For example, let's say:
<del> # pred_text = steve smith
<del> # orig_text = Steve Smith's
<del> #
<del> # We don't want to return `orig_text` because it contains the extra "'s".
<del> #
<del> # We don't want to return `pred_text` because it's already been normalized
<del> # (the SQuAD eval script also does punctuation stripping/lower casing but
<del> # our tokenizer does additional normalization like stripping accent
<del> # characters).
<del> #
<del> # What we really want to return is "Steve Smith".
<del> #
<del> # Therefore, we have to apply a semi-complicated alignment heruistic between
<del> # `pred_text` and `orig_text` to get a character-to-charcter alignment. This
<del> # can fail in certain cases in which case we just return `orig_text`.
<del>
<del> def _strip_spaces(text):
<del> ns_chars = []
<del> ns_to_s_map = collections.OrderedDict()
<del> for (i, c) in enumerate(text):
<del> if c == " ":
<del> continue
<del> ns_to_s_map[len(ns_chars)] = i
<del> ns_chars.append(c)
<del> ns_text = "".join(ns_chars)
<del> return (ns_text, ns_to_s_map)
<del>
<del> # We first tokenize `orig_text`, strip whitespace from the result
<del> # and `pred_text`, and check if they are the same length. If they are
<del> # NOT the same length, the heuristic has failed. If they are the same
<del> # length, we assume the characters are one-to-one aligned.
<del> tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
<del>
<del> tok_text = " ".join(tokenizer.tokenize(orig_text))
<del>
<del> start_position = tok_text.find(pred_text)
<del> if start_position == -1:
<del> if FLAGS.verbose_logging:
<del> tf.logging.info(
<del> "Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
<del> return orig_text
<del> end_position = start_position + len(pred_text) - 1
<del>
<del> (orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
<del> (tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
<del>
<del> if len(orig_ns_text) != len(tok_ns_text):
<del> if FLAGS.verbose_logging:
<del> tf.logging.info("Length not equal after stripping spaces: '%s' vs '%s'",
<del> orig_ns_text, tok_ns_text)
<del> return orig_text
<del>
<del> # We then project the characters in `pred_text` back to `orig_text` using
<del> # the character-to-character alignment.
<del> tok_s_to_ns_map = {}
<del> for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
<del> tok_s_to_ns_map[tok_index] = i
<del>
<del> orig_start_position = None
<del> if start_position in tok_s_to_ns_map:
<del> ns_start_position = tok_s_to_ns_map[start_position]
<del> if ns_start_position in orig_ns_to_s_map:
<del> orig_start_position = orig_ns_to_s_map[ns_start_position]
<del>
<del> if orig_start_position is None:
<del> if FLAGS.verbose_logging:
<del> tf.logging.info("Couldn't map start position")
<del> return orig_text
<del>
<del> orig_end_position = None
<del> if end_position in tok_s_to_ns_map:
<del> ns_end_position = tok_s_to_ns_map[end_position]
<del> if ns_end_position in orig_ns_to_s_map:
<del> orig_end_position = orig_ns_to_s_map[ns_end_position]
<del>
<del> if orig_end_position is None:
<del> if FLAGS.verbose_logging:
<del> tf.logging.info("Couldn't map end position")
<del> return orig_text
<del>
<del> output_text = orig_text[orig_start_position:(orig_end_position + 1)]
<del> return output_text
<add> """Project the tokenized prediction back to the original text."""
<add>
<add> # When we created the data, we kept track of the alignment between original
<add> # (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
<add> # now `orig_text` contains the span of our original text corresponding to the
<add> # span that we predicted.
<add> #
<add> # However, `orig_text` may contain extra characters that we don't want in
<add> # our prediction.
<add> #
<add> # For example, let's say:
<add> # pred_text = steve smith
<add> # orig_text = Steve Smith's
<add> #
<add> # We don't want to return `orig_text` because it contains the extra "'s".
<add> #
<add> # We don't want to return `pred_text` because it's already been normalized
<add> # (the SQuAD eval script also does punctuation stripping/lower casing but
<add> # our tokenizer does additional normalization like stripping accent
<add> # characters).
<add> #
<add> # What we really want to return is "Steve Smith".
<add> #
<add> # Therefore, we have to apply a semi-complicated alignment heuristic between
<add> # `pred_text` and `orig_text` to get a character-to-character alignment. This
<add> # can fail in certain cases in which case we just return `orig_text`.
<add>
<add> def _strip_spaces(text):
<add> ns_chars = []
<add> ns_to_s_map = collections.OrderedDict()
<add> for (i, c) in enumerate(text):
<add> if c == " ":
<add> continue
<add> ns_to_s_map[len(ns_chars)] = i
<add> ns_chars.append(c)
<add> ns_text = "".join(ns_chars)
<add> return (ns_text, ns_to_s_map)
<add>
<add> # We first tokenize `orig_text`, strip whitespace from the result
<add> # and `pred_text`, and check if they are the same length. If they are
<add> # NOT the same length, the heuristic has failed. If they are the same
<add> # length, we assume the characters are one-to-one aligned.
<add> tokenizer = tokenization.BasicTokenizer(do_lower_case=do_lower_case)
<add>
<add> tok_text = " ".join(tokenizer.tokenize(orig_text))
<add>
<add> start_position = tok_text.find(pred_text)
<add> if start_position == -1:
<add> if FLAGS.verbose_logging:
<add> tf.logging.info(
<add> "Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
<add> return orig_text
<add> end_position = start_position + len(pred_text) - 1
<add>
<add> (orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
<add> (tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
<add>
<add> if len(orig_ns_text) != len(tok_ns_text):
<add> if FLAGS.verbose_logging:
<add> tf.logging.info("Length not equal after stripping spaces: '%s' vs '%s'",
<add> orig_ns_text, tok_ns_text)
<add> return orig_text
<add>
<add> # We then project the characters in `pred_text` back to `orig_text` using
<add> # the character-to-character alignment.
<add> tok_s_to_ns_map = {}
<add> for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
<add> tok_s_to_ns_map[tok_index] = i
<add>
<add> orig_start_position = None
<add> if start_position in tok_s_to_ns_map:
<add> ns_start_position = tok_s_to_ns_map[start_position]
<add> if ns_start_position in orig_ns_to_s_map:
<add> orig_start_position = orig_ns_to_s_map[ns_start_position]
<add>
<add> if orig_start_position is None:
<add> if FLAGS.verbose_logging:
<add> tf.logging.info("Couldn't map start position")
<add> return orig_text
<add>
<add> orig_end_position = None
<add> if end_position in tok_s_to_ns_map:
<add> ns_end_position = tok_s_to_ns_map[end_position]
<add> if ns_end_position in orig_ns_to_s_map:
<add> orig_end_position = orig_ns_to_s_map[ns_end_position]
<add>
<add> if orig_end_position is None:
<add> if FLAGS.verbose_logging:
<add> tf.logging.info("Couldn't map end position")
<add> return orig_text
<add>
<add> output_text = orig_text[orig_start_position:(orig_end_position + 1)]
<add> return output_text
<ide>
<ide>
<ide> def _get_best_indexes(logits, n_best_size):
<del> """Get the n-best logits from a list."""
<del> index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
<add> """Get the n-best logits from a list."""
<add> index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
<ide>
<del> best_indexes = []
<del> for i in range(len(index_and_score)):
<del> if i >= n_best_size:
<del> break
<del> best_indexes.append(index_and_score[i][0])
<del> return best_indexes
<add> best_indexes = []
<add> for i in range(len(index_and_score)):
<add> if i >= n_best_size:
<add> break
<add> best_indexes.append(index_and_score[i][0])
<add> return best_indexes
<ide>
<ide>
<ide> def _compute_softmax(scores):
<del> """Compute softmax probability over raw logits."""
<del> if not scores:
<del> return []
<add> """Compute softmax probability over raw logits."""
<add> if not scores:
<add> return []
<ide>
<del> max_score = None
<del> for score in scores:
<del> if max_score is None or score > max_score:
<del> max_score = score
<add> max_score = None
<add> for score in scores:
<add> if max_score is None or score > max_score:
<add> max_score = score
<ide>
<del> exp_scores = []
<del> total_sum = 0.0
<del> for score in scores:
<del> x = math.exp(score - max_score)
<del> exp_scores.append(x)
<del> total_sum += x
<add> exp_scores = []
<add> total_sum = 0.0
<add> for score in scores:
<add> x = math.exp(score - max_score)
<add> exp_scores.append(x)
<add> total_sum += x
<ide>
<del> probs = []
<del> for score in exp_scores:
<del> probs.append(score / total_sum)
<del> return probs
<add> probs = []
<add> for score in exp_scores:
<add> probs.append(score / total_sum)
<add> return probs
<ide>
<ide>
<ide> def main(_):
<del> tf.logging.set_verbosity(tf.logging.INFO)
<del>
<del> if not FLAGS.do_train and not FLAGS.do_predict:
<del> raise ValueError("At least one of `do_train` or `do_predict` must be True.")
<del>
<del> if FLAGS.do_train:
<del> if not FLAGS.train_file:
<del> raise ValueError(
<del> "If `do_train` is True, then `train_file` must be specified.")
<del> if FLAGS.do_predict:
<del> if not FLAGS.predict_file:
<del> raise ValueError(
<del> "If `do_predict` is True, then `predict_file` must be specified.")
<del>
<del> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<del>
<del> if FLAGS.max_seq_length > bert_config.max_position_embeddings:
<del> raise ValueError(
<del> "Cannot use sequence length %d because the BERT model "
<del> "was only trained up to sequence length %d" %
<del> (FLAGS.max_seq_length, bert_config.max_position_embeddings))
<del>
<del> tf.gfile.MakeDirs(FLAGS.output_dir)
<del>
<del> tokenizer = tokenization.FullTokenizer(
<del> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<del>
<del> tpu_cluster_resolver = None
<del> if FLAGS.use_tpu and FLAGS.tpu_name:
<del> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<del> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<del>
<del> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<del> run_config = tf.contrib.tpu.RunConfig(
<del> cluster=tpu_cluster_resolver,
<del> master=FLAGS.master,
<del> model_dir=FLAGS.output_dir,
<del> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<del> tpu_config=tf.contrib.tpu.TPUConfig(
<del> iterations_per_loop=FLAGS.iterations_per_loop,
<del> num_shards=FLAGS.num_tpu_cores,
<del> per_host_input_for_training=is_per_host))
<del>
<del> train_examples = None
<del> num_train_steps = None
<del> num_warmup_steps = None
<del> if FLAGS.do_train:
<del> train_examples = read_squad_examples(
<del> input_file=FLAGS.train_file, is_training=True)
<del> num_train_steps = int(
<del> len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
<del> num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
<del>
<del> model_fn = model_fn_builder(
<del> bert_config=bert_config,
<del> init_checkpoint=FLAGS.init_checkpoint,
<del> learning_rate=FLAGS.learning_rate,
<del> num_train_steps=num_train_steps,
<del> num_warmup_steps=num_warmup_steps,
<del> use_tpu=FLAGS.use_tpu,
<del> use_one_hot_embeddings=FLAGS.use_tpu)
<del>
<del> # If TPU is not available, this will fall back to normal Estimator on CPU
<del> # or GPU.
<del> estimator = tf.contrib.tpu.TPUEstimator(
<del> use_tpu=FLAGS.use_tpu,
<del> model_fn=model_fn,
<del> config=run_config,
<del> train_batch_size=FLAGS.train_batch_size,
<del> predict_batch_size=FLAGS.predict_batch_size)
<del>
<del> if FLAGS.do_train:
<del> train_features = convert_examples_to_features(
<del> examples=train_examples,
<del> tokenizer=tokenizer,
<del> max_seq_length=FLAGS.max_seq_length,
<del> doc_stride=FLAGS.doc_stride,
<del> max_query_length=FLAGS.max_query_length,
<del> is_training=True)
<del> tf.logging.info("***** Running training *****")
<del> tf.logging.info(" Num orig examples = %d", len(train_examples))
<del> tf.logging.info(" Num split examples = %d", len(train_features))
<del> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<del> tf.logging.info(" Num steps = %d", num_train_steps)
<del> train_input_fn = input_fn_builder(
<del> features=train_features,
<del> seq_length=FLAGS.max_seq_length,
<del> is_training=True,
<del> drop_remainder=True)
<del> estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
<del>
<del> if FLAGS.do_predict:
<del> eval_examples = read_squad_examples(
<del> input_file=FLAGS.predict_file, is_training=False)
<del> eval_features = convert_examples_to_features(
<del> examples=eval_examples,
<del> tokenizer=tokenizer,
<del> max_seq_length=FLAGS.max_seq_length,
<del> doc_stride=FLAGS.doc_stride,
<del> max_query_length=FLAGS.max_query_length,
<del> is_training=False)
<del>
<del> tf.logging.info("***** Running predictions *****")
<del> tf.logging.info(" Num orig examples = %d", len(eval_examples))
<del> tf.logging.info(" Num split examples = %d", len(eval_features))
<del> tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
<del>
<del> all_results = []
<del>
<del> predict_input_fn = input_fn_builder(
<del> features=eval_features,
<del> seq_length=FLAGS.max_seq_length,
<del> is_training=False,
<del> drop_remainder=False)
<del>
<del> # If running eval on the TPU, you will need to specify the number of
<del> # steps.
<del> all_results = []
<del> for result in estimator.predict(
<del> predict_input_fn, yield_single_examples=True):
<del> if len(all_results) % 1000 == 0:
<del> tf.logging.info("Processing example: %d" % (len(all_results)))
<del> unique_id = int(result["unique_ids"])
<del> start_logits = [float(x) for x in result["start_logits"].flat]
<del> end_logits = [float(x) for x in result["end_logits"].flat]
<del> all_results.append(
<del> RawResult(
<del> unique_id=unique_id,
<del> start_logits=start_logits,
<del> end_logits=end_logits))
<del>
<del> output_prediction_file = os.path.join(FLAGS.output_dir, "predictions.json")
<del> output_nbest_file = os.path.join(FLAGS.output_dir, "nbest_predictions.json")
<del> write_predictions(eval_examples, eval_features, all_results,
<del> FLAGS.n_best_size, FLAGS.max_answer_length,
<del> FLAGS.do_lower_case, output_prediction_file,
<del> output_nbest_file)
<add> tf.logging.set_verbosity(tf.logging.INFO)
<add>
<add> if not FLAGS.do_train and not FLAGS.do_predict:
<add> raise ValueError("At least one of `do_train` or `do_predict` must be True.")
<add>
<add> if FLAGS.do_train:
<add> if not FLAGS.train_file:
<add> raise ValueError(
<add> "If `do_train` is True, then `train_file` must be specified.")
<add> if FLAGS.do_predict:
<add> if not FLAGS.predict_file:
<add> raise ValueError(
<add> "If `do_predict` is True, then `predict_file` must be specified.")
<add>
<add> bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
<add>
<add> if FLAGS.max_seq_length > bert_config.max_position_embeddings:
<add> raise ValueError(
<add> "Cannot use sequence length %d because the BERT model "
<add> "was only trained up to sequence length %d" %
<add> (FLAGS.max_seq_length, bert_config.max_position_embeddings))
<add>
<add> tf.gfile.MakeDirs(FLAGS.output_dir)
<add>
<add> tokenizer = tokenization.FullTokenizer(
<add> vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
<add>
<add> tpu_cluster_resolver = None
<add> if FLAGS.use_tpu and FLAGS.tpu_name:
<add> tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
<add> FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
<add>
<add> is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
<add> run_config = tf.contrib.tpu.RunConfig(
<add> cluster=tpu_cluster_resolver,
<add> master=FLAGS.master,
<add> model_dir=FLAGS.output_dir,
<add> save_checkpoints_steps=FLAGS.save_checkpoints_steps,
<add> tpu_config=tf.contrib.tpu.TPUConfig(
<add> iterations_per_loop=FLAGS.iterations_per_loop,
<add> num_shards=FLAGS.num_tpu_cores,
<add> per_host_input_for_training=is_per_host))
<add>
<add> train_examples = None
<add> num_train_steps = None
<add> num_warmup_steps = None
<add> if FLAGS.do_train:
<add> train_examples = read_squad_examples(
<add> input_file=FLAGS.train_file, is_training=True)
<add> num_train_steps = int(
<add> len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
<add> num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
<add>
<add> model_fn = model_fn_builder(
<add> bert_config=bert_config,
<add> init_checkpoint=FLAGS.init_checkpoint,
<add> learning_rate=FLAGS.learning_rate,
<add> num_train_steps=num_train_steps,
<add> num_warmup_steps=num_warmup_steps,
<add> use_tpu=FLAGS.use_tpu,
<add> use_one_hot_embeddings=FLAGS.use_tpu)
<add>
<add> # If TPU is not available, this will fall back to normal Estimator on CPU
<add> # or GPU.
<add> estimator = tf.contrib.tpu.TPUEstimator(
<add> use_tpu=FLAGS.use_tpu,
<add> model_fn=model_fn,
<add> config=run_config,
<add> train_batch_size=FLAGS.train_batch_size,
<add> predict_batch_size=FLAGS.predict_batch_size)
<add>
<add> if FLAGS.do_train:
<add> train_features = convert_examples_to_features(
<add> examples=train_examples,
<add> tokenizer=tokenizer,
<add> max_seq_length=FLAGS.max_seq_length,
<add> doc_stride=FLAGS.doc_stride,
<add> max_query_length=FLAGS.max_query_length,
<add> is_training=True)
<add> tf.logging.info("***** Running training *****")
<add> tf.logging.info(" Num orig examples = %d", len(train_examples))
<add> tf.logging.info(" Num split examples = %d", len(train_features))
<add> tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
<add> tf.logging.info(" Num steps = %d", num_train_steps)
<add> train_input_fn = input_fn_builder(
<add> features=train_features,
<add> seq_length=FLAGS.max_seq_length,
<add> is_training=True,
<add> drop_remainder=True)
<add> estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
<add>
<add> if FLAGS.do_predict:
<add> eval_examples = read_squad_examples(
<add> input_file=FLAGS.predict_file, is_training=False)
<add> eval_features = convert_examples_to_features(
<add> examples=eval_examples,
<add> tokenizer=tokenizer,
<add> max_seq_length=FLAGS.max_seq_length,
<add> doc_stride=FLAGS.doc_stride,
<add> max_query_length=FLAGS.max_query_length,
<add> is_training=False)
<add>
<add> tf.logging.info("***** Running predictions *****")
<add> tf.logging.info(" Num orig examples = %d", len(eval_examples))
<add> tf.logging.info(" Num split examples = %d", len(eval_features))
<add> tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
<add>
<add> all_results = []
<add>
<add> predict_input_fn = input_fn_builder(
<add> features=eval_features,
<add> seq_length=FLAGS.max_seq_length,
<add> is_training=False,
<add> drop_remainder=False)
<add>
<add> # If running eval on the TPU, you will need to specify the number of
<add> # steps.
<add> all_results = []
<add> for result in estimator.predict(
<add> predict_input_fn, yield_single_examples=True):
<add> if len(all_results) % 1000 == 0:
<add> tf.logging.info("Processing example: %d" % (len(all_results)))
<add> unique_id = int(result["unique_ids"])
<add> start_logits = [float(x) for x in result["start_logits"].flat]
<add> end_logits = [float(x) for x in result["end_logits"].flat]
<add> all_results.append(
<add> RawResult(
<add> unique_id=unique_id,
<add> start_logits=start_logits,
<add> end_logits=end_logits))
<add>
<add> output_prediction_file = os.path.join(FLAGS.output_dir, "predictions.json")
<add> output_nbest_file = os.path.join(FLAGS.output_dir, "nbest_predictions.json")
<add> write_predictions(eval_examples, eval_features, all_results,
<add> FLAGS.n_best_size, FLAGS.max_answer_length,
<add> FLAGS.do_lower_case, output_prediction_file,
<add> output_nbest_file)
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> flags.mark_flag_as_required("vocab_file")
<del> flags.mark_flag_as_required("bert_config_file")
<del> flags.mark_flag_as_required("output_dir")
<del> tf.app.run()
<add> flags.mark_flag_as_required("vocab_file")
<add> flags.mark_flag_as_required("bert_config_file")
<add> flags.mark_flag_as_required("output_dir")
<add> tf.app.run()
<ide><path>tokenization.py
<ide>
<ide>
<ide> def convert_to_unicode(text):
<del> """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
<del> if six.PY3:
<del> if isinstance(text, str):
<del> return text
<del> elif isinstance(text, bytes):
<del> return text.decode("utf-8", "ignore")
<add> """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
<add> if six.PY3:
<add> if isinstance(text, str):
<add> return text
<add> elif isinstance(text, bytes):
<add> return text.decode("utf-8", "ignore")
<add> else:
<add> raise ValueError("Unsupported string type: %s" % (type(text)))
<add> elif six.PY2:
<add> if isinstance(text, str):
<add> return text.decode("utf-8", "ignore")
<add> elif isinstance(text, unicode):
<add> return text
<add> else:
<add> raise ValueError("Unsupported string type: %s" % (type(text)))
<ide> else:
<del> raise ValueError("Unsupported string type: %s" % (type(text)))
<del> elif six.PY2:
<del> if isinstance(text, str):
<del> return text.decode("utf-8", "ignore")
<del> elif isinstance(text, unicode):
<del> return text
<del> else:
<del> raise ValueError("Unsupported string type: %s" % (type(text)))
<del> else:
<del> raise ValueError("Not running on Python2 or Python 3?")
<add> raise ValueError("Not running on Python 2 or Python 3?")
<ide>
<ide>
<ide> def printable_text(text):
<del> """Returns text encoded in a way suitable for print or `tf.logging`."""
<del>
<del> # These functions want `str` for both Python2 and Python3, but in one case
<del> # it's a Unicode string and in the other it's a byte string.
<del> if six.PY3:
<del> if isinstance(text, str):
<del> return text
<del> elif isinstance(text, bytes):
<del> return text.decode("utf-8", "ignore")
<del> else:
<del> raise ValueError("Unsupported string type: %s" % (type(text)))
<del> elif six.PY2:
<del> if isinstance(text, str):
<del> return text
<del> elif isinstance(text, unicode):
<del> return text.encode("utf-8")
<add> """Returns text encoded in a way suitable for print or `tf.logging`."""
<add>
<add> # These functions want `str` for both Python2 and Python3, but in one case
<add> # it's a Unicode string and in the other it's a byte string.
<add> if six.PY3:
<add> if isinstance(text, str):
<add> return text
<add> elif isinstance(text, bytes):
<add> return text.decode("utf-8", "ignore")
<add> else:
<add> raise ValueError("Unsupported string type: %s" % (type(text)))
<add> elif six.PY2:
<add> if isinstance(text, str):
<add> return text
<add> elif isinstance(text, unicode):
<add> return text.encode("utf-8")
<add> else:
<add> raise ValueError("Unsupported string type: %s" % (type(text)))
<ide> else:
<del> raise ValueError("Unsupported string type: %s" % (type(text)))
<del> else:
<del> raise ValueError("Not running on Python2 or Python 3?")
<add> raise ValueError("Not running on Python 2 or Python 3?")
<ide>
<ide>
<ide> def load_vocab(vocab_file):
<del> """Loads a vocabulary file into a dictionary."""
<del> vocab = collections.OrderedDict()
<del> index = 0
<del> with tf.gfile.GFile(vocab_file, "r") as reader:
<del> while True:
<del> token = convert_to_unicode(reader.readline())
<del> if not token:
<del> break
<del> token = token.strip()
<del> vocab[token] = index
<del> index += 1
<del> return vocab
<add> """Loads a vocabulary file into a dictionary."""
<add> vocab = collections.OrderedDict()
<add> index = 0
<add> with tf.gfile.GFile(vocab_file, "r") as reader:
<add> while True:
<add> token = convert_to_unicode(reader.readline())
<add> if not token:
<add> break
<add> token = token.strip()
<add> vocab[token] = index
<add> index += 1
<add> return vocab
<ide>
<ide>
<ide> def convert_tokens_to_ids(vocab, tokens):
<del> """Converts a sequence of tokens into ids using the vocab."""
<del> ids = []
<del> for token in tokens:
<del> ids.append(vocab[token])
<del> return ids
<add> """Converts a sequence of tokens into ids using the vocab."""
<add> ids = []
<add> for token in tokens:
<add> ids.append(vocab[token])
<add> return ids
<ide>
<ide>
<ide> def whitespace_tokenize(text):
<del> """Runs basic whitespace cleaning and splitting on a peice of text."""
<del> text = text.strip()
<del> if not text:
<del> return []
<del> tokens = text.split()
<del> return tokens
<add> """Runs basic whitespace cleaning and splitting on a piece of text."""
<add> text = text.strip()
<add> if not text:
<add> return []
<add> tokens = text.split()
<add> return tokens
<ide>
<ide>
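The `WordpieceTokenizer` class re-indented below implements greedy longest-match-first tokenization: at each position it takes the longest substring present in the vocabulary (continuation pieces carry a `##` prefix) and falls back to an unknown token if nothing matches. A minimal self-contained sketch of that loop, with illustrative names:

```python
def wordpiece_tokenize(token, vocab, unk="[UNK]"):
    # Greedy longest-match-first over a single whitespace token.
    chars = list(token)
    start, pieces = 0, []
    while start < len(chars):
        end = len(chars)
        cur = None
        while start < end:
            sub = "".join(chars[start:end])
            if start > 0:
                sub = "##" + sub  # continuation pieces are marked with "##"
            if sub in vocab:
                cur = sub
                break
            end -= 1  # shrink the candidate until it is in the vocab
        if cur is None:
            return [unk]  # no prefix matched: the whole token is unknown
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", vocab))  # → ['un', '##aff', '##able']
```

This matches the docstring example in the class below ("unaffable" → `["un", "##aff", "##able"]`); the real class additionally caps token length via `max_input_chars_per_word`.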
<ide> class FullTokenizer(object):
<del> """Runs end-to-end tokenziation."""
<add> """Runs end-to-end tokenization."""
<ide>
<del> def __init__(self, vocab_file, do_lower_case=True):
<del> self.vocab = load_vocab(vocab_file)
<del> self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
<del> self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
<add> def __init__(self, vocab_file, do_lower_case=True):
<add> self.vocab = load_vocab(vocab_file)
<add> self.basic_tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
<add> self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab)
<ide>
<del> def tokenize(self, text):
<del> split_tokens = []
<del> for token in self.basic_tokenizer.tokenize(text):
<del> for sub_token in self.wordpiece_tokenizer.tokenize(token):
<del> split_tokens.append(sub_token)
<add> def tokenize(self, text):
<add> split_tokens = []
<add> for token in self.basic_tokenizer.tokenize(text):
<add> for sub_token in self.wordpiece_tokenizer.tokenize(token):
<add> split_tokens.append(sub_token)
<ide>
<del> return split_tokens
<add> return split_tokens
<ide>
<del> def convert_tokens_to_ids(self, tokens):
<del> return convert_tokens_to_ids(self.vocab, tokens)
<add> def convert_tokens_to_ids(self, tokens):
<add> return convert_tokens_to_ids(self.vocab, tokens)
<ide>
<ide>
<ide> class BasicTokenizer(object):
<del> """Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
<del>
<del> def __init__(self, do_lower_case=True):
<del> """Constructs a BasicTokenizer.
<del>
<del> Args:
<del> do_lower_case: Whether to lower case the input.
<del> """
<del> self.do_lower_case = do_lower_case
<del>
<del> def tokenize(self, text):
<del> """Tokenizes a piece of text."""
<del> text = convert_to_unicode(text)
<del> text = self._clean_text(text)
<del> orig_tokens = whitespace_tokenize(text)
<del> split_tokens = []
<del> for token in orig_tokens:
<del> if self.do_lower_case:
<del> token = token.lower()
<del> token = self._run_strip_accents(token)
<del> split_tokens.extend(self._run_split_on_punc(token))
<del>
<del> output_tokens = whitespace_tokenize(" ".join(split_tokens))
<del> return output_tokens
<del>
<del> def _run_strip_accents(self, text):
<del> """Strips accents from a piece of text."""
<del> text = unicodedata.normalize("NFD", text)
<del> output = []
<del> for char in text:
<del> cat = unicodedata.category(char)
<del> if cat == "Mn":
<del> continue
<del> output.append(char)
<del> return "".join(output)
<del>
<del> def _run_split_on_punc(self, text):
<del> """Splits punctuation on a piece of text."""
<del> chars = list(text)
<del> i = 0
<del> start_new_word = True
<del> output = []
<del> while i < len(chars):
<del> char = chars[i]
<del> if _is_punctuation(char):
<del> output.append([char])
<add> """Runs basic tokenization (punctuation splitting, lower casing, etc.)."""
<add>
<add> def __init__(self, do_lower_case=True):
<add> """Constructs a BasicTokenizer.
<add>
<add> Args:
<add> do_lower_case: Whether to lower case the input.
<add> """
<add> self.do_lower_case = do_lower_case
<add>
<add> def tokenize(self, text):
<add> """Tokenizes a piece of text."""
<add> text = convert_to_unicode(text)
<add> text = self._clean_text(text)
<add> orig_tokens = whitespace_tokenize(text)
<add> split_tokens = []
<add> for token in orig_tokens:
<add> if self.do_lower_case:
<add> token = token.lower()
<add> token = self._run_strip_accents(token)
<add> split_tokens.extend(self._run_split_on_punc(token))
<add>
<add> output_tokens = whitespace_tokenize(" ".join(split_tokens))
<add> return output_tokens
<add>
<add> def _run_strip_accents(self, text):
<add> """Strips accents from a piece of text."""
<add> text = unicodedata.normalize("NFD", text)
<add> output = []
<add> for char in text:
<add> cat = unicodedata.category(char)
<add> if cat == "Mn":
<add> continue
<add> output.append(char)
<add> return "".join(output)
<add>
<add> def _run_split_on_punc(self, text):
<add> """Splits punctuation on a piece of text."""
<add> chars = list(text)
<add> i = 0
<ide> start_new_word = True
<del> else:
<del> if start_new_word:
<del> output.append([])
<del> start_new_word = False
<del> output[-1].append(char)
<del> i += 1
<del>
<del> return ["".join(x) for x in output]
<del>
<del> def _clean_text(self, text):
<del> """Performs invalid character removal and whitespace cleanup on text."""
<del> output = []
<del> for char in text:
<del> cp = ord(char)
<del> if cp == 0 or cp == 0xfffd or _is_control(char):
<del> continue
<del> if _is_whitespace(char):
<del> output.append(" ")
<del> else:
<del> output.append(char)
<del> return "".join(output)
<add> output = []
<add> while i < len(chars):
<add> char = chars[i]
<add> if _is_punctuation(char):
<add> output.append([char])
<add> start_new_word = True
<add> else:
<add> if start_new_word:
<add> output.append([])
<add> start_new_word = False
<add> output[-1].append(char)
<add> i += 1
<add>
<add> return ["".join(x) for x in output]
<add>
<add> def _clean_text(self, text):
<add> """Performs invalid character removal and whitespace cleanup on text."""
<add> output = []
<add> for char in text:
<add> cp = ord(char)
<add> if cp == 0 or cp == 0xfffd or _is_control(char):
<add> continue
<add> if _is_whitespace(char):
<add> output.append(" ")
<add> else:
<add> output.append(char)
<add> return "".join(output)
<ide>
<ide>
<ide> class WordpieceTokenizer(object):
<del> """Runs WordPiece tokenziation."""
<del>
<del> def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100):
<del> self.vocab = vocab
<del> self.unk_token = unk_token
<del> self.max_input_chars_per_word = max_input_chars_per_word
<del>
<del> def tokenize(self, text):
<del> """Tokenizes a piece of text into its word pieces.
<del>
<del> This uses a greedy longest-match-first algorithm to perform tokenization
<del> using the given vocabulary.
<del>
<del> For example:
<del> input = "unaffable"
<del> output = ["un", "##aff", "##able"]
<del>
<del> Args:
<del> text: A single token or whitespace separated tokens. This should have
<del> already been passed through `BasicTokenizer.
<del>
<del> Returns:
<del> A list of wordpiece tokens.
<del> """
<del>
<del> text = convert_to_unicode(text)
<del>
<del> output_tokens = []
<del> for token in whitespace_tokenize(text):
<del> chars = list(token)
<del> if len(chars) > self.max_input_chars_per_word:
<del> output_tokens.append(self.unk_token)
<del> continue
<del>
<del> is_bad = False
<del> start = 0
<del> sub_tokens = []
<del> while start < len(chars):
<del> end = len(chars)
<del> cur_substr = None
<del> while start < end:
<del> substr = "".join(chars[start:end])
<del> if start > 0:
<del> substr = "##" + substr
<del> if substr in self.vocab:
<del> cur_substr = substr
<del> break
<del> end -= 1
<del> if cur_substr is None:
<del> is_bad = True
<del> break
<del> sub_tokens.append(cur_substr)
<del> start = end
<del>
<del> if is_bad:
<del> output_tokens.append(self.unk_token)
<del> else:
<del> output_tokens.extend(sub_tokens)
<del> return output_tokens
<add> """Runs WordPiece tokenziation."""
<add>
<add> def __init__(self, vocab, unk_token="[UNK]", max_input_chars_per_word=100):
<add> self.vocab = vocab
<add> self.unk_token = unk_token
<add> self.max_input_chars_per_word = max_input_chars_per_word
<add>
<add> def tokenize(self, text):
<add> """Tokenizes a piece of text into its word pieces.
<add>
<add> This uses a greedy longest-match-first algorithm to perform tokenization
<add> using the given vocabulary.
<add>
<add> For example:
<add> input = "unaffable"
<add> output = ["un", "##aff", "##able"]
<add>
<add> Args:
<add> text: A single token or whitespace separated tokens. This should have
<add> already been passed through `BasicTokenizer.
<add>
<add> Returns:
<add> A list of wordpiece tokens.
<add> """
<add>
<add> text = convert_to_unicode(text)
<add>
<add> output_tokens = []
<add> for token in whitespace_tokenize(text):
<add> chars = list(token)
<add> if len(chars) > self.max_input_chars_per_word:
<add> output_tokens.append(self.unk_token)
<add> continue
<add>
<add> is_bad = False
<add> start = 0
<add> sub_tokens = []
<add> while start < len(chars):
<add> end = len(chars)
<add> cur_substr = None
<add> while start < end:
<add> substr = "".join(chars[start:end])
<add> if start > 0:
<add> substr = "##" + substr
<add> if substr in self.vocab:
<add> cur_substr = substr
<add> break
<add> end -= 1
<add> if cur_substr is None:
<add> is_bad = True
<add> break
<add> sub_tokens.append(cur_substr)
<add> start = end
<add>
<add> if is_bad:
<add> output_tokens.append(self.unk_token)
<add> else:
<add> output_tokens.extend(sub_tokens)
<add> return output_tokens
<ide>
<ide>
<ide> def _is_whitespace(char):
<del> """Checks whether `chars` is a whitespace character."""
<del> # \t, \n, and \r are technically contorl characters but we treat them
<del> # as whitespace since they are generally considered as such.
<del> if char == " " or char == "\t" or char == "\n" or char == "\r":
<del> return True
<del> cat = unicodedata.category(char)
<del> if cat == "Zs":
<del> return True
<del> return False
<add> """Checks whether `chars` is a whitespace character."""
<add> # \t, \n, and \r are technically contorl characters but we treat them
<add> # as whitespace since they are generally considered as such.
<add> if char == " " or char == "\t" or char == "\n" or char == "\r":
<add> return True
<add> cat = unicodedata.category(char)
<add> if cat == "Zs":
<add> return True
<add> return False
<ide>
<ide>
<ide> def _is_control(char):
<del> """Checks whether `chars` is a control character."""
<del> # These are technically control characters but we count them as whitespace
<del> # characters.
<del> if char == "\t" or char == "\n" or char == "\r":
<add> """Checks whether `chars` is a control character."""
<add> # These are technically control characters but we count them as whitespace
<add> # characters.
<add> if char == "\t" or char == "\n" or char == "\r":
<add> return False
<add> cat = unicodedata.category(char)
<add> if cat.startswith("C"):
<add> return True
<ide> return False
<del> cat = unicodedata.category(char)
<del> if cat.startswith("C"):
<del> return True
<del> return False
<ide>
<ide>
<ide> def _is_punctuation(char):
<del> """Checks whether `chars` is a punctuation character."""
<del> cp = ord(char)
<del> # We treat all non-letter/number ASCII as punctuation.
<del> # Characters such as "^", "$", and "`" are not in the Unicode
<del> # Punctuation class but we treat them as punctuation anyways, for
<del> # consistency.
<del> if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or
<del> (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)):
<del> return True
<del> cat = unicodedata.category(char)
<del> if cat.startswith("P"):
<del> return True
<del> return False
<add> """Checks whether `chars` is a punctuation character."""
<add> cp = ord(char)
<add> # We treat all non-letter/number ASCII as punctuation.
<add> # Characters such as "^", "$", and "`" are not in the Unicode
<add> # Punctuation class but we treat them as punctuation anyways, for
<add> # consistency.
<add> if ((cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or
<add> (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126)):
<add> return True
<add> cat = unicodedata.category(char)
<add> if cat.startswith("P"):
<add> return True
<add> return False
<ide><path>tokenization_test.py
<ide>
<ide> class TokenizationTest(tf.test.TestCase):
<ide>
<del> def test_full_tokenizer(self):
<del> vocab_tokens = [
<del> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<del> "##ing", ","
<del> ]
<del> with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
<del> vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
<add> def test_full_tokenizer(self):
<add> vocab_tokens = [
<add> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<add> "##ing", ","
<add> ]
<add> with tempfile.NamedTemporaryFile(delete=False) as vocab_writer:
<add> vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))
<ide>
<del> vocab_file = vocab_writer.name
<add> vocab_file = vocab_writer.name
<ide>
<del> tokenizer = tokenization.FullTokenizer(vocab_file)
<del> os.unlink(vocab_file)
<add> tokenizer = tokenization.FullTokenizer(vocab_file)
<add> os.unlink(vocab_file)
<ide>
<del> tokens = tokenizer.tokenize(u"UNwant\u00E9d,running")
<del> self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"])
<add> tokens = tokenizer.tokenize(u"UNwant\u00E9d,running")
<add> self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"])
<ide>
<del> self.assertAllEqual(
<del> tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9])
<add> self.assertAllEqual(
<add> tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9])
<ide>
<del> def test_basic_tokenizer_lower(self):
<del> tokenizer = tokenization.BasicTokenizer(do_lower_case=True)
<add> def test_basic_tokenizer_lower(self):
<add> tokenizer = tokenization.BasicTokenizer(do_lower_case=True)
<ide>
<del> self.assertAllEqual(
<del> tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
<del> ["hello", "!", "how", "are", "you", "?"])
<del> self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"])
<add> self.assertAllEqual(
<add> tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
<add> ["hello", "!", "how", "are", "you", "?"])
<add> self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"])
<ide>
<del> def test_basic_tokenizer_no_lower(self):
<del> tokenizer = tokenization.BasicTokenizer(do_lower_case=False)
<add> def test_basic_tokenizer_no_lower(self):
<add> tokenizer = tokenization.BasicTokenizer(do_lower_case=False)
<ide>
<del> self.assertAllEqual(
<del> tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
<del> ["HeLLo", "!", "how", "Are", "yoU", "?"])
<add> self.assertAllEqual(
<add> tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "),
<add> ["HeLLo", "!", "how", "Are", "yoU", "?"])
<ide>
<del> def test_wordpiece_tokenizer(self):
<del> vocab_tokens = [
<del> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<del> "##ing"
<del> ]
<add> def test_wordpiece_tokenizer(self):
<add> vocab_tokens = [
<add> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<add> "##ing"
<add> ]
<ide>
<del> vocab = {}
<del> for (i, token) in enumerate(vocab_tokens):
<del> vocab[token] = i
<del> tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)
<add> vocab = {}
<add> for (i, token) in enumerate(vocab_tokens):
<add> vocab[token] = i
<add> tokenizer = tokenization.WordpieceTokenizer(vocab=vocab)
<ide>
<del> self.assertAllEqual(tokenizer.tokenize(""), [])
<add> self.assertAllEqual(tokenizer.tokenize(""), [])
<ide>
<del> self.assertAllEqual(
<del> tokenizer.tokenize("unwanted running"),
<del> ["un", "##want", "##ed", "runn", "##ing"])
<add> self.assertAllEqual(
<add> tokenizer.tokenize("unwanted running"),
<add> ["un", "##want", "##ed", "runn", "##ing"])
<ide>
<del> self.assertAllEqual(
<del> tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"])
<add> self.assertAllEqual(
<add> tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"])
<ide>
<del> def test_convert_tokens_to_ids(self):
<del> vocab_tokens = [
<del> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<del> "##ing"
<del> ]
<add> def test_convert_tokens_to_ids(self):
<add> vocab_tokens = [
<add> "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn",
<add> "##ing"
<add> ]
<ide>
<del> vocab = {}
<del> for (i, token) in enumerate(vocab_tokens):
<del> vocab[token] = i
<add> vocab = {}
<add> for (i, token) in enumerate(vocab_tokens):
<add> vocab[token] = i
<ide>
<del> self.assertAllEqual(
<del> tokenization.convert_tokens_to_ids(
<del> vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9])
<add> self.assertAllEqual(
<add> tokenization.convert_tokens_to_ids(
<add> vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9])
<ide>
<del> def test_is_whitespace(self):
<del> self.assertTrue(tokenization._is_whitespace(u" "))
<del> self.assertTrue(tokenization._is_whitespace(u"\t"))
<del> self.assertTrue(tokenization._is_whitespace(u"\r"))
<del> self.assertTrue(tokenization._is_whitespace(u"\n"))
<del> self.assertTrue(tokenization._is_whitespace(u"\u00A0"))
<add> def test_is_whitespace(self):
<add> self.assertTrue(tokenization._is_whitespace(u" "))
<add> self.assertTrue(tokenization._is_whitespace(u"\t"))
<add> self.assertTrue(tokenization._is_whitespace(u"\r"))
<add> self.assertTrue(tokenization._is_whitespace(u"\n"))
<add> self.assertTrue(tokenization._is_whitespace(u"\u00A0"))
<ide>
<del> self.assertFalse(tokenization._is_whitespace(u"A"))
<del> self.assertFalse(tokenization._is_whitespace(u"-"))
<add> self.assertFalse(tokenization._is_whitespace(u"A"))
<add> self.assertFalse(tokenization._is_whitespace(u"-"))
<ide>
<del> def test_is_control(self):
<del> self.assertTrue(tokenization._is_control(u"\u0005"))
<add> def test_is_control(self):
<add> self.assertTrue(tokenization._is_control(u"\u0005"))
<ide>
<del> self.assertFalse(tokenization._is_control(u"A"))
<del> self.assertFalse(tokenization._is_control(u" "))
<del> self.assertFalse(tokenization._is_control(u"\t"))
<del> self.assertFalse(tokenization._is_control(u"\r"))
<add> self.assertFalse(tokenization._is_control(u"A"))
<add> self.assertFalse(tokenization._is_control(u" "))
<add> self.assertFalse(tokenization._is_control(u"\t"))
<add> self.assertFalse(tokenization._is_control(u"\r"))
<ide>
<del> def test_is_punctuation(self):
<del> self.assertTrue(tokenization._is_punctuation(u"-"))
<del> self.assertTrue(tokenization._is_punctuation(u"$"))
<del> self.assertTrue(tokenization._is_punctuation(u"`"))
<del> self.assertTrue(tokenization._is_punctuation(u"."))
<add> def test_is_punctuation(self):
<add> self.assertTrue(tokenization._is_punctuation(u"-"))
<add> self.assertTrue(tokenization._is_punctuation(u"$"))
<add> self.assertTrue(tokenization._is_punctuation(u"`"))
<add> self.assertTrue(tokenization._is_punctuation(u"."))
<ide>
<del> self.assertFalse(tokenization._is_punctuation(u"A"))
<del> self.assertFalse(tokenization._is_punctuation(u" "))
<add> self.assertFalse(tokenization._is_punctuation(u"A"))
<add> self.assertFalse(tokenization._is_punctuation(u" "))
<ide>
<ide>
<ide> if __name__ == "__main__":
<del> tf.test.main()
<add> tf.test.main() | 11 |
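The two patches above reindent a BERT-style WordPiece tokenizer and its test file from 2-space to 4-space indentation without changing behavior. The greedy longest-match-first loop the tokenizer preserves can be sketched on its own as follows (the toy vocabulary is illustrative, borrowed from `test_wordpiece_tokenizer` above):

```python
def wordpiece(token, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece split, as in the patch above.

    Repeatedly take the longest vocab entry that prefixes the remaining
    characters; continuation pieces are looked up with a "##" prefix.
    """
    chars = list(token)
    pieces, start = [], 0
    while start < len(chars):
        end, cur = len(chars), None
        while start < end:
            sub = "".join(chars[start:end])
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1
        if cur is None:
            return [unk]  # some suffix matched nothing: whole token is unknown
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##want", "##ed", "runn", "##ing"}
print(wordpiece("unwanted", vocab))   # → ['un', '##want', '##ed']
print(wordpiece("unwantedX", vocab))  # → ['[UNK]']
```

This reproduces the expectations in the test file: a token whose tail matches no vocab entry collapses to the single unknown token rather than a partial split.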
Ruby | Ruby | ignore unused optional and recommended deps | fe802f05ef60118b67a6999e3eb2c17fdef08645 | <ide><path>Library/Homebrew/cmd/doctor.rb
<ide> def check_tmpdir
<ide> def check_missing_deps
<ide> return unless HOMEBREW_CELLAR.exist?
<ide> s = Set.new
<del> Homebrew.missing_deps(Homebrew.installed_brews).each do |_, deps|
<add> Homebrew.missing_deps(Formula.installed).each do |_, deps|
<ide> s.merge deps
<ide> end
<ide>
<ide><path>Library/Homebrew/cmd/missing.rb
<ide> require 'formula'
<add>require 'tab'
<ide>
<ide> module Homebrew extend self
<del> def installed_brews
<del> formulae = []
<del> HOMEBREW_CELLAR.subdirs.each do |rack|
<del> f = Formula.factory rack.basename.to_s rescue nil
<del> formulae << f if f and f.rack.exist? and f.rack.subdirs.length > 0
<del> end
<del> formulae
<del> end
<del>
<ide> def missing_deps ff
<ide> missing = {}
<ide> ff.each do |f|
<del> missing_deps = f.recursive_deps.uniq.reject do |dep|
<del> dep.rack.exist? and dep.rack.subdirs.length > 0
<add> missing_deps = f.recursive_dependencies do |dependent, dep|
<add> if dep.optional? || dep.recommended?
<add> tab = Tab.for_formula(dependent)
<add> Dependency.prune unless tab.with?(dep.name)
<add> elsif dep.build?
<add> Dependency.prune
<ide> end
<add> end
<add>
<add> missing_deps.map!(&:to_formula)
<add> missing_deps.reject! { |d| d.rack.exist? && d.rack.subdirs.length > 0 }
<ide>
<ide> unless missing_deps.empty?
<ide> yield f.name, missing_deps if block_given?
<ide> def missing
<ide> return unless HOMEBREW_CELLAR.exist?
<ide>
<ide> ff = if ARGV.named.empty?
<del> installed_brews
<add> Formula.installed
<ide> else
<ide> ARGV.formulae
<ide> end | 2 |
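The Ruby patch above makes `brew missing` prune build-time dependencies outright, and prune optional/recommended dependencies unless the installed formula's Tab records them as requested at install time. A rough Python sketch of that pruning walk (the `Dep` record and the sample formulae are hypothetical stand-ins, not Homebrew's actual data model):

```python
from collections import namedtuple

# kind is one of: "required", "build", "optional", "recommended"
Dep = namedtuple("Dep", "name kind deps")

def missing_deps(formula, installed, with_opts):
    """Collect runtime deps of `formula` that are not installed,
    pruning build deps and unrequested optional/recommended deps."""
    missing, seen = [], set()

    def walk(f):
        for dep in f.deps:
            if dep.name in seen:
                continue
            seen.add(dep.name)
            if dep.kind == "build":
                continue  # build-time only: prune (Dependency.prune)
            if (dep.kind in ("optional", "recommended")
                    and dep.name not in with_opts.get(f.name, ())):
                continue  # not installed --with-<dep>: prune
            if dep.name not in installed:
                missing.append(dep.name)
            walk(dep)  # recurse into the dep's own dependencies

    walk(formula)
    return missing

libpng = Dep("libpng", "required", [])
asciidoc = Dep("asciidoc", "build", [])
gd = Dep("gd", "optional", [])
imagemagick = Dep("imagemagick", "required", [libpng, asciidoc, gd])
print(missing_deps(imagemagick, {"imagemagick"}, {}))  # → ['libpng']
```

With no `--with-gd` recorded, only the required runtime dep is reported missing; passing `with_opts={"imagemagick": ("gd",)}` would bring the optional dep back into the walk.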
Javascript | Javascript | fix typo in internal message event name | 4d49469d0df0730b16630d8f32fbfc55aaaa0952 | <ide><path>lib/child_process.js
<ide> function setupChannel(target, channel) {
<ide> typeof message === 'object' &&
<ide> typeof message.cmd === 'string' &&
<ide> message.cmd.indexOf('NODE_') === 0) {
<del> target.emit('inernalMessage', message, recvHandle);
<add> target.emit('internalMessage', message, recvHandle);
<ide> }
<ide> //Non-internal message
<ide> else { | 1 |
Javascript | Javascript | apply review feedback | 61ce0f79c1eb2a491fd993ac35612824a59415ed | <ide><path>lib/optimize/CommonsChunkPlugin.js
<ide> The available options are:
<ide> You can however specify the name of the async chunk by passing the desired string as the \"async\" option.`);
<ide> }
<ide>
<del> const chunkNames = options.name ? [options.name] : options.names;
<add> /**
<add> * Make sure this is either an array or undefined.
<add> * "name" can be a string and
<add> * "names" a string or an array
<add> */
<add> const chunkNames = options.name || options.names ? [].concat(options.name || options.names) : undefined;
<ide> return {
<ide> chunkNames: chunkNames,
<ide> filenameTemplate: options.filename,
<ide> You can however specify the name of the async chunk by passing the desired strin
<ide> // we have specified chunk names
<ide> if(chunkNames) {
<ide> // map chunks by chunkName for quick access
<del> const optimizedChunkMap = allChunks.reduce((map, chunk) => {
<del> map.set(chunk.name, chunk);
<add> const allChunksNameMap = allChunks.reduce((map, chunk) => {
<add> if(chunk.name) {
<add> map.set(chunk.name, chunk);
<add> }
<ide> return map;
<ide> }, new Map());
<ide>
<ide> // Ensure we have a chunk per specified chunk name.
<ide> // Reuse existing chunks if possible
<ide> return chunkNames.map(chunkName => {
<del> if(optimizedChunkMap.has(chunkName)) {
<del> return optimizedChunkMap.get(chunkName);
<add> if(allChunksNameMap.has(chunkName)) {
<add> return allChunksNameMap.get(chunkName);
<ide> }
<ide> // add the filtered chunks to the compilation
<ide> return compilation.addChunk(chunkName); | 1 |
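The CommonsChunkPlugin patch above normalizes the `name`/`names` options into one array (in JS, `[].concat(...)` accepts either a string or an array) and then reuses existing named chunks via a name→chunk map instead of creating duplicates. The same normalize-then-reuse idea, sketched in Python with hypothetical chunk dicts standing in for webpack's chunk objects:

```python
def normalize_chunk_names(name=None, names=None):
    """Accept a scalar or list under either option; return a flat
    list of names, or None when neither option was given."""
    value = name if name is not None else names
    if value is None:
        return None
    return value if isinstance(value, list) else [value]

def ensure_chunks(chunk_names, all_chunks, add_chunk):
    """Reuse an existing chunk per requested name when one exists,
    otherwise create it (add_chunk plays compilation.addChunk)."""
    by_name = {c["name"]: c for c in all_chunks if c.get("name")}
    return [by_name[n] if n in by_name else add_chunk(n)
            for n in chunk_names]

print(normalize_chunk_names(name="vendor"))     # → ['vendor']
print(normalize_chunk_names(names=["a", "b"]))  # → ['a', 'b']
```

Note this sketch keys the reuse map only on chunks that actually have a name, mirroring the `if(chunk.name)` guard added in the patch.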
Go | Go | use assert.nilerror() instead of assert.assert() | 3449b12cc7eefa8ebd0de6ec8b9803c6ee823af0 | <ide><path>client/hijack_test.go
<ide> func TestTLSCloseWriter(t *testing.T) {
<ide> break
<ide> }
<ide> }
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> ts.Listener = l
<ide> defer l.Close()
<ide> func TestTLSCloseWriter(t *testing.T) {
<ide> defer ts.Close()
<ide>
<ide> serverURL, err := url.Parse(ts.URL)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> client, err := NewClient("tcp://"+serverURL.Host, "", ts.Client(), nil)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> resp, err := client.postHijacked(context.Background(), "/asdf", url.Values{}, nil, map[string][]string{"Content-Type": {"text/plain"}})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer resp.Close()
<ide>
<ide> if _, ok := resp.Conn.(types.CloseWriter); !ok {
<ide> t.Fatal("tls conn did not implement the CloseWrite interface")
<ide> }
<ide>
<ide> _, err = resp.Conn.Write([]byte("hello"))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> b, err := ioutil.ReadAll(resp.Reader)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, string(b) == "hello")
<ide> assert.Assert(t, resp.CloseWrite())
<ide>
<ide><path>daemon/daemon_linux_test.go
<ide> func TestRootMountCleanup(t *testing.T) {
<ide> t.Parallel()
<ide>
<ide> testRoot, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(testRoot)
<ide> cfg := &config.Config{}
<ide>
<ide> err = mount.MakePrivate(testRoot)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer mount.Unmount(testRoot)
<ide>
<ide> cfg.ExecRoot = filepath.Join(testRoot, "exec")
<ide> cfg.Root = filepath.Join(testRoot, "daemon")
<ide>
<ide> err = os.Mkdir(cfg.ExecRoot, 0755)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = os.Mkdir(cfg.Root, 0755)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> d := &Daemon{configStore: cfg, root: cfg.Root}
<ide> unmountFile := getUnmountOnShutdownPath(cfg)
<ide>
<ide> t.Run("regular dir no mountpoint", func(t *testing.T) {
<ide> err = setupDaemonRootPropagation(cfg)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = os.Stat(unmountFile)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> checkMounted(t, cfg.Root, true)
<ide>
<ide> assert.Assert(t, d.cleanupMounts())
<ide> func TestRootMountCleanup(t *testing.T) {
<ide>
<ide> t.Run("root is a private mountpoint", func(t *testing.T) {
<ide> err = mount.MakePrivate(cfg.Root)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer mount.Unmount(cfg.Root)
<ide>
<ide> err = setupDaemonRootPropagation(cfg)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, ensureShared(cfg.Root))
<ide>
<ide> _, err = os.Stat(unmountFile)
<ide> func TestRootMountCleanup(t *testing.T) {
<ide> // mount is pre-configured with a shared mount
<ide> t.Run("root is a shared mountpoint", func(t *testing.T) {
<ide> err = mount.MakeShared(cfg.Root)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer mount.Unmount(cfg.Root)
<ide>
<ide> err = setupDaemonRootPropagation(cfg)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> if _, err := os.Stat(unmountFile); err == nil {
<ide> t.Fatal("unmount file should not exist")
<ide> func TestRootMountCleanup(t *testing.T) {
<ide> // does not need mount but unmount file exists from previous run
<ide> t.Run("old mount file is cleaned up on setup if not needed", func(t *testing.T) {
<ide> err = mount.MakeShared(testRoot)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer mount.MakePrivate(testRoot)
<ide> err = ioutil.WriteFile(unmountFile, nil, 0644)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> err = setupDaemonRootPropagation(cfg)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> _, err = os.Stat(unmountFile)
<ide> assert.Check(t, os.IsNotExist(err), err)
<ide><path>daemon/logger/jsonfilelog/read_test.go
<ide> func TestEncodeDecode(t *testing.T) {
<ide>
<ide> decode := decodeFunc(buf)
<ide> msg, err := decode()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, string(msg.Line) == "hello 1\n", string(msg.Line))
<ide>
<ide> msg, err = decode()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, string(msg.Line) == "hello 2\n")
<ide>
<ide> msg, err = decode()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, string(msg.Line) == "hello 3\n")
<ide>
<ide> _, err = decode()
<ide><path>daemon/logger/local/local_test.go
<ide> func TestWriteLog(t *testing.T) {
<ide> t.Parallel()
<ide>
<ide> dir, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(dir)
<ide>
<ide> logPath := filepath.Join(dir, "test.log")
<ide>
<ide> l, err := New(logger.Info{LogPath: logPath})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer l.Close()
<ide>
<ide> m1 := logger.Message{Source: "stdout", Timestamp: time.Now().Add(-1 * 30 * time.Minute), Line: []byte("message 1")}
<ide> func TestWriteLog(t *testing.T) {
<ide>
<ide> // copy the log message because the underying log writer resets the log message and returns it to a buffer pool
<ide> err = l.Log(copyLogMessage(&m1))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = l.Log(copyLogMessage(&m2))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = l.Log(copyLogMessage(&m3))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> f, err := os.Open(logPath)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer f.Close()
<ide> dec := protoio.NewUint32DelimitedReader(f, binary.BigEndian, 1e6)
<ide>
<ide> func TestWriteLog(t *testing.T) {
<ide> }
<ide>
<ide> err = dec.ReadMsg(&proto)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> messageToProto(&m1, &testProto, &partial)
<ide> assert.Check(t, is.DeepEqual(testProto, proto), "expected:\n%+v\ngot:\n%+v", testProto, proto)
<ide> seekMsgLen()
<ide>
<ide> err = dec.ReadMsg(&proto)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> messageToProto(&m2, &testProto, &partial)
<ide> assert.Check(t, is.DeepEqual(testProto, proto))
<ide> seekMsgLen()
<ide>
<ide> err = dec.ReadMsg(&proto)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> messageToProto(&m3, &testProto, &partial)
<ide> assert.Check(t, is.DeepEqual(testProto, proto), "expected:\n%+v\ngot:\n%+v", testProto, proto)
<ide> }
<ide> func TestReadLog(t *testing.T) {
<ide> t.Parallel()
<ide>
<ide> dir, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(dir)
<ide>
<ide> logPath := filepath.Join(dir, "test.log")
<ide> l, err := New(logger.Info{LogPath: logPath})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer l.Close()
<ide>
<ide> m1 := logger.Message{Source: "stdout", Timestamp: time.Now().Add(-1 * 30 * time.Minute), Line: []byte("a message")}
<ide> func TestReadLog(t *testing.T) {
<ide>
<ide> // copy the log message because the underlying log writer resets the log message and returns it to a buffer pool
<ide> err = l.Log(copyLogMessage(&m1))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = l.Log(copyLogMessage(&m2))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = l.Log(copyLogMessage(&m3))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = l.Log(copyLogMessage(&m4))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> lr := l.(logger.LogReader)
<ide>
<ide> func TestReadLog(t *testing.T) {
<ide> case <-ctx.Done():
<ide> assert.Assert(t, ctx.Err())
<ide> case err := <-lw.Err:
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> case msg, open := <-lw.Msg:
<ide> if !open {
<ide> select {
<ide> case err := <-lw.Err:
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> default:
<ide> assert.Assert(t, m == nil)
<ide> return
<ide><path>daemon/logger/loggerutils/logfile_test.go
<ide> func TestTailFiles(t *testing.T) {
<ide> case <-time.After(60 * time.Second):
<ide> t.Fatal("timeout waiting for tail line")
<ide> case err := <-watcher.Err:
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> case msg := <-watcher.Msg:
<ide> assert.Assert(t, msg != nil)
<ide> assert.Assert(t, string(msg.Line) == "Roads?", string(msg.Line))
<ide> func TestTailFiles(t *testing.T) {
<ide> case <-time.After(60 * time.Second):
<ide> t.Fatal("timeout waiting for tail line")
<ide> case err := <-watcher.Err:
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> case msg := <-watcher.Msg:
<ide> assert.Assert(t, msg != nil)
<ide> assert.Assert(t, string(msg.Line) == "Where we're going we don't need roads.", string(msg.Line))
<ide><path>integration/plugin/authz/authz_plugin_test.go
<ide> func TestAuthzPluginEnsureContainerCopyToFrom(t *testing.T) {
<ide> d.StartWithBusybox(t, "--authorization-plugin="+testAuthZPlugin, "--authorization-plugin="+testAuthZPlugin)
<ide>
<ide> dir, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(dir)
<ide>
<ide> f, err := ioutil.TempFile(dir, "send")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer f.Close()
<ide>
<ide> buf := make([]byte, 1024)
<ide> fileSize := len(buf) * 1024 * 10
<ide> for written := 0; written < fileSize; {
<ide> n, err := f.Write(buf)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> written += n
<ide> }
<ide>
<ide> func TestAuthzPluginEnsureContainerCopyToFrom(t *testing.T) {
<ide> defer c.ContainerRemove(ctx, cID, types.ContainerRemoveOptions{Force: true})
<ide>
<ide> _, err = f.Seek(0, io.SeekStart)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> srcInfo, err := archive.CopyInfoSourcePath(f.Name(), false)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> srcArchive, err := archive.TarResource(srcInfo)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer srcArchive.Close()
<ide>
<ide> dstDir, preparedArchive, err := archive.PrepareArchiveCopy(srcArchive, srcInfo, archive.CopyInfo{Path: "/test"})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> err = c.CopyToContainer(ctx, cID, dstDir, preparedArchive, types.CopyToContainerOptions{})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> rdr, _, err := c.CopyFromContainer(ctx, cID, "/test")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = io.Copy(ioutil.Discard, rdr)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> }
<ide>
<ide> func imageSave(client client.APIClient, path, image string) error {
<ide><path>integration/plugin/logging/logging_linux_test.go
<ide> func TestContinueAfterPluginCrash(t *testing.T) {
<ide>
<ide> // Attach to the container to make sure it's written a few times to stdout
<ide> attach, err := client.ContainerAttach(context.Background(), id, types.ContainerAttachOptions{Stream: true, Stdout: true})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> chErr := make(chan error)
<ide> go func() {
<ide> func TestContinueAfterPluginCrash(t *testing.T) {
<ide>
<ide> select {
<ide> case err := <-chErr:
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> case <-time.After(60 * time.Second):
<ide> t.Fatal("timeout waiting for container i/o")
<ide> }
<ide> func TestContinueAfterPluginCrash(t *testing.T) {
<ide> // TODO(@cpuguy83): This is horribly hacky but is the only way to really test this case right now.
<ide> // It would be nice if there was a way to know that a broken pipe has occurred without looking through the logs.
<ide> log, err := os.Open(d.LogFileName())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> scanner := bufio.NewScanner(log)
<ide> for scanner.Scan() {
<ide> assert.Assert(t, !strings.Contains(scanner.Text(), "broken pipe"))
<ide><path>integration/plugin/volumes/mounts_test.go
<ide> func TestPluginWithDevMounts(t *testing.T) {
<ide> ctx := context.Background()
<ide>
<ide> testDir, err := ioutil.TempDir("", "test-dir")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(testDir)
<ide>
<ide> createPlugin(t, c, "test", "dummy", asVolumeDriver, func(c *plugin.Config) {
<ide> func TestPluginWithDevMounts(t *testing.T) {
<ide> })
<ide>
<ide> err = c.PluginEnable(ctx, "test", types.PluginEnableOptions{Timeout: 30})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer func() {
<ide> err := c.PluginRemove(ctx, "test", types.PluginRemoveOptions{Force: true})
<ide> assert.Check(t, err)
<ide> }()
<ide>
<ide> p, _, err := c.PluginInspectWithRaw(ctx, "test")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, p.Enabled)
<ide> }
<ide><path>pkg/tailfile/tailfile_test.go
<ide> func TestNewTailReader(t *testing.T) {
<ide> assert.Assert(t, lines == 0)
<ide> return
<ide> }
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, lines == i, "%d -- %d", lines, i)
<ide>
<ide> b, err := ioutil.ReadAll(tr)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> expectLines := test.data[len(test.data)-i:]
<ide> assert.Check(t, len(expectLines) == i)
<ide> func TestNewTailReader(t *testing.T) {
<ide> return
<ide> }
<ide>
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, lines == len(test.data), "%d -- %d", lines, len(test.data))
<ide> b, err := ioutil.ReadAll(tr)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, bytes.Equal(b, []byte(s)), "\n%v\n%v", b, []byte(s))
<ide> })
<ide> })
<ide> func TestNewTailReader(t *testing.T) {
<ide> t.Run("truncated last line", func(t *testing.T) {
<ide> t.Run("more than available", func(t *testing.T) {
<ide> tail, nLines, err := NewTailReader(ctx, strings.NewReader("a\nb\nextra"), 3)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, nLines == 2, nLines)
<ide>
<ide> rdr := bufio.NewReader(tail)
<ide> data, _, err := rdr.ReadLine()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, string(data) == "a", string(data))
<ide>
<ide> data, _, err = rdr.ReadLine()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, string(data) == "b", string(data))
<ide>
<ide> _, _, err = rdr.ReadLine()
<ide> func TestNewTailReader(t *testing.T) {
<ide> t.Run("truncated last line", func(t *testing.T) {
<ide> t.Run("exact", func(t *testing.T) {
<ide> tail, nLines, err := NewTailReader(ctx, strings.NewReader("a\nb\nextra"), 2)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, nLines == 2, nLines)
<ide>
<ide> rdr := bufio.NewReader(tail)
<ide> data, _, err := rdr.ReadLine()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, string(data) == "a", string(data))
<ide>
<ide> data, _, err = rdr.ReadLine()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, string(data) == "b", string(data))
<ide>
<ide> _, _, err = rdr.ReadLine()
<ide> func TestNewTailReader(t *testing.T) {
<ide> t.Run("truncated last line", func(t *testing.T) {
<ide> t.Run("one line", func(t *testing.T) {
<ide> tail, nLines, err := NewTailReader(ctx, strings.NewReader("a\nb\nextra"), 1)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, nLines == 1, nLines)
<ide>
<ide> rdr := bufio.NewReader(tail)
<ide> data, _, err := rdr.ReadLine()
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, string(data) == "b", string(data))
<ide>
<ide> _, _, err = rdr.ReadLine()
<ide><path>plugin/executor/containerd/containerd_test.go
<ide> func TestLifeCycle(t *testing.T) {
<ide> mock.simulateStartError(false, id)
<ide>
<ide> err = exec.Create(id, specs.Spec{}, nil, nil)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> running, _ := exec.IsRunning(id)
<ide> assert.Assert(t, running)
<ide>
<ide> func TestLifeCycle(t *testing.T) {
<ide> mock.HandleExitEvent(id) // simulate a plugin that exits
<ide>
<ide> err = exec.Create(id, specs.Spec{}, nil, nil)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> }
<ide>
<ide> func setupTest(t *testing.T, client Client, eh ExitHandler) (*Executor, func()) {
<ide> rootDir, err := ioutil.TempDir("", "test-daemon")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, client != nil)
<ide> assert.Assert(t, eh != nil)
<ide>
<ide><path>volume/service/service_linux_test.go
<ide> func TestLocalVolumeSize(t *testing.T) {
<ide>
<ide> ds := volumedrivers.NewStore(nil)
<ide> dir, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> defer os.RemoveAll(dir)
<ide>
<ide> l, err := local.New(dir, idtools.Identity{UID: os.Getuid(), GID: os.Getegid()})
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, ds.Register(l, volume.DefaultDriverName))
<ide> assert.Assert(t, ds.Register(testutils.NewFakeDriver("fake"), "fake"))
<ide>
<ide> func TestLocalVolumeSize(t *testing.T) {
<ide>
<ide> ctx := context.Background()
<ide> v1, err := service.Create(ctx, "test1", volume.DefaultDriverName, opts.WithCreateReference("foo"))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> v2, err := service.Create(ctx, "test2", volume.DefaultDriverName)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = service.Create(ctx, "test3", "fake")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> data := make([]byte, 1024)
<ide> err = ioutil.WriteFile(filepath.Join(v1.Mountpoint, "data"), data, 0644)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> err = ioutil.WriteFile(filepath.Join(v2.Mountpoint, "data"), data[:1], 0644)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> ls, err := service.LocalVolumesSize(ctx)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(ls, 2))
<ide>
<ide> for _, v := range ls {
<ide><path>volume/service/service_test.go
<ide> func TestServiceCreate(t *testing.T) {
<ide> assert.Assert(t, errdefs.IsNotFound(err), err)
<ide>
<ide> v, err := service.Create(ctx, "v1", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> vCopy, err := service.Create(ctx, "v1", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.DeepEqual(v, vCopy))
<ide>
<ide> _, err = service.Create(ctx, "v1", "d2")
<ide> func TestServiceCreate(t *testing.T) {
<ide>
<ide> assert.Assert(t, service.Remove(ctx, "v1"))
<ide> _, err = service.Create(ctx, "v1", "d2")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = service.Create(ctx, "v1", "d2")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> }
<ide>
<ide> func TestServiceList(t *testing.T) {
<ide> ctx := context.Background()
<ide>
<ide> _, err := service.Create(ctx, "v1", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = service.Create(ctx, "v2", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = service.Create(ctx, "v3", "d2")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> ls, _, err := service.List(ctx, filters.NewArgs(filters.Arg("driver", "d1")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 2))
<ide>
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("driver", "d2")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 1))
<ide>
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("driver", "notexist")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 0))
<ide>
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "true")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 3))
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "false")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 0))
<ide>
<ide> _, err = service.Get(ctx, "v1", opts.WithGetReference("foo"))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "true")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 2))
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "false")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 1))
<ide>
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "false"), filters.Arg("driver", "d2")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 0))
<ide> ls, _, err = service.List(ctx, filters.NewArgs(filters.Arg("dangling", "true"), filters.Arg("driver", "d2")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Check(t, is.Len(ls, 1))
<ide> }
<ide>
<ide> func TestServiceRemove(t *testing.T) {
<ide> ctx := context.Background()
<ide>
<ide> _, err := service.Create(ctx, "test", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> assert.Assert(t, service.Remove(ctx, "test"))
<ide> assert.Assert(t, service.Remove(ctx, "test", opts.WithPurgeOnError(true)))
<ide> func TestServiceGet(t *testing.T) {
<ide> assert.Check(t, v == nil)
<ide>
<ide> created, err := service.Create(ctx, "test", "d1")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, created != nil)
<ide>
<ide> v, err = service.Get(ctx, "test")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.DeepEqual(created, v))
<ide>
<ide> v, err = service.Get(ctx, "test", opts.WithGetResolveStatus)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(v.Status, 1), v.Status)
<ide>
<ide> v, err = service.Get(ctx, "test", opts.WithGetDriver("notarealdriver"))
<ide> func TestServicePrune(t *testing.T) {
<ide> ctx := context.Background()
<ide>
<ide> _, err := service.Create(ctx, "test", volume.DefaultDriverName)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> _, err = service.Create(ctx, "test2", "other")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> pr, err := service.Prune(ctx, filters.NewArgs(filters.Arg("label", "banana")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 0))
<ide>
<ide> pr, err = service.Prune(ctx, filters.NewArgs())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 1))
<ide> assert.Assert(t, is.Equal(pr.VolumesDeleted[0], "test"))
<ide>
<ide> _, err = service.Get(ctx, "test")
<ide> assert.Assert(t, IsNotExist(err), err)
<ide>
<ide> v, err := service.Get(ctx, "test2")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Equal(v.Driver, "other"))
<ide>
<ide> _, err = service.Create(ctx, "test", volume.DefaultDriverName)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> pr, err = service.Prune(ctx, filters.NewArgs(filters.Arg("label!", "banana")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 1))
<ide> assert.Assert(t, is.Equal(pr.VolumesDeleted[0], "test"))
<ide> v, err = service.Get(ctx, "test2")
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Equal(v.Driver, "other"))
<ide>
<ide> _, err = service.Create(ctx, "test", volume.DefaultDriverName, opts.WithCreateLabels(map[string]string{"banana": ""}))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> pr, err = service.Prune(ctx, filters.NewArgs(filters.Arg("label!", "banana")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 0))
<ide>
<ide> _, err = service.Create(ctx, "test3", volume.DefaultDriverName, opts.WithCreateLabels(map[string]string{"banana": "split"}))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> pr, err = service.Prune(ctx, filters.NewArgs(filters.Arg("label!", "banana=split")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 1))
<ide> assert.Assert(t, is.Equal(pr.VolumesDeleted[0], "test"))
<ide>
<ide> pr, err = service.Prune(ctx, filters.NewArgs(filters.Arg("label", "banana=split")))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 1))
<ide> assert.Assert(t, is.Equal(pr.VolumesDeleted[0], "test3"))
<ide>
<ide> v, err = service.Create(ctx, "test", volume.DefaultDriverName, opts.WithCreateReference(t.Name()))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> pr, err = service.Prune(ctx, filters.NewArgs())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 0))
<ide> assert.Assert(t, service.Release(ctx, v.Name, t.Name()))
<ide>
<ide> pr, err = service.Prune(ctx, filters.NewArgs())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, is.Len(pr.VolumesDeleted, 1))
<ide> assert.Assert(t, is.Equal(pr.VolumesDeleted[0], "test"))
<ide> }
<ide> func newTestService(t *testing.T, ds *volumedrivers.Store) (*VolumesService, fun
<ide> t.Helper()
<ide>
<ide> dir, err := ioutil.TempDir("", t.Name())
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide>
<ide> store, err := NewStore(dir, ds)
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> s := &VolumesService{vs: store, eventLogger: dummyEventLogger{}}
<ide> return s, func() {
<ide> assert.Check(t, s.Shutdown())
<ide><path>volume/service/store_test.go
<ide> func TestFindByReferenced(t *testing.T) {
<ide> }
<ide>
<ide> dangling, _, err := s.Find(ctx, ByReferenced(false))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, len(dangling) == 1)
<ide> assert.Check(t, dangling[0].Name() == "fake2")
<ide>
<ide> used, _, err := s.Find(ctx, ByReferenced(true))
<del> assert.Assert(t, err)
<add> assert.NilError(t, err)
<ide> assert.Assert(t, len(used) == 1)
<ide> assert.Check(t, used[0].Name() == "fake1")
<ide> } | 13 |
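The sweep above replaces `assert.Assert(t, err)` with `assert.NilError(t, err)`. The distinction matters: `Assert` is meant for boolean conditions, while `NilError` fails the test exactly when the error is non-nil, which is the intent of every one of these call sites. A simplified Go sketch of the two contracts (illustrative helpers, not the actual gotest.tools implementation):

```go
package main

import "fmt"

// failer is the small subset of *testing.T these helpers need.
type failer interface {
	Fatalf(format string, args ...interface{})
}

// Assert fails when the condition is false; it is meant for booleans,
// not for error values.
func Assert(t failer, ok bool, msg string) {
	if !ok {
		t.Fatalf("assertion failed: %s", msg)
	}
}

// NilError fails when err is non-nil -- the right helper for the
// common "this call must succeed" check in the diff above.
func NilError(t failer, err error) {
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
}

// strErr is a tiny error type for the demo.
type strErr string

func (e strErr) Error() string { return string(e) }

// recorder implements failer and remembers the last failure message.
type recorder struct{ failed string }

func (r *recorder) Fatalf(format string, args ...interface{}) {
	r.failed = fmt.Sprintf(format, args...)
}

func main() {
	r := &recorder{}
	NilError(r, nil)
	fmt.Println(r.failed == "") // true: a nil error passes
	NilError(r, strErr("boom"))
	fmt.Println(r.failed != "") // true: a non-nil error fails the test
}
```

With this split, a failing call reports "unexpected error: …" instead of a generic assertion failure, which is why the mechanical rewrite above improves the test output.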
Javascript | Javascript | add basic rescale support for smasks | 5ad3a9cc72de0fb4bf5a02c0084be5a40742b01c |
<ide> var PDFImage = (function PDFImageClosure() {
<ide> smaskPromise.resolve(null);
<ide> };
<ide>
<add> /**
<add> * Resize an image using the nearest neighbor algorithm. Currently only
<add> * supports one-component images.
<add> * @param {TypedArray} pixels The original image with one component.
<add> * @param {Number} w1 Original width.
<add> * @param {Number} h1 Original height.
<add> * @param {Number} w2 New width.
<add> * @param {Number} h2 New height.
<add> * @return {TypedArray} Resized image data.
<add> */
<add> PDFImage.resize = function resize(pixels, w1, h1, w2, h2) {
<add> var temp = new Uint8Array(w2 * h2);
<add> var xRatio = w1 / w2;
<add> var yRatio = h1 / h2;
<add> var px, py;
<add> for (var i = 0; i < h2; i++) {
<add> for (var j = 0; j < w2; j++) {
<add> px = Math.floor(j * xRatio);
<add> py = Math.floor(i * yRatio);
<add> temp[(i * w2) + j] = pixels[((py * w1) + px)];
<add> }
<add> }
<add> return temp;
<add> };
<add>
<ide> PDFImage.prototype = {
<ide> getComponents: function getComponents(buffer) {
<ide> var bpc = this.bpc;
<ide> var PDFImage = (function PDFImageClosure() {
<ide> var smask = this.smask;
<ide> var width = this.width;
<ide> var height = this.height;
<del> var buf = new Uint8Array(width * height);
<add> var buf;
<ide>
<ide> if (smask) {
<ide> var sw = smask.width;
<ide> var sh = smask.height;
<del> if (sw != this.width || sh != this.height)
<del> error('smask dimensions do not match image dimensions: ' + sw +
<del> ' != ' + this.width + ', ' + sh + ' != ' + this.height);
<del>
<add>        buf = new Uint8Array(sw * sh);
<ide> smask.fillGrayBuffer(buf);
<add> if (sw != this.width || sh != this.height)
<add> buf = PDFImage.resize(buf, sw, sh, this.width, this.height);
<ide> return buf;
<ide> } else {
<add>        buf = new Uint8Array(width * height);
<ide> for (var i = 0, ii = width * height; i < ii; ++i)
<ide> buf[i] = 255;
<ide> } | 1 |
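The `PDFImage.resize` helper added above is plain nearest-neighbor sampling: each destination pixel copies the source pixel at the floored, scaled coordinates. The same algorithm as a Go sketch (integer division replaces the floating-point ratio; the two are equivalent for non-negative sizes):

```go
package main

import "fmt"

// ResizeNearest scales a single-component image from w1×h1 to w2×h2 with
// nearest-neighbor sampling, mirroring the JavaScript PDFImage.resize above:
// destination pixel (j, i) copies source pixel (floor(j*w1/w2), floor(i*h1/h2)).
func ResizeNearest(pixels []byte, w1, h1, w2, h2 int) []byte {
	out := make([]byte, w2*h2)
	for i := 0; i < h2; i++ {
		py := i * h1 / h2 // integer division == floor for non-negative values
		for j := 0; j < w2; j++ {
			px := j * w1 / w2
			out[i*w2+j] = pixels[py*w1+px]
		}
	}
	return out
}

func main() {
	// Upscale a 2×2 mask to 4×4: every source pixel becomes a 2×2 block,
	// which is how a small smask gets stretched to the image dimensions.
	src := []byte{1, 2, 3, 4}
	fmt.Println(ResizeNearest(src, 2, 2, 4, 4))
	// [1 1 2 2 1 1 2 2 3 3 4 4 3 3 4 4]
}
```

The same function also downscales (it simply picks one representative source pixel per destination pixel), which is what the patch relies on when the smask is larger than the image.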
Ruby | Ruby | remove obsolete test file | 48d05bdd97e067b534b158bdf56256b45bf59e48 | <ide><path>activerecord/test/cases/associations/habtm_join_table_test.rb
<del>require 'cases/helper'
<del>
<del>class MyReader < ActiveRecord::Base
<del> has_and_belongs_to_many :my_books
<del>end
<del>
<del>class MyBook < ActiveRecord::Base
<del> has_and_belongs_to_many :my_readers
<del>end
<del>
<del>class HabtmJoinTableTest < ActiveRecord::TestCase
<del> def test_habtm_join_table
<del> ActiveRecord::Base.connection.create_table :my_books, :force => true do |t|
<del> t.string :name
<del> end
<del> assert ActiveRecord::Base.connection.table_exists?(:my_books)
<del>
<del> ActiveRecord::Base.connection.create_table :my_readers, :force => true do |t|
<del> t.string :name
<del> end
<del> assert ActiveRecord::Base.connection.table_exists?(:my_readers)
<del>
<del> ActiveRecord::Base.connection.create_table :my_books_my_readers, :force => true do |t|
<del> t.integer :my_book_id
<del> t.integer :my_reader_id
<del> end
<del> assert ActiveRecord::Base.connection.table_exists?(:my_books_my_readers)
<del> end
<del>
<del> def teardown
<del> ActiveRecord::Base.connection.drop_table :my_books
<del> ActiveRecord::Base.connection.drop_table :my_readers
<del> ActiveRecord::Base.connection.drop_table :my_books_my_readers
<del> end
<del>end | 1 |
Javascript | Javascript | remove unnecessary fd property from socket | 3ce6bc3b5082478dfe6832997640c93de97705be | <ide><path>lib/dgram.js
<ide> function Socket(type, listener) {
<ide>
<ide> this[async_id_symbol] = handle.getAsyncId();
<ide> this.type = type;
<del> this.fd = null; // compatibility hack
<ide>
<ide> if (typeof listener === 'function')
<ide> this.on('message', listener);
<ide> function startListening(socket) {
<ide> state.handle.recvStart();
<ide> state.receiving = true;
<ide> state.bindState = BIND_STATE_BOUND;
<del> socket.fd = -42; // compatibility hack
<ide>
<ide> if (state.recvBufferSize)
<ide> bufferSize(socket, state.recvBufferSize, RECV_BUFFER);
<ide> function stopReceiving(socket) {
<ide>
<ide> state.handle.recvStop();
<ide> state.receiving = false;
<del> socket.fd = null; // compatibility hack
<ide> }
<ide>
<ide> | 1 |
Ruby | Ruby | handle users not having any github credentials | ccb6d5e834dbfe91043b324af95872840903aec7 | <ide><path>Library/Homebrew/utils/github.rb
<ide> module GitHub
<ide> ALL_SCOPES_URL = Formatter.url(
<ide> "https://github.com/settings/tokens/new?scopes=#{ALL_SCOPES.join(",")}&description=Homebrew",
<ide> ).freeze
<add> CREATE_GITHUB_PAT_MESSAGE = <<~EOS
<add> Create a GitHub personal access token:
<add> #{ALL_SCOPES_URL}
<add> #{Utils::Shell.set_variable_in_profile("HOMEBREW_GITHUB_API_TOKEN", "your_token_here")}
<add> EOS
<ide>
<ide> # Generic API error.
<ide> class Error < RuntimeError
<ide> def initialize(github_message)
<ide> end
<ide> end
<ide>
<add> # Error when the user has no GitHub API credentials set at all (macOS keychain or envvar).
<add> class MissingAuthenticationError < Error
<add> def initialize
<add> message = +"No GitHub credentials found in Keychain or environment.\n"
<add> message << CREATE_GITHUB_PAT_MESSAGE
<add> super message
<add> end
<add> end
<add>
<ide> # Error when the API returns a validation error.
<ide> class ValidationFailedError < Error
<ide> def initialize(github_message, errors)
<ide> def raise_api_error(output, errors, http_code, headers, scopes)
<ide> when "401", "403"
<ide> raise AuthenticationFailedError, message
<ide> when "404"
<add> raise MissingAuthenticationError if api_credentials_type == :none && scopes.present?
<add>
<ide> raise HTTPNotFoundError, message
<ide> when "422"
<ide> errors = json&.[]("errors") || [] | 1 |
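The new 404 branch above distinguishes "the resource genuinely does not exist" from "GitHub returned 404 because the request needed OAuth scopes and the user sent no credentials at all", and raises the more actionable error in the second case. A Go sketch of that classification logic (the names and return strings are illustrative, not Homebrew's actual error hierarchy):

```go
package main

import "fmt"

// classify mirrors the control flow added above: 401/403 always mean an
// authentication failure, 422 is a validation failure, and a 404 is
// reinterpreted as "missing authentication" only when scopes were required
// but no credentials were configured.
func classify(status int, scopesRequired, haveCredentials bool) string {
	switch status {
	case 401, 403:
		return "authentication failed"
	case 404:
		if scopesRequired && !haveCredentials {
			return "missing authentication"
		}
		return "not found"
	case 422:
		return "validation failed"
	default:
		return "generic error"
	}
}

func main() {
	fmt.Println(classify(404, true, false))  // missing authentication
	fmt.Println(classify(404, false, false)) // not found
	fmt.Println(classify(404, true, true))   // not found
}
```

The point of the special case: GitHub answers 404 (not 401) for private resources requested anonymously, so without this branch the user would see a misleading "not found" instead of being told to create a token.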
Javascript | Javascript | define functions only once | 2f1f22ab26ed5afe3e8860f22fbf486f2e1f5d7f | <ide><path>lib/module.js
<ide> var debug = Module._debug;
<ide> // -> a
<ide> // -> a.<ext>
<ide> // -> a/index.<ext>
<add>
<add>function statPath(path) {
<add> var fs = NativeModule.require('fs');
<add> try {
<add> return fs.statSync(path);
<add> } catch (ex) {}
<add> return false;
<add>}
<add>
<add>// check if the file exists and is not a directory
<add>function tryFile(requestPath) {
<add> var fs = NativeModule.require('fs');
<add> var stats = statPath(requestPath);
<add> if (stats && !stats.isDirectory()) {
<add> return fs.realpathSync(requestPath);
<add> }
<add> return false;
<add>}
<add>
<add>// given a path check a the file exists with any of the set extensions
<add>function tryExtensions(p, exts) {
<add> for (var i = 0, EL = exts.length; i < EL; i++) {
<add> var filename = tryFile(p + exts[i]);
<add>
<add> if (filename) {
<add> return filename;
<add> }
<add> }
<add> return false;
<add>}
<add>
<add>
<ide> Module._findPath = function(request, paths) {
<ide> var fs = NativeModule.require('fs');
<ide> var exts = Object.keys(Module._extensions);
<ide> Module._findPath = function(request, paths) {
<ide>
<ide> var trailingSlash = (request.slice(-1) === '/');
<ide>
<del> // check if the file exists and is not a directory
<del> function tryFile(requestPath) {
<del> try {
<del> var stats = fs.statSync(requestPath);
<del> if (stats && !stats.isDirectory()) {
<del> return fs.realpathSync(requestPath);
<del> }
<del> } catch (e) {}
<del> return false;
<del> };
<del>
<del> // given a path check a the file exists with any of the set extensions
<del> function tryExtensions(p, extension) {
<del> for (var i = 0, EL = exts.length; i < EL; i++) {
<del> var filename = tryFile(p + exts[i]);
<del>
<del> if (filename) {
<del> return filename;
<del> }
<del> }
<del> return false;
<del> };
<del>
<ide> var cacheKey = JSON.stringify({request: request, paths: paths});
<ide> if (Module._pathCache[cacheKey]) {
<ide> return Module._pathCache[cacheKey];
<ide> Module._findPath = function(request, paths) {
<ide>
<ide> if (!filename && !trailingSlash) {
<ide> // try it with each of the extensions
<del> filename = tryExtensions(basePath);
<add> filename = tryExtensions(basePath, exts);
<ide> }
<ide> }
<ide>
<ide> if (!filename) {
<ide> // try it with each of the extensions at "index"
<del> filename = tryExtensions(path.resolve(basePath, 'index'));
<add> filename = tryExtensions(path.resolve(basePath, 'index'), exts);
<ide> }
<ide>
<ide> if (filename) { | 1 |
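The refactor above hoists `tryFile` and `tryExtensions` out of `Module._findPath` so they are defined once instead of being re-created as closures on every call. The resolution idea itself — probe the exact path, then the path with each registered extension, skipping directories — translates directly; here is a Go sketch (simplified: the `realpath` step and index-file fallback are omitted):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// tryFile reports the path when it exists and is not a directory,
// mirroring the hoisted helper above.
func tryFile(p string) (string, bool) {
	st, err := os.Stat(p)
	if err != nil || st.IsDir() {
		return "", false
	}
	return p, true
}

// tryExtensions probes p with each extension in order and returns the
// first hit -- defined once, as in the diff, rather than per lookup.
func tryExtensions(p string, exts []string) (string, bool) {
	for _, ext := range exts {
		if found, ok := tryFile(p + ext); ok {
			return found, true
		}
	}
	return "", false
}

// demo resolves a module-like basename against a throwaway directory and
// returns the matched file name.
func demo() (string, bool) {
	dir, err := os.MkdirTemp("", "resolve")
	if err != nil {
		return "", false
	}
	defer os.RemoveAll(dir)
	base := filepath.Join(dir, "mod")
	if err := os.WriteFile(base+".json", []byte("{}"), 0o644); err != nil {
		return "", false
	}
	// ".js" misses, ".json" hits -- extension order decides the winner.
	found, ok := tryExtensions(base, []string{".js", ".json"})
	if !ok {
		return "", false
	}
	return filepath.Base(found), true
}

func main() {
	name, ok := demo()
	fmt.Println(ok, name) // true mod.json
}
```

Defining the helpers at package (module) scope, as the patch does, also avoids allocating two closures per `_findPath` call on a hot path.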
Java | Java | copy httpheaders to ensure serializability | 97ea8a67892a20b0deedd54e6489895c7f1fea24 | <ide><path>spring-webflux/src/main/java/org/springframework/web/reactive/function/client/WebClientRequestException.java
<ide> /*
<del> * Copyright 2002-2020 the original author or authors.
<add> * Copyright 2002-2022 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> public WebClientRequestException(Throwable ex, HttpMethod method, URI uri, HttpH
<ide>
<ide> this.method = method;
<ide> this.uri = uri;
<del> this.headers = headers;
<add> this.headers = copy(headers);
<ide> }
<ide>
<add> /**
<add> * Not all {@code HttpHeaders} implementations are serializable, so we
<add> * make a copy to ensure that {@code WebClientResponseException} is.
<add> */
<add> private static HttpHeaders copy(HttpHeaders headers) {
<add> HttpHeaders result = new HttpHeaders();
<add> result.putAll(headers);
<add> return result;
<add> }
<add>
<add>
<ide> /**
<ide> * Return the HTTP request method.
<ide> */
<ide><path>spring-webflux/src/main/java/org/springframework/web/reactive/function/client/WebClientResponseException.java
<ide> /*
<del> * Copyright 2002-2021 the original author or authors.
<add> * Copyright 2002-2022 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> public WebClientResponseException(String message, int statusCode, String statusT
<ide>
<ide> this.statusCode = statusCode;
<ide> this.statusText = statusText;
<del> this.headers = (headers != null ? headers : HttpHeaders.EMPTY);
<add> this.headers = copy(headers);
<ide> this.responseBody = (responseBody != null ? responseBody : new byte[0]);
<ide> this.responseCharset = charset;
<ide> this.request = request;
<ide> }
<ide>
<add> /**
<add> * Not all {@code HttpHeaders} implementations are serializable, so we
<add> * make a copy to ensure that {@code WebClientResponseException} is.
<add> */
<add> private static HttpHeaders copy(@Nullable HttpHeaders headers) {
<add> if (headers == null) {
<add> return HttpHeaders.EMPTY;
<add> }
<add> else {
<add> HttpHeaders result = new HttpHeaders();
<add> result.putAll(headers);
<add> return result;
<add> }
<add> }
<add>
<ide>
<ide> /**
<ide> * Return the HTTP status code value. | 2 |
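Both constructors above now snapshot the incoming `HttpHeaders` into a fresh instance, because not every `HttpHeaders` implementation is serializable and the exception must not retain a reference the caller can mutate later. The defensive-copy pattern in miniature, as a Go sketch (a plain string-to-slice map stands in for the headers type):

```go
package main

import "fmt"

// copyHeaders mirrors the defensive copy in the patch above: snapshot the
// entries (including the value slices) into a fresh map, so the stored
// headers neither alias the caller's data nor depend on its concrete type.
// A nil input yields an empty map, like the HttpHeaders.EMPTY fallback.
func copyHeaders(h map[string][]string) map[string][]string {
	out := make(map[string][]string, len(h))
	for k, v := range h {
		vv := make([]string, len(v))
		copy(vv, v)
		out[k] = vv
	}
	return out
}

func main() {
	orig := map[string][]string{"Content-Type": {"text/plain"}}
	snap := copyHeaders(orig)
	orig["Content-Type"][0] = "mutated" // later mutation of the original...
	fmt.Println(snap["Content-Type"][0]) // ...does not leak into the copy: text/plain
}
```

Copying the inner slices as well as the map is the part that is easy to forget; a shallow map copy would still alias the value lists.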
Ruby | Ruby | remove some comments about ruby 1.9 behaviors | 21f02f34851f58103fc03e852e0315afdb6f74e0 | <ide><path>actionpack/lib/action_dispatch/journey/gtg/transition_table.rb
<ide> def visualizer(paths, title = 'FSM')
<ide> svg = to_svg
<ide> javascripts = [states, fsm_js]
<ide>
<del> # Annoying hack for 1.9 warnings
<add> # Annoying hack warnings
<ide> fun_routes = fun_routes
<ide> stylesheets = stylesheets
<ide> svg = svg
<ide><path>activesupport/lib/active_support/core_ext/object/duplicable.rb
<ide> class Object
<ide> # Can you safely dup this object?
<ide> #
<del> # False for +nil+, +false+, +true+, symbol, number and BigDecimal(in 1.9.x) objects;
<add> # False for +nil+, +false+, +true+, symbol, number objects;
<ide> # true otherwise.
<ide> def duplicable?
<ide> true
<ide><path>activesupport/lib/active_support/core_ext/time/calculations.rb
<ide> def middle_of_day
<ide> alias :at_noon :middle_of_day
<ide> alias :at_middle_of_day :middle_of_day
<ide>
<del> # Returns a new Time representing the end of the day, 23:59:59.999999 (.999999999 in ruby1.9)
<add> # Returns a new Time representing the end of the day, 23:59:59.999999
<ide> def end_of_day
<ide> change(
<ide> :hour => 23,
<ide> def beginning_of_hour
<ide> end
<ide> alias :at_beginning_of_hour :beginning_of_hour
<ide>
<del> # Returns a new Time representing the end of the hour, x:59:59.999999 (.999999999 in ruby1.9)
<add> # Returns a new Time representing the end of the hour, x:59:59.999999
<ide> def end_of_hour
<ide> change(
<ide> :min => 59,
<ide> def beginning_of_minute
<ide> end
<ide> alias :at_beginning_of_minute :beginning_of_minute
<ide>
<del> # Returns a new Time representing the end of the minute, x:xx:59.999999 (.999999999 in ruby1.9)
<add> # Returns a new Time representing the end of the minute, x:xx:59.999999
<ide> def end_of_minute
<ide> change(
<ide> :sec => 59,
<ide><path>activesupport/lib/active_support/dependencies.rb
<ide> def autoloadable_module?(path_suffix)
<ide> end
<ide>
<ide> def load_once_path?(path)
<del> # to_s works around a ruby1.9 issue where String#starts_with?(Pathname)
<add> # to_s works around a ruby issue where String#starts_with?(Pathname)
<ide> # will raise a TypeError: no implicit conversion of Pathname into String
<ide> autoload_once_paths.any? { |base| path.starts_with? base.to_s }
<ide> end
<ide><path>activesupport/lib/active_support/time_with_zone.rb
<ide> def rfc2822
<ide>
<ide> # Returns a string of the object's date and time.
<ide> # Accepts an optional <tt>format</tt>:
<del> # * <tt>:default</tt> - default value, mimics Ruby 1.9 Time#to_s format.
<add> # * <tt>:default</tt> - default value, mimics Ruby Time#to_s format.
<ide> # * <tt>:db</tt> - format outputs time in UTC :db time. See Time#to_formatted_s(:db).
<ide> # * Any key in <tt>Time::DATE_FORMATS</tt> can be used. See active_support/core_ext/time/conversions.rb.
<ide> def to_s(format = :default)
<ide> def to_s(format = :default)
<ide> elsif formatter = ::Time::DATE_FORMATS[format]
<ide> formatter.respond_to?(:call) ? formatter.call(self).to_s : strftime(formatter)
<ide> else
<del> "#{time.strftime("%Y-%m-%d %H:%M:%S")} #{formatted_offset(false, 'UTC')}" # mimicking Ruby 1.9 Time#to_s format
<add> "#{time.strftime("%Y-%m-%d %H:%M:%S")} #{formatted_offset(false, 'UTC')}" # mimicking Ruby Time#to_s format
<ide> end
<ide> end
<ide> alias_method :to_formatted_s, :to_s
<ide><path>railties/lib/rails.rb
<ide> require 'active_support/railtie'
<ide> require 'action_dispatch/railtie'
<ide>
<del># For Ruby 1.9, UTF-8 is the default internal and external encoding.
<add># UTF-8 is the default internal and external encoding.
<ide> silence_warnings do
<ide> Encoding.default_external = Encoding::UTF_8
<ide> Encoding.default_internal = Encoding::UTF_8 | 6 |
Javascript | Javascript | remove unnecessary whitespace | 2036fb1e71b7da7f16f78e602d95a5cc1e3771b0 |
<ide> var getRandomPorts = function() {
<ide> var getPackage = function() {
<ide> if ( !pkg ) {
<ide>
<del> // Search up the folder hierarchy for the first package.json
<add> // Search up the folder hierarchy for the first package.json
<ide> var packageFolder = path.resolve('.');
<ide> while ( !fs.existsSync(path.join(packageFolder, 'package.json')) ) {
<ide> var parent = path.dirname(packageFolder);
<ide> if ( parent === packageFolder) { break; }
<ide> packageFolder = parent;
<ide> }
<ide> pkg = JSON.parse(fs.readFileSync(path.join(packageFolder,'package.json'), 'UTF-8'));
<del>
<add>
<ide> }
<ide>
<ide> return pkg;
<ide> module.exports = {
<ide> },
<ide>
<ide>
<del> updateWebdriver: function(done){
<add> updateWebdriver: function(done){
<ide> if (process.env.TRAVIS) {
<ide> // Skip the webdriver-manager update on Travis, since the browsers will
<ide> // be provided remotely. | 1 |
Ruby | Ruby | accumulate inherited options | b2c9625d780277f021c63e21cac4a7c954170784 | <ide><path>Library/Homebrew/formula_installer.rb
<ide> def expand_requirements
<ide> end
<ide>
<ide> def expand_dependencies(deps)
<del> inherited_options = {}
<add> inherited_options = Hash.new { |hash, key| hash[key] = Options.new }
<ide>
<ide> expanded_deps = Dependency.expand(formula, deps) do |dependent, dep|
<del> options = inherited_options[dep.name] = inherited_options_for(dep)
<add> inherited_options[dep.name] |= inherited_options_for(dep)
<ide> build = effective_build_options_for(
<ide> dependent,
<ide> inherited_options.fetch(dependent.name, [])
<ide> def expand_dependencies(deps)
<ide> Dependency.prune
<ide> elsif dep.build? && install_bottle_for?(dependent, build)
<ide> Dependency.prune
<del> elsif dep.satisfied?(options)
<add> elsif dep.satisfied?(inherited_options[dep.name])
<ide> Dependency.skip
<ide> end
<ide> end | 1 |
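The one-character change above (`=` to `|=`, plus the auto-vivifying `Hash.new` default) fixes a real bug: when the same dependency is reached through two different dependents, plain assignment discards the options collected on the first visit. A Go sketch of the overwrite-vs-accumulate distinction (types are illustrative, not Homebrew's):

```go
package main

import "fmt"

// Options is a set of build options keyed by flag name.
type Options map[string]bool

// union folds src into dst, the Go analogue of Ruby's
// `inherited_options[dep.name] |= ...` in the patch above. Allocating on
// nil plays the role of the Hash.new { Options.new } default value.
func union(dst, src Options) Options {
	if dst == nil {
		dst = Options{}
	}
	for k := range src {
		dst[k] = true
	}
	return dst
}

func main() {
	inherited := map[string]Options{}

	// The same dependency visited via two different dependents:
	inherited["openssl"] = union(inherited["openssl"], Options{"--universal": true})
	inherited["openssl"] = union(inherited["openssl"], Options{"--with-docs": true})

	// With plain assignment the first option would have been lost;
	// with the union both survive.
	fmt.Println(len(inherited["openssl"])) // 2
}
```

This is why the dependency-satisfaction check in the diff now reads from `inherited_options[dep.name]` rather than from a locally captured `options` variable: the accumulated set, not the last-written one, is what must be satisfied.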
Go | Go | use mount pkg | b890c20555596911d203befaf0e30efece6371d7 | <ide><path>pkg/archive/archive_linux.go
<ide> import (
<ide> "syscall"
<ide>
<ide> "github.com/containerd/continuity/fs"
<add> "github.com/docker/docker/pkg/mount"
<ide> "github.com/docker/docker/pkg/system"
<ide> "github.com/pkg/errors"
<ide> "golang.org/x/sys/unix"
<ide> func mknodChar0Overlay(cleansedOriginalPath string) error {
<ide> return errors.Wrapf(err, "failed to create a dummy lower file %s", lowerDummy)
<ide> }
<ide> mOpts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lower, upper, work)
<del> // docker/pkg/mount.Mount() requires procfs to be mounted. So we use syscall.Mount() directly instead.
<del> if err := syscall.Mount("overlay", merged, "overlay", uintptr(0), mOpts); err != nil {
<del> return errors.Wrapf(err, "failed to mount overlay (%s) on %s", mOpts, merged)
<add> if err := mount.Mount("overlay", merged, "overlay", mOpts); err != nil {
<add> return err
<ide> }
<ide> mergedDummy := filepath.Join(merged, dummyBase)
<ide> if err := os.Remove(mergedDummy); err != nil {
<ide> func createDirWithOverlayOpaque(tmp string) (string, error) {
<ide> return "", errors.Wrapf(err, "failed to create a dummy lower directory %s", lowerDummy)
<ide> }
<ide> mOpts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lower, upper, work)
<del> // docker/pkg/mount.Mount() requires procfs to be mounted. So we use syscall.Mount() directly instead.
<del> if err := syscall.Mount("overlay", merged, "overlay", uintptr(0), mOpts); err != nil {
<del> return "", errors.Wrapf(err, "failed to mount overlay (%s) on %s", mOpts, merged)
<add> if err := mount.Mount("overlay", merged, "overlay", mOpts); err != nil {
<add> return "", err
<ide> }
<ide> mergedDummy := filepath.Join(merged, dummyBase)
<ide> if err := os.Remove(mergedDummy); err != nil {
<ide><path>pkg/archive/archive_linux_test.go
<ide> import (
<ide> "syscall"
<ide> "testing"
<ide>
<add> "github.com/docker/docker/pkg/mount"
<ide> "github.com/docker/docker/pkg/reexec"
<ide> "github.com/docker/docker/pkg/system"
<ide> rsystem "github.com/opencontainers/runc/libcontainer/system"
<ide> func supportsOverlay(dir string) error {
<ide> }
<ide> }
<ide> mOpts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lower, upper, work)
<del> if err := syscall.Mount("overlay", merged, "overlay", uintptr(0), mOpts); err != nil {
<del> return errors.Wrapf(err, "failed to mount overlay (%s) on %s", mOpts, merged)
<add> if err := mount.Mount("overlay", merged, "overlay", mOpts); err != nil {
<add> return err
<ide> }
<del> if err := syscall.Unmount(merged, 0); err != nil {
<del> return errors.Wrapf(err, "failed to unmount %s", merged)
<add> if err := mount.Unmount(merged); err != nil {
<add> return err
<ide> }
<ide> return nil
<ide> }
<ide><path>pkg/mount/mount.go
<ide> func Mount(device, target, mType, options string) error {
<ide> return mount(device, target, mType, uintptr(flag), data)
<ide> }
<ide>
<del>// Mount will mount filesystem according to the specified configuration.
<add>// ForceMount will mount filesystem according to the specified configuration.
<ide> // Options must be specified like the mount or fstab unix commands:
<ide> // "opt1=val1,opt2=val2". See flags.go for supported option flags.
<ide> // | 3 |
Go | Go | fix a syntax error in comments | 3fc9a9ccb8ec8783bfcc02b7e4dfd7ee6468fa86 | <ide><path>pkg/mflag/flag.go
<ide> // Define flags using flag.String(), Bool(), Int(), etc.
<ide> //
<ide> // This declares an integer flag, -f or --flagname, stored in the pointer ip, with type *int.
<del>// import "flag /github.com/docker/docker/pkg/mflag"
<add>// import flag "github.com/docker/docker/pkg/mflag"
<ide> // var ip = flag.Int([]string{"f", "-flagname"}, 1234, "help message for flagname")
<ide> // If you like, you can bind the flag to a variable using the Var() functions.
<ide> // var flagvar int | 1 |
Ruby | Ruby | use public_send on the scope parameters | 05609f472912cc841d99e3b0adb18c4f3d0eb9ae |
<ide> def _compile_source(filter)
<ide>
<ide> _normalize_legacy_filter(kind, filter)
<ide> scopes = Array(chain.config[:scope])
<del> method_to_call = scopes.map{ |s| send(s) }.join("_")
<add> method_to_call = scopes.map{ |s| public_send(s) }.join("_")
<ide>
<ide> @klass.class_eval <<-RUBY_EVAL, __FILE__, __LINE__ + 1
<ide> def #{method_name}(&blk) | 1 |
Java | Java | support multiple validators in databinder | 54c873b4c430a6c13698080fcde99835ca2a541b | <ide><path>spring-context/src/main/java/org/springframework/validation/DataBinder.java
<ide>
<ide> import java.beans.PropertyEditor;
<ide> import java.lang.reflect.Field;
<add>import java.util.ArrayList;
<add>import java.util.Arrays;
<add>import java.util.Collections;
<ide> import java.util.HashMap;
<add>import java.util.List;
<ide> import java.util.Map;
<ide>
<ide> import org.apache.commons.logging.Log;
<ide> public class DataBinder implements PropertyEditorRegistry, TypeConverter {
<ide>
<ide> private BindingErrorProcessor bindingErrorProcessor = new DefaultBindingErrorProcessor();
<ide>
<del> private Validator validator;
<add> private final List<Validator> validators = new ArrayList<Validator>();
<ide>
<ide> private ConversionService conversionService;
<ide>
<ide> public BindingErrorProcessor getBindingErrorProcessor() {
<ide>
<ide> /**
<ide> * Set the Validator to apply after each binding step.
<add> * @see #addValidators(Validator...)
<add> * @see #replaceValidators(Validator...)
<ide> */
<ide> public void setValidator(Validator validator) {
<del> if (validator != null && (getTarget() != null && !validator.supports(getTarget().getClass()))) {
<del> throw new IllegalStateException("Invalid target for Validator [" + validator + "]: " + getTarget());
<add> assertValidators(validator);
<add> this.validators.clear();
<add> this.validators.add(validator);
<add> }
<add>
<add> private void assertValidators(Validator... validators) {
<add> Assert.notNull(validators, "Validators required");
<add> for (Validator validator : validators) {
<add> if (validator != null && (getTarget() != null && !validator.supports(getTarget().getClass()))) {
<add> throw new IllegalStateException("Invalid target for Validator [" + validator + "]: " + getTarget());
<add> }
<ide> }
<del> this.validator = validator;
<ide> }
<ide>
<ide> /**
<del> * Return the Validator to apply after each binding step, if any.
<add> * Add Validators to apply after each binding step.
<add> * @see #setValidator(Validator)
<add> * @see #replaceValidators(Validator...)
<add> */
<add> public void addValidators(Validator... validators) {
<add> assertValidators(validators);
<add> this.validators.addAll(Arrays.asList(validators));
<add> }
<add>
<add> /**
<add> * Replace the Validators to apply after each binding step.
<add> * @see #setValidator(Validator)
<add> * @see #addValidators(Validator...)
<add> */
<add> public void replaceValidators(Validator... validators) {
<add> assertValidators(validators);
<add> this.validators.clear();
<add> this.validators.addAll(Arrays.asList(validators));
<add> }
<add>
<add> /**
<add> * Return the primary Validator to apply after each binding step, if any.
<ide> */
<ide> public Validator getValidator() {
<del> return this.validator;
<add> return this.validators.size() > 0 ? this.validators.get(0) : null;
<ide> }
<ide>
<add> /**
<add> * Return the Validators to apply after data binding.
<add> */
<add> public List<Validator> getValidators() {
<add> return Collections.unmodifiableList(this.validators);
<add> }
<ide>
<ide> //---------------------------------------------------------------------
<ide> // Implementation of PropertyEditorRegistry/TypeConverter interface
<ide> protected void applyPropertyValues(MutablePropertyValues mpvs) {
<ide>
<ide>
<ide> /**
<del> * Invoke the specified Validator, if any.
<add> * Invoke the specified Validators, if any.
<ide> * @see #setValidator(Validator)
<ide> * @see #getBindingResult()
<ide> */
<ide> public void validate() {
<del> this.validator.validate(getTarget(), getBindingResult());
<add> for (Validator validator : this.validators) {
<add> validator.validate(getTarget(), getBindingResult());
<add> }
<ide> }
<ide>
<ide> /**
<del> * Invoke the specified Validator, if any, with the given validation hints.
<add> * Invoke the specified Validators, if any, with the given validation hints.
<ide> * <p>Note: Validation hints may get ignored by the actual target Validator.
<ide> * @param validationHints one or more hint objects to be passed to a {@link SmartValidator}
<ide> * @see #setValidator(Validator)
<ide> * @see SmartValidator#validate(Object, Errors, Object...)
<ide> */
<ide> public void validate(Object... validationHints) {
<del> Validator validator = getValidator();
<del> if (!ObjectUtils.isEmpty(validationHints) && validator instanceof SmartValidator) {
<del> ((SmartValidator) validator).validate(getTarget(), getBindingResult(), validationHints);
<del> }
<del> else if (validator != null) {
<del> validator.validate(getTarget(), getBindingResult());
<add> for (Validator validator : getValidators()) {
<add> if (!ObjectUtils.isEmpty(validationHints) && validator instanceof SmartValidator) {
<add> ((SmartValidator) validator).validate(getTarget(), getBindingResult(), validationHints);
<add> }
<add> else if (validator != null) {
<add> validator.validate(getTarget(), getBindingResult());
<add> }
<ide> }
<ide> }
<ide> | 1 |
Javascript | Javascript | fix code comment | 4e380be3ec3b9645b2cafb6152d420dceddf41d2 | <ide><path>packages/ember-runtime/lib/mixins/mutable_array.js
<ide> Ember.MutableArray = Ember.Mixin.create(Ember.Array, Ember.MutableEnumerable,/**
<ide> method. You can pass either a single index, or a start and a length.
<ide>
<ide> If you pass a start and length that is beyond the
<del> length this method will throw an `Ember.OUT_OF_RANGE_EXCEPTION`
<add> length this method will throw an `OUT_OF_RANGE_EXCEPTION`
<ide>
<ide> ```javascript
<ide> var colors = ["red", "green", "blue", "yellow", "orange"]; | 1 |
Text | Text | note retention period | 6b3ee9b8fd813c7b1479e629348cb1b096a89819 | <ide><path>docs/Analytics.md
<ide> Homebrew is provided free of charge and run entirely by volunteers in their spar
<ide> - If a formula is widely used and is failing often it will enable us to prioritise fixing that formula over others.
<ide> - Collecting the OS version allows us to decide what versions of macOS to prioritise and support and identify build failures that occur only on single versions.
<ide>
<add>## How Long?
<add>Homebrew's anonymous user and event data have a 14 month retention period. This is the [lowest possible value for Google Analytics](https://support.google.com/analytics/answer/7667196).
<add>
<ide> ## What?
<ide> Homebrew's analytics record some shared information for every event:
<ide> | 1 |
Text | Text | add use cases for api routes to documentation. | 0a35b578d4ab482da3b199c501ce192783a7f2a1 | <ide><path>docs/api-routes/introduction.md
<ide> description: Next.js supports API Routes, which allow you to build your API with
<ide> </ul>
<ide> </details>
<ide>
<del>API routes provide a straightforward solution to build your **API** with Next.js.
<add>API routes provide a solution to build your **API** with Next.js.
<ide>
<ide> Any file inside the folder `pages/api` is mapped to `/api/*` and will be treated as an API endpoint instead of a `page`. They are server-side only bundles and won't increase your client-side bundle size.
<ide>
<ide> export default function handler(req, res) {
<ide>
<ide> To fetch API endpoints, take a look into any of the examples at the start of this section.
<ide>
<add>## Use Cases
<add>
<add>For new projects, you can build your entire API with API Routes. If you have an existing API, you do not need to forward calls to the API through an API Route. Some other use cases for API Routes are:
<add>
<add>- Masking the URL of an external service (e.g. `/api/secret` instead of `https://company.com/secret-url`)
<add>- Using [Environment Variables](/docs/basic-features/environment-variables.md) on the server to securely access external services.
<add>
<ide> ## Caveats
<ide>
<ide> - API Routes [do not specify CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS), meaning they are **same-origin only** by default. You can customize such behavior by wrapping the request handler with the [cors middleware](/docs/api-routes/api-middlewares.md#connectexpress-middleware-support). | 1 |
PHP | PHP | add support for using keys in collection filter | d56e23f0838644c10ebb9ad3fa31412891a2886e | <ide><path>src/Illuminate/Support/Collection.php
<ide> public function except($keys)
<ide> public function filter(callable $callback = null)
<ide> {
<ide> if ($callback) {
<del> return new static(array_filter($this->items, $callback));
<add> $return = [];
<add>
<add> foreach ($this->items as $key => $value) {
<add> if ($callback($value, $key)) {
<add> $return[$key] = $value;
<add> }
<add> }
<add>
<add> return new static($return);
<ide> }
<ide>
<ide> return new static(array_filter($this->items));
<ide><path>tests/Support/SupportCollectionTest.php
<ide> public function testFilter()
<ide>
<ide> $c = new Collection(['', 'Hello', '', 'World']);
<ide> $this->assertEquals(['Hello', 'World'], $c->filter()->values()->toArray());
<add>
<add> $c = new Collection(['id' => 1, 'first' => 'Hello', 'second' => 'World']);
<add> $this->assertEquals(['first' => 'Hello', 'second' => 'World'], $c->filter(function ($item, $key) {
<add> return $key != 'id';
<add> })->all());
<ide> }
<ide>
<ide> public function testWhere() | 2 |
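The change above lets the filter callback see the key as well as the value, so entries can be kept or dropped by key. A rough Go analogue over a map sketches the same pattern (the function name is illustrative, not Laravel's API):

```go
package main

import "fmt"

// filterMap keeps entries for which pred(key, value) returns true,
// mirroring Collection::filter passing ($value, $key) to its callback.
func filterMap(items map[string]string, pred func(k, v string) bool) map[string]string {
	out := map[string]string{}
	for k, v := range items {
		if pred(k, v) {
			out[k] = v
		}
	}
	return out
}

func main() {
	items := map[string]string{"id": "1", "first": "Hello", "second": "World"}
	// Drop the "id" entry by key, keeping the rest with their keys intact.
	kept := filterMap(items, func(k, v string) bool { return k != "id" })
	fmt.Println(len(kept)) // 2
}
```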
Ruby | Ruby | fix custome serializer setting | 3cc93de6ad58ee306b906a3d979b46276900711a | <ide><path>activejob/lib/active_job/railtie.rb
<ide> class Railtie < Rails::Railtie # :nodoc:
<ide> end
<ide>
<ide> initializer "active_job.custom_serializers" do |app|
<del> custom_serializers = app.config.active_job.delete(:custom_serializers)
<del> ActiveJob::Serializers.add_serializers custom_serializers
<add> config.after_initialize do
<add> custom_serializers = app.config.active_job.delete(:custom_serializers)
<add> ActiveJob::Serializers.add_serializers custom_serializers
<add> end
<ide> end
<ide>
<ide> initializer "active_job.set_configs" do |app|
<ide> options = app.config.active_job
<ide> options.queue_adapter ||= :async
<ide>
<ide> ActiveSupport.on_load(:active_job) do
<del> options.each { |k, v| send("#{k}=", v) }
<add> options.each do |k, v|
<add> k = "#{k}="
<add> send(k, v) if respond_to? k
<add> end
<ide> end
<ide> end
<ide>
<ide><path>activejob/lib/active_job/serializers.rb
<ide> def serializers
<ide>
<ide> # Adds a new serializer to a list of known serializers
<ide> def add_serializers(*new_serializers)
<del> self._additional_serializers += new_serializers
<add> self._additional_serializers += new_serializers.flatten
<ide> end
<ide> end
<ide>
<ide><path>railties/test/application/configuration_test.rb
<ide> def index
<ide> assert_equal Digest::SHA1, ActiveSupport::Digest.hash_digest_class
<ide> end
<ide>
<add> test "custom serializers should be able to set via config.active_job.custom_serializers in an initializer" do
<add> class ::DummySerializer < ActiveJob::Serializers::ObjectSerializer; end
<add>
<add> app_file "config/initializers/custom_serializers.rb", <<-RUBY
<add> Rails.application.config.active_job.custom_serializers << DummySerializer
<add> RUBY
<add>
<add> app "development"
<add>
<add> assert_includes ActiveJob::Serializers.serializers, DummySerializer
<add> end
<add>
<ide> private
<ide> def force_lazy_load_hooks
<ide> yield # Tasty clarifying sugar, homie! We only need to reference a constant to load it. | 3 |
PHP | PHP | declare unknown variable | a1d75bb8404ae6ccbb6242378418c77f0608469d | <ide><path>src/Routing/RouteCollection.php
<ide> public function getMatchingMiddleware($needle)
<ide>
<ide> if (preg_match($pattern, $needle)) {
<ide> $matching = array_merge($matching, $middleware);
<add> $resolved = [];
<add>
<ide> foreach ($matching as $name) {
<ide> $resolved[] = $this->_middleware[$name];
<ide> } | 1 |
Go | Go | fix conversion of restart-policy from grpc | bc32fcabebb5f3a83d47c00d85317ce82c963edf | <ide><path>daemon/cluster/convert/service.go
<ide> func restartPolicyFromGRPC(p *swarmapi.RestartPolicy) *types.RestartPolicy {
<ide> var rp *types.RestartPolicy
<ide> if p != nil {
<ide> rp = &types.RestartPolicy{}
<del> rp.Condition = types.RestartPolicyCondition(strings.ToLower(p.Condition.String()))
<add>
<add> switch p.Condition {
<add> case swarmapi.RestartOnNone:
<add> rp.Condition = types.RestartPolicyConditionNone
<add> case swarmapi.RestartOnFailure:
<add> rp.Condition = types.RestartPolicyConditionOnFailure
<add> case swarmapi.RestartOnAny:
<add> rp.Condition = types.RestartPolicyConditionAny
<add> default:
<add> rp.Condition = types.RestartPolicyConditionAny
<add> }
<add>
<ide> if p.Delay != nil {
<ide> delay, _ := ptypes.Duration(p.Delay)
<ide> rp.Delay = &delay
<ide> func restartPolicyToGRPC(p *types.RestartPolicy) (*swarmapi.RestartPolicy, error
<ide> var rp *swarmapi.RestartPolicy
<ide> if p != nil {
<ide> rp = &swarmapi.RestartPolicy{}
<del> sanatizedCondition := strings.ToUpper(strings.Replace(string(p.Condition), "-", "_", -1))
<del> if condition, ok := swarmapi.RestartPolicy_RestartCondition_value[sanatizedCondition]; ok {
<del> rp.Condition = swarmapi.RestartPolicy_RestartCondition(condition)
<del> } else if string(p.Condition) == "" {
<add>
<add> switch p.Condition {
<add> case types.RestartPolicyConditionNone:
<add> rp.Condition = swarmapi.RestartOnNone
<add> case types.RestartPolicyConditionOnFailure:
<add> rp.Condition = swarmapi.RestartOnFailure
<add> case types.RestartPolicyConditionAny:
<add> rp.Condition = swarmapi.RestartOnAny
<add> default:
<add> if string(p.Condition) != "" {
<add> return nil, fmt.Errorf("invalid RestartCondition: %q", p.Condition)
<add> }
<ide> rp.Condition = swarmapi.RestartOnAny
<del> } else {
<del> return nil, fmt.Errorf("invalid RestartCondition: %q", p.Condition)
<ide> }
<ide>
<ide> if p.Delay != nil { | 1 |
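The fix above replaces string mangling (`ToUpper`/`Replace` against generated enum names) with an explicit switch in each direction, so unknown inputs are rejected and only the empty string falls back to a default. A standalone sketch of the same shape, with the types simplified (this is not the SwarmKit API):

```go
package main

import "fmt"

type restartCondition int

const (
	restartOnNone restartCondition = iota
	restartOnFailure
	restartOnAny
)

// fromString maps user-facing strings to the enum explicitly:
// every accepted value is spelled out, the empty string defaults
// to "any", and anything else is an error instead of a silent guess.
func fromString(s string) (restartCondition, error) {
	switch s {
	case "none":
		return restartOnNone, nil
	case "on-failure":
		return restartOnFailure, nil
	case "any", "":
		return restartOnAny, nil
	default:
		return 0, fmt.Errorf("invalid RestartCondition: %q", s)
	}
}

func main() {
	c, err := fromString("on-failure")
	fmt.Println(c == restartOnFailure, err) // true <nil>
	_, err = fromString("bogus")
	fmt.Println(err != nil) // true
}
```

The advantage over the deleted string-transformation approach is that the mapping no longer depends on how the generated enum constants happen to be named, so regenerating the protobuf code cannot silently change conversion behavior.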
PHP | PHP | replace a `preg_split` call with `explode` | cab42a6f873253f2b0b41b67250b79f87d8f930f | <ide><path>src/Illuminate/Testing/PendingCommand.php
<ide> private function applyTableOutputExpectations($mock)
<ide> $table->render();
<ide>
<ide> $lines = array_filter(
<del> preg_split("/\n/", $output->fetch())
<add> explode("\n", $output->fetch())
<ide> );
<ide>
<ide> foreach ($lines as $line) { | 1 |
Text | Text | fix some translate error | c47a7f0bf1f331a7308cc387b1dfcdb3fe33abc9 | <ide><path>guide/chinese/javascript/await-promises/index.md
<ide> ---
<ide> title: Await Promises
<del>localeTitle: 等待承诺
<add>localeTitle: Await Promise
<ide> ---
<del>## 等待承诺
<add>## Await Promise
<ide>
<del>`async` / `await` [运算符](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators)可以更轻松地实现许多异步Promise。它们还允许工程师编写更清晰,更简洁,可测试的代码。
<add>`async` / `await` [关键字](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators)可以更轻松地实现许多异步Promise。它们能帮助工程师编写更清晰,更简洁,可测试的代码。
<ide>
<del>要理解这个主题,您应该对[Promise](https://guide.freecodecamp.org/javascript/promises)如何工作有充分的了解。
<add>要理解这个主题,您需要对[Promise](https://guide.freecodecamp.org/javascript/promises)的工作机制有充分的了解。
<ide>
<ide> * * *
<ide>
<ide> ## 基本语法
<ide>
<del>\`\`\`\`javascript function slowlyResolvedPromiseFunc(string){ 返回新的Promise(resolve => { setTimeout(()=> { 解析(字符串); },5000); }); }
<del>
<del>异步函数doIt(){ const myPromise = await slowResolvedPromiseFunc(“foo”); 的console.log(myPromise); //“foo” }
<add>```javascript
<add>function slowlyResolvedPromiseFunc (string) {
<add> return new Promise(resolve => {
<add>    setTimeout(() => { resolve(string); }, 5000);
<add>  });
<add>}
<ide>
<del>doIt方法();
<add>async function doIt () {
<add>  const myPromise = await slowlyResolvedPromiseFunc('foo');
<add> console.log(myPromise); // 'foo'
<add> }
<add>
<add>doIt();
<ide> ```
<del>There are a few things to note:
<add>有几点需要注意:
<ide>
<del> * The function that encompasses the `await` declaration must include the `async` operator. This will tell the JS interpreter that it must wait until the Promise is resolved or rejected.
<del> * The `await` operator must be inline, during the const declaration.
<del> * This works for `reject` as well as `resolve`.
<add> * 包含`await`关键字的函数在定义时必须有`async`关键字修饰. 它会阻塞javascript进程,直到Promise执行了resolve或者reject。
<add> * `await`关键字必须和声明的变量在同一行。
<add> * 对`reject`和`resolve`效果相同。
<ide>
<ide> ---
<ide>
<del> ## Nested Promises vs. `Async` / `Await`
<add> ## 嵌套 Promises vs. `Async` / `Await`
<ide>
<del> Implementing a single Promise is pretty straightforward. In contrast, Chained Promises or the creation of a dependency pattern may produce "spaghetti code".
<add> 实现一个Promise很简单。 然而,链式的Promise或有依赖的模式会导致“意大利面条”式的代码。
<ide>
<ide> The following examples assume that the <a href='https://github.com/request/request-promise' target='_blank' rel='nofollow'>`request-promise`</a> library is available as `rp`.
<ide>
<ide> errorExample(); \`\`\`
<ide> #### 更多信息:
<ide>
<ide> * `await`运营商[MDN文档](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await)
<del>* `async`功能操作员[MDN文档](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function)
<ide>\ No newline at end of file
<add>* `async`功能操作员[MDN文档](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/async_function) | 1 |
Python | Python | remove some now unnecessary fixups to lib/npyio | 5657366762f802d1119de6be6b19893daca9252d | <ide><path>tools/py3tool.py
<ide> def custom_mangling(filename):
<ide> f.write(text)
<ide> f.close()
<ide>
<del> if filename.endswith(os.path.join('lib', 'npyio.py')):
<del> f = open(filename, 'r')
<del> text = f.read()
<del> f.close()
<del> text = text.replace('from . import io', 'import io')
<del> f = open(filename, 'w')
<del> f.write(text)
<del> f.close()
<del>
<ide> def walk_sync(dir1, dir2, _seen=None):
<ide> if _seen is None:
<ide> seen = {} | 1 |
PHP | PHP | fix ternary expr and add a space between lines | 3b862b0693a4ce3e13ecd73c65a9fe81e8c752f9 | <ide><path>src/Illuminate/Support/Str.php
<ide> public static function slug($title, $separator = '-')
<ide> $title = preg_replace('![^'.preg_quote($separator).'\pL\pN\s]+!u', '', mb_strtolower($title));
<ide>
<ide> // Convert all dashes/undescores into separator
<del> $flip = ($separator == '-' ? '_' : '-');
<add> $flip = $separator == '-' ? '_' : '-';
<add>
<ide> $title = preg_replace('!['.preg_quote($flip).']+!u', $separator, $title);
<ide>
<ide> // Replace all separator characters and whitespace by a single separator | 1 |
Text | Text | resume a stream after pipe() and unpipe() | 0b432e08b167ef7b3304e3cbfa5bfcfb862d185c | <ide><path>doc/api/stream.md
<ide> possible states:
<ide> * `readable._readableState.flowing = true`
<ide>
<ide> When `readable._readableState.flowing` is `null`, no mechanism for consuming the
<del>streams data is provided so the stream will not generate its data.
<del>
<del>Attaching a listener for the `'data'` event, calling the `readable.pipe()`
<add>streams data is provided so the stream will not generate its data. While in this
<add>state, attaching a listener for the `'data'` event, calling the `readable.pipe()`
<ide> method, or calling the `readable.resume()` method will switch
<ide> `readable._readableState.flowing` to `true`, causing the Readable to begin
<ide> actively emitting events as data is generated.
<ide>
<ide> Calling `readable.pause()`, `readable.unpipe()`, or receiving "back pressure"
<ide> will cause the `readable._readableState.flowing` to be set as `false`,
<ide> temporarily halting the flowing of events but *not* halting the generation of
<del>data.
<add>data. While in this state, attaching a listener for the `'data'` event
<add>would not cause `readable._readableState.flowing` to switch to `true`.
<add>
<add>```js
<add>const { PassThrough, Writable } = require('stream');
<add>const pass = new PassThrough();
<add>const writable = new Writable();
<add>
<add>pass.pipe(writable);
<add>pass.unpipe(writable);
<add>// flowing is now false
<add>
<add>pass.on('data', (chunk) => { console.log(chunk.toString()); });
<add>pass.write('ok'); // will not emit 'data'
<add>pass.resume(); // must be called to make 'data' being emitted
<add>```
<ide>
<ide> While `readable._readableState.flowing` is `false`, data may be accumulating
<ide> within the streams internal buffer. | 1 |
Go | Go | add buffer to prevent goroutine leak | c322af8019dda164bf5af974bf446c4905674e19 | <ide><path>integration-cli/docker_api_attach_test.go
<ide> func (s *DockerSuite) TestGetContainersAttachWebsocket(c *testing.T) {
<ide> expected := []byte("hello")
<ide> actual := make([]byte, len(expected))
<ide>
<del> outChan := make(chan error)
<add> outChan := make(chan error, 1)
<ide> go func() {
<ide> _, err := io.ReadFull(ws, actual)
<ide> outChan <- err
<ide> close(outChan)
<ide> }()
<ide>
<del> inChan := make(chan error)
<add> inChan := make(chan error, 1)
<ide> go func() {
<ide> _, err := ws.Write(expected)
<ide> inChan <- err
<ide> func bodyIsWritable(r *http.Response) bool {
<ide>
<ide> // readTimeout read from io.Reader with timeout
<ide> func readTimeout(r io.Reader, buf []byte, timeout time.Duration) (n int, err error) {
<del> ch := make(chan bool)
<add> ch := make(chan bool, 1)
<ide> go func() {
<ide> n, err = io.ReadFull(r, buf)
<ide> ch <- true
<ide><path>integration-cli/docker_api_containers_test.go
<ide> func (s *DockerSuite) TestGetStoppedContainerStats(c *testing.T) {
<ide> name := "statscontainer"
<ide> dockerCmd(c, "create", "--name", name, "busybox", "ps")
<ide>
<del> chResp := make(chan error)
<add> chResp := make(chan error, 1)
<ide>
<ide> // We expect an immediate response, but if it's not immediate, the test would hang, so put it in a goroutine
<ide> // below we'll check this on a timeout.
<ide><path>integration-cli/docker_api_logs_test.go
<ide> func (s *DockerSuite) TestLogsAPIWithStdout(c *testing.T) {
<ide> err error
<ide> }
<ide>
<del> chLog := make(chan logOut)
<add> chLog := make(chan logOut, 1)
<ide> res, body, err := request.Get(fmt.Sprintf("/containers/%s/logs?follow=1&stdout=1×tamps=1", id))
<ide> assert.NilError(c, err)
<ide> assert.Equal(c, res.StatusCode, http.StatusOK)
<ide> func (s *DockerSuite) TestLogsAPIUntilFutureFollow(c *testing.T) {
<ide> }
<ide>
<ide> chLog := make(chan logOut)
<add> stop := make(chan struct{})
<add> defer close(stop)
<ide>
<ide> go func() {
<ide> bufReader := bufio.NewReader(reader)
<ide> func (s *DockerSuite) TestLogsAPIUntilFutureFollow(c *testing.T) {
<ide> if err == io.EOF {
<ide> return
<ide> }
<del> chLog <- logOut{"", err}
<add> select {
<add> case <-stop:
<add> return
<add> case chLog <- logOut{"", err}:
<add> }
<add>
<ide> return
<ide> }
<ide>
<del> chLog <- logOut{strings.TrimSpace(string(out)), err}
<add> select {
<add> case <-stop:
<add> return
<add> case chLog <- logOut{strings.TrimSpace(string(out)), err}:
<add> }
<ide> }
<ide> }()
<ide>
<ide><path>integration-cli/docker_cli_attach_test.go
<ide> func (s *DockerSuite) TestAttachTTYWithoutStdin(c *testing.T) {
<ide> id := strings.TrimSpace(out)
<ide> assert.NilError(c, waitRun(id))
<ide>
<del> done := make(chan error)
<add> done := make(chan error, 1)
<ide> go func() {
<ide> defer close(done)
<ide>
<ide><path>integration-cli/docker_cli_attach_unix_test.go
<ide> func (s *DockerSuite) TestAttachClosedOnContainerStop(c *testing.T) {
<ide> err = attachCmd.Start()
<ide> assert.NilError(c, err)
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> time.Sleep(300 * time.Millisecond)
<ide> defer close(errChan)
<ide> func (s *DockerSuite) TestAttachAfterDetach(c *testing.T) {
<ide> cmd.Stdout = tty
<ide> cmd.Stderr = tty
<ide>
<del> cmdExit := make(chan error)
<add> cmdExit := make(chan error, 1)
<ide> go func() {
<ide> cmdExit <- cmd.Run()
<ide> close(cmdExit)
<ide><path>integration-cli/docker_cli_build_test.go
<ide> func (s *DockerSuite) TestBuildAddSingleFileToWorkdir(c *testing.T) {
<ide> }))
<ide> defer ctx.Close()
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> errChan <- buildImage(name, build.WithExternalBuildContext(ctx)).Error
<ide> close(errChan)
<ide> COPY test_file .`),
<ide> }))
<ide> defer ctx.Close()
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> errChan <- buildImage(name, build.WithExternalBuildContext(ctx)).Error
<ide> close(errChan)
<ide><path>integration-cli/docker_cli_daemon_test.go
<ide> func (s *DockerDaemonSuite) TestDaemonRestartKillWait(c *testing.T) {
<ide>
<ide> s.d.Restart(c)
<ide>
<del> errchan := make(chan error)
<add> errchan := make(chan error, 1)
<ide> go func() {
<ide> if out, err := s.d.Cmd("wait", containerID); err != nil {
<ide> errchan <- fmt.Errorf("%v:\n%s", err, out)
<ide> func (s *DockerDaemonSuite) TestDaemonRestartWithPausedContainer(c *testing.T) {
<ide> }
<ide> s.d.Restart(c)
<ide>
<del> errchan := make(chan error)
<add> errchan := make(chan error, 1)
<ide> go func() {
<ide> out, err := s.d.Cmd("start", "test")
<ide> if err != nil {
<ide> errchan <- fmt.Errorf("%v:\n%s", err, out)
<add> return
<ide> }
<ide> name := strings.TrimSpace(out)
<ide> if name != "test" {
<ide> errchan <- fmt.Errorf("Paused container start error on docker daemon restart, expected 'test' but got '%s'", name)
<add> return
<ide> }
<ide> close(errchan)
<ide> }()
<ide><path>integration-cli/docker_cli_events_unix_test.go
<ide> func (s *DockerSuite) TestEventsRedirectStdout(c *testing.T) {
<ide> func (s *DockerSuite) TestEventsOOMDisableFalse(c *testing.T) {
<ide> testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, swapMemorySupport, NotPpc64le)
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> defer close(errChan)
<ide> out, exitCode, _ := dockerCmdWithError("run", "--name", "oomFalse", "-m", "10MB", "busybox", "sh", "-c", "x=a; while true; do x=$x$x$x$x; done")
<ide> func (s *DockerSuite) TestEventsOOMDisableFalse(c *testing.T) {
<ide> func (s *DockerSuite) TestEventsOOMDisableTrue(c *testing.T) {
<ide> testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, NotArm, swapMemorySupport, NotPpc64le)
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> observer, err := newEventObserver(c)
<ide> assert.NilError(c, err)
<ide> err = observer.Start()
<ide><path>integration-cli/docker_cli_exec_test.go
<ide> func (s *DockerSuite) TestExecInteractive(c *testing.T) {
<ide> assert.Equal(c, line, "test")
<ide> err = stdin.Close()
<ide> assert.NilError(c, err)
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> errChan <- execCmd.Wait()
<ide> close(errChan)
<ide> func (s *DockerSuite) TestExecTTYWithoutStdin(c *testing.T) {
<ide> id := strings.TrimSpace(out)
<ide> assert.NilError(c, waitRun(id))
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> defer close(errChan)
<ide>
<ide> func (s *DockerSuite) TestExecStopNotHanging(c *testing.T) {
<ide> out string
<ide> err error
<ide> }
<del> ch := make(chan dstop)
<add> ch := make(chan dstop, 1)
<ide> go func() {
<ide> result := icmd.RunCommand(dockerBinary, "stop", "testing")
<ide> ch <- dstop{result.Combined(), result.Error}
<ide> func (s *DockerSuite) TestExecCgroup(c *testing.T) {
<ide> var wg sync.WaitGroup
<ide> var mu sync.Mutex
<ide> var execCgroups []sort.StringSlice
<del> errChan := make(chan error)
<add> errChan := make(chan error, 5)
<ide> // exec a few times concurrently to get consistent failure
<ide> for i := 0; i < 5; i++ {
<ide> wg.Add(1)
<ide> go func() {
<add> defer wg.Done()
<ide> out, _, err := dockerCmdWithError("exec", "testing", "cat", "/proc/self/cgroup")
<ide> if err != nil {
<ide> errChan <- err
<ide> func (s *DockerSuite) TestExecCgroup(c *testing.T) {
<ide> mu.Lock()
<ide> execCgroups = append(execCgroups, cg)
<ide> mu.Unlock()
<del> wg.Done()
<ide> }()
<ide> }
<ide> wg.Wait()
<ide><path>integration-cli/docker_cli_exec_unix_test.go
<ide> func (s *DockerSuite) TestExecInteractiveStdinClose(c *testing.T) {
<ide>
<ide> b := bytes.NewBuffer(nil)
<ide>
<del> ch := make(chan error)
<add> ch := make(chan error, 1)
<ide> go func() { ch <- cmd.Wait() }()
<ide>
<ide> select {
<ide> func (s *DockerSuite) TestExecTTY(c *testing.T) {
<ide> _, err = p.Write([]byte("cat /foo && exit\n"))
<ide> assert.NilError(c, err)
<ide>
<del> chErr := make(chan error)
<add> chErr := make(chan error, 1)
<ide> go func() {
<ide> chErr <- cmd.Wait()
<ide> }()
<ide><path>integration-cli/docker_cli_external_volume_driver_test.go
<ide> func (s *DockerExternalVolumeSuite) TestExternalVolumeDriverLookupNotBlocked(c *
<ide> defer os.RemoveAll(specPath)
<ide>
<ide> chCmd1 := make(chan struct{})
<del> chCmd2 := make(chan error)
<add> chCmd2 := make(chan error, 1)
<ide> cmd1 := exec.Command(dockerBinary, "volume", "create", "-d", "down-driver")
<ide> cmd2 := exec.Command(dockerBinary, "volume", "create")
<ide>
<ide> func (s *DockerExternalVolumeSuite) TestExternalVolumeDriverRetryNotImmediatelyE
<ide> s.d.StartWithBusybox(c)
<ide> driverName := "test-external-volume-driver-retry"
<ide>
<del> errchan := make(chan error)
<add> errchan := make(chan error, 1)
<ide> started := make(chan struct{})
<ide> go func() {
<ide> close(started)
<ide><path>integration-cli/docker_cli_logs_test.go
<ide> func (s *DockerSuite) TestLogsFollowStopped(c *testing.T) {
<ide> logsCmd := exec.Command(dockerBinary, "logs", "-f", id)
<ide> assert.NilError(c, logsCmd.Start())
<ide>
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> errChan <- logsCmd.Wait()
<ide> close(errChan)
<ide><path>integration-cli/docker_cli_pull_local_test.go
<ide> func testConcurrentPullWholeRepo(c *testing.T) {
<ide> dockerCmd(c, args...)
<ide>
<ide> // Run multiple re-pulls concurrently
<del> results := make(chan error)
<ide> numPulls := 3
<add> results := make(chan error, numPulls)
<ide>
<ide> for i := 0; i != numPulls; i++ {
<ide> go func() {
<ide> func testConcurrentFailingPull(c *testing.T) {
<ide> repoName := fmt.Sprintf("%v/dockercli/busybox", privateRegistryURL)
<ide>
<ide> // Run multiple pulls concurrently
<del> results := make(chan error)
<ide> numPulls := 3
<add> results := make(chan error, numPulls)
<ide>
<ide> for i := 0; i != numPulls; i++ {
<ide> go func() {
<ide> func testConcurrentPullMultipleTags(c *testing.T) {
<ide> dockerCmd(c, args...)
<ide>
<ide> // Re-pull individual tags, in parallel
<del> results := make(chan error)
<add> results := make(chan error, len(repos))
<ide>
<ide> for _, repo := range repos {
<ide> go func(repo string) {
<ide><path>integration-cli/docker_cli_push_test.go
<ide> func testConcurrentPush(c *testing.T) {
<ide> }
<ide>
<ide> // Push tags, in parallel
<del> results := make(chan error)
<add> results := make(chan error, len(repos))
<ide>
<ide> for _, repo := range repos {
<ide> go func(repo string) {
<ide><path>integration-cli/docker_cli_run_test.go
<ide> func (s *DockerSuite) TestRunExitOnStdinClose(c *testing.T) {
<ide> if err := stdin.Close(); err != nil {
<ide> c.Fatal(err)
<ide> }
<del> finish := make(chan error)
<add> finish := make(chan error, 1)
<ide> go func() {
<ide> finish <- runCmd.Wait()
<ide> close(finish)
<ide> func (s *DockerSuite) TestRunPortFromDockerRangeInUse(c *testing.T) {
<ide> }
<ide>
<ide> func (s *DockerSuite) TestRunTTYWithPipe(c *testing.T) {
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> defer close(errChan)
<ide>
<ide> func (s *DockerSuite) TestRunPIDHostWithChildIsKillable(c *testing.T) {
<ide>
<ide> assert.Assert(c, waitRun(name) == nil)
<ide>
<del> errchan := make(chan error)
<add> errchan := make(chan error, 1)
<ide> go func() {
<ide> if out, _, err := dockerCmdWithError("kill", name); err != nil {
<ide> errchan <- fmt.Errorf("%v:\n%s", err, out)
<ide> func (s *DockerSuite) TestRunStdinBlockedAfterContainerExit(c *testing.T) {
<ide> cmd.Stderr = stdout
<ide> assert.Assert(c, cmd.Start() == nil)
<ide>
<del> waitChan := make(chan error)
<add> waitChan := make(chan error, 1)
<ide> go func() {
<ide> waitChan <- cmd.Wait()
<ide> }()
<ide><path>integration-cli/docker_cli_run_unix_test.go
<ide> func (s *DockerSuite) TestRunRedirectStdout(c *testing.T) {
<ide> cmd.Stdout = tty
<ide> cmd.Stderr = tty
<ide> assert.NilError(c, cmd.Start())
<del> ch := make(chan error)
<add> ch := make(chan error, 1)
<ide> go func() {
<ide> ch <- cmd.Wait()
<ide> close(ch)
<ide> func (s *DockerSuite) TestRunAttachDetach(c *testing.T) {
<ide> _, err = cpty.Write([]byte{17})
<ide> assert.NilError(c, err)
<ide>
<del> ch := make(chan struct{})
<add> ch := make(chan struct{}, 1)
<ide> go func() {
<ide> cmd.Wait()
<ide> ch <- struct{}{}
<ide> func (s *DockerSuite) TestRunAttachDetachFromFlag(c *testing.T) {
<ide> c.Fatal(err)
<ide> }
<ide>
<del> ch := make(chan struct{})
<add> ch := make(chan struct{}, 1)
<ide> go func() {
<ide> cmd.Wait()
<ide> ch <- struct{}{}
<ide> func (s *DockerSuite) TestRunAttachDetachFromConfig(c *testing.T) {
<ide> c.Fatal(err)
<ide> }
<ide>
<del> ch := make(chan struct{})
<add> ch := make(chan struct{}, 1)
<ide> go func() {
<ide> cmd.Wait()
<ide> ch <- struct{}{}
<ide> func (s *DockerSuite) TestRunAttachDetachKeysOverrideConfig(c *testing.T) {
<ide> c.Fatal(err)
<ide> }
<ide>
<del> ch := make(chan struct{})
<add> ch := make(chan struct{}, 1)
<ide> go func() {
<ide> cmd.Wait()
<ide> ch <- struct{}{}
<ide> func (s *DockerSuite) TestRunWithInvalidPathforBlkioDeviceWriteIOps(c *testing.T
<ide>
<ide> func (s *DockerSuite) TestRunOOMExitCode(c *testing.T) {
<ide> testRequires(c, memoryLimitSupport, swapMemorySupport, NotPpc64le)
<del> errChan := make(chan error)
<add> errChan := make(chan error, 1)
<ide> go func() {
<ide> defer close(errChan)
<ide> // memory limit lower than 8MB will raise an error of "device or resource busy" from docker-runc.
<ide><path>integration-cli/docker_cli_service_logs_test.go
<ide> func (s *DockerSwarmSuite) TestServiceLogsFollow(c *testing.T) {
<ide> // Make sure pipe is written to
<ide> ch := make(chan *logMessage)
<ide> done := make(chan struct{})
<add> stop := make(chan struct{})
<add> defer close(stop)
<ide> go func() {
<ide> reader := bufio.NewReader(r)
<ide> for {
<ide> msg := &logMessage{}
<ide> msg.data, _, msg.err = reader.ReadLine()
<ide> select {
<ide> case ch <- msg:
<add> case <-stop:
<add> return
<ide> case <-done:
<ide> return
<ide> }
<ide><path>integration-cli/docker_cli_start_test.go
<ide> func (s *DockerSuite) TestStartAttachReturnsOnError(c *testing.T) {
<ide> // err shouldn't be nil because container test2 try to link to stopped container
<ide> assert.Assert(c, err != nil, "out: %s", out)
<ide>
<del> ch := make(chan error)
<add> ch := make(chan error, 1)
<ide> go func() {
<ide> // Attempt to start attached to the container that won't start
<ide> // This should return an error immediately since the container can't be started
<ide><path>integration-cli/docker_cli_stats_test.go
<ide> func (s *DockerSuite) TestStatsNoStream(c *testing.T) {
<ide> err error
<ide> }
<ide>
<del> ch := make(chan output)
<add> ch := make(chan output, 1)
<ide> go func() {
<ide> out, err := statsCmd.Output()
<ide> ch <- output{out, err}
<ide><path>integration/container/exec_test.go
<ide> func TestExecWithCloseStdin(t *testing.T) {
<ide> resCh = make(chan struct {
<ide> content string
<ide> err error
<del> })
<add> }, 1)
<ide> )
<ide>
<ide> go func() {
<ide><path>integration/plugin/logging/logging_linux_test.go
<ide> func TestContinueAfterPluginCrash(t *testing.T) {
<ide> attach, err := client.ContainerAttach(context.Background(), id, types.ContainerAttachOptions{Stream: true, Stdout: true})
<ide> assert.NilError(t, err)
<ide>
<del> chErr := make(chan error)
<add> chErr := make(chan error, 1)
<ide> go func() {
<ide> defer close(chErr)
<ide> rdr := bufio.NewReader(attach.Reader)
<ide><path>testutil/daemon/daemon.go
<ide> func (d *Daemon) ReloadConfig() error {
<ide> return errors.New("daemon is not running")
<ide> }
<ide>
<del> errCh := make(chan error)
<add> errCh := make(chan error, 1)
<ide> started := make(chan struct{})
<ide> go func() {
<ide> _, body, err := request.Get("/events", request.Host(d.Sock()))
<ide> close(started)
<ide> if err != nil {
<ide> errCh <- err
<add> return
<ide> }
<ide> defer body.Close()
<ide> dec := json.NewDecoder(body) | 22 |
Ruby | Ruby | fix error in formula#specified_path with aliases | 16afcff557af2ffaea7f76ea70574f831f7f9ae2 | <ide><path>Library/Homebrew/formula.rb
<ide> def full_installed_alias_name
<ide>
<ide> # The path that was specified to find this formula.
<ide> def specified_path
<del> default_specified_path = alias_path || path
<add> default_specified_path = Pathname(alias_path) if alias_path.present?
<add> default_specified_path ||= path
<ide>
<ide> return default_specified_path if default_specified_path.presence&.exist?
<ide> return local_bottle_path if local_bottle_path.presence&.exist?
<ide><path>Library/Homebrew/test/formula_spec.rb
<ide> let(:path) { Formulary.core_path(name) }
<ide> let(:spec) { :stable }
<ide> let(:alias_name) { "baz@1" }
<del> let(:alias_path) { CoreTap.instance.alias_dir/alias_name }
<add> let(:alias_path) { (CoreTap.instance.alias_dir/alias_name).to_s }
<ide> let(:f) { klass.new(name, path, spec) }
<ide> let(:f_alias) { klass.new(name, path, spec, alias_path: alias_path) }
<ide>
<ide> expect(f.alias_path).to be nil
<ide> expect(f.alias_name).to be nil
<ide> expect(f.full_alias_name).to be nil
<add> expect(f.specified_path).to eq(path)
<ide> expect { klass.new }.to raise_error(ArgumentError)
<ide> end
<ide>
<ide> expect(f_alias.alias_path).to eq(alias_path)
<ide> expect(f_alias.alias_name).to eq(alias_name)
<ide> expect(f_alias.specified_name).to eq(alias_name)
<add> expect(f_alias.specified_path).to eq(Pathname(alias_path))
<ide> expect(f_alias.full_alias_name).to eq(alias_name)
<ide> expect(f_alias.full_specified_name).to eq(alias_name)
<ide> expect { klass.new }.to raise_error(ArgumentError)
<ide> expect(f.alias_path).to be nil
<ide> expect(f.alias_name).to be nil
<ide> expect(f.full_alias_name).to be nil
<add> expect(f.specified_path).to eq(path)
<ide> expect { klass.new }.to raise_error(ArgumentError)
<ide> end
<ide>
<ide> expect(f_alias.alias_path).to eq(alias_path)
<ide> expect(f_alias.alias_name).to eq(alias_name)
<ide> expect(f_alias.specified_name).to eq(alias_name)
<add> expect(f_alias.specified_path).to eq(Pathname(alias_path))
<ide> expect(f_alias.full_alias_name).to eq(full_alias_name)
<ide> expect(f_alias.full_specified_name).to eq(full_alias_name)
<ide> expect { klass.new }.to raise_error(ArgumentError) | 2 |
Ruby | Ruby | fix ambigious error message of select query method | 0f5325a625e757d2e4374ce41cabad63f087c0b5 | <ide><path>activerecord/lib/active_record/relation/query_methods.rb
<ide> def select(*fields)
<ide> return super()
<ide> end
<ide>
<del> raise ArgumentError, "Call this with at least one field" if fields.empty?
<add> raise ArgumentError, "Call `select' with at least one field" if fields.empty?
<ide> spawn._select!(*fields)
<ide> end
<ide> | 1 |
Go | Go | stream json & decode | f665be55fe832086202e54449402c1513cf4f195 | <ide><path>volumes/volume.go
<ide> func (v *Volume) FromDisk() error {
<ide> return err
<ide> }
<ide>
<del> data, err := ioutil.ReadFile(pth)
<add> jsonSource, err := os.Open(pth)
<ide> if err != nil {
<ide> return err
<ide> }
<add> defer jsonSource.Close()
<ide>
<del> return json.Unmarshal(data, v)
<add> dec := json.NewDecoder(jsonSource)
<add>
<add> return dec.Decode(v)
<ide> }
<ide>
<ide> func (v *Volume) jsonPath() (string, error) { | 1 |
Javascript | Javascript | ensure readfile[sync] reads from the beginning | 4444e731f218edf265a0b160bf1d561df2d5e5b3 | <ide><path>lib/fs.js
<ide> ReadFileContext.prototype.read = function() {
<ide> req.oncomplete = readFileAfterRead;
<ide> req.context = this;
<ide>
<del> binding.read(this.fd, buffer, offset, length, -1, req);
<add> binding.read(this.fd, buffer, offset, length, this.pos, req);
<ide> };
<ide>
<ide> ReadFileContext.prototype.close = function(err) {
<ide> function tryCreateBuffer(size, fd, isUserFd) {
<ide> return buffer;
<ide> }
<ide>
<del>function tryReadSync(fd, isUserFd, buffer, pos, len) {
<add>function tryReadSync(fd, isUserFd, buffer, pos, len, offset) {
<ide> var threw = true;
<ide> var bytesRead;
<ide> try {
<del> bytesRead = fs.readSync(fd, buffer, pos, len);
<add> bytesRead = fs.readSync(fd, buffer, pos, len, offset);
<ide> threw = false;
<ide> } finally {
<ide> if (threw && !isUserFd) fs.closeSync(fd);
<ide> fs.readFileSync = function(path, options) {
<ide>
<ide> if (size !== 0) {
<ide> do {
<del> bytesRead = tryReadSync(fd, isUserFd, buffer, pos, size - pos);
<add> bytesRead = tryReadSync(fd, isUserFd, buffer, pos, size - pos, pos);
<ide> pos += bytesRead;
<ide> } while (bytesRead !== 0 && pos < size);
<ide> } else {
<ide> do {
<ide> // the kernel lies about many files.
<ide> // Go ahead and try to read some bytes.
<ide> buffer = Buffer.allocUnsafe(8192);
<del> bytesRead = tryReadSync(fd, isUserFd, buffer, 0, 8192);
<add> bytesRead = tryReadSync(fd, isUserFd, buffer, 0, 8192, pos);
<ide> if (bytesRead !== 0) {
<ide> buffers.push(buffer.slice(0, bytesRead));
<ide> }
<ide><path>test/parallel/test-fs-readfile-fd-offset.js
<add>'use strict';
<add>const common = require('../common');
<add>const assert = require('assert');
<add>const fs = require('fs');
<add>const path = require('path');
<add>
<add>const filename = path.join(common.tmpDir, 'readfile.txt');
<add>const dataExpected = 'a'.repeat(100);
<add>fs.writeFileSync(filename, dataExpected);
<add>const fileLength = dataExpected.length;
<add>
<add>['r', 'a+'].forEach((mode) => {
<add> const fd = fs.openSync(filename, mode);
<add> assert.strictEqual(fs.readFileSync(fd).length, fileLength);
<add>
<add> // Reading again should result in the same length.
<add> assert.strictEqual(fs.readFileSync(fd).length, fileLength);
<add>
<add> fs.readFile(fd, common.mustCall((err, buf) => {
<add> assert.ifError(err);
<add> assert.strictEqual(buf.length, fileLength);
<add> }));
<add>}); | 2 |
Javascript | Javascript | replace `cliengine` with `eslint` | 3821662eb7df7ad16f2c727310d90797090f408f | <ide><path>lint-staged.config.js
<ide> const escape = require('shell-quote').quote
<del>const { CLIEngine } = require('eslint')
<add>const { ESLint } = require('eslint')
<ide>
<del>const cli = new CLIEngine({})
<add>const eslint = new ESLint()
<ide> const isWin = process.platform === 'win32'
<ide>
<ide> module.exports = {
<ide> module.exports = {
<ide> return [
<ide> `prettier --with-node-modules --ignore-path .prettierignore_staged --write ${escapedFileNames}`,
<ide> `eslint --no-ignore --max-warnings=0 --fix ${filenames
<del> .filter((file) => !cli.isPathIgnored(file))
<add> .filter((file) => !eslint.isPathIgnored(file))
<ide> .map((f) => `"${f}"`)
<ide> .join(' ')}`,
<ide> ] | 1 |
PHP | PHP | avoid notice in mime() | e1365336e20f3581a59cc9b311c686534af24fd1 | <ide><path>lib/Cake/Utility/File.php
<ide> public function mime() {
<ide> }
<ide> if (function_exists('finfo_open')) {
<ide> $finfo = finfo_open(FILEINFO_MIME);
<del> list($type, $charset) = explode(';', finfo_file($finfo, $this->pwd()));
<add> $finfo = finfo_file($finfo, $this->pwd());
<add> if (!$finfo) {
<add> return false;
<add> }
<add> list($type, $charset) = explode(';', $finfo);
<ide> return $type;
<del> } elseif (function_exists('mime_content_type')) {
<add> }
<add> if (function_exists('mime_content_type')) {
<ide> return mime_content_type($this->pwd());
<ide> }
<ide> return false; | 1 |
Text | Text | add a tip about conditionally applying middleware | 0c0434d4b5df1d64033c4eb622e3e12fd9515d55 | <ide><path>docs/api/applyMiddleware.md
<ide> store.dispatch({
<ide>
<ide> * If you use other store enhancers in addition to `applyMiddleware`, make sure to put `applyMiddleware` before them in the composition chain because the middleware is potentially asynchronous. For example, it should go before [redux-devtools](https://github.com/gaearon/redux-devtools) because otherwise the DevTools won’t see the raw actions emitted by the Promise middleware and such.
<ide>
<add>* If you want to conditionally apply a middleware, make sure to only import it when it's needed:
<add>
<add> ```js
<add> let middleware = [a, b];
<add> if (process.env.NODE_ENV !== 'production') {
<add> let c = require('some-debug-middleware');
<add> let d = require('another-debug-middleware');
<add> middleware = [...middleware, c, d];
<add> }
<add> const createStoreWithMiddleware = applyMiddleware(...middleware)(createStore);
<add> ```
<add>
<add> This makes it easier for bundling tools to cut out unneeded modules and reduces the size of your builds.
<add>
<ide> * Ever wondered what `applyMiddleware` itself is? It ought to be an extension mechanism more powerful than the middleware itself. Indeed, `applyMiddleware` is an example of the most poweful Redux extension mechanism called [store enhancers](../Glossary.md#store-enhancer). It is highly unlikely you’ll ever want to write a store enhancer yourself. Another example of a store enhancer is [redux-devtools](https://github.com/gaearon/redux-devtools). Middleware is less powerful than a store enhancer, but it is easier to write.
<ide>
<ide> * Middleware sounds much more complicated than it really is. The only way to really understand middleware is to see how the existing middleware works, and try to write your own. The function nesting can be intimidating, but most of the middleware you’ll find are, in fact, 10-liners, and the nesting and composability is what makes the middleware system powerful. | 1 |
Text | Text | update changelog for 1.12.0 | 0a7c0fce26f11f14f66493758d568c00815569f4 | <ide><path>CHANGELOG.md
<ide> # Ember Changelog
<ide>
<del>### Canary
<del>
<add>### 1.12.0 (May 13, 2015)
<add>
<add>- [#10874](https://github.com/emberjs/ember.js/pull/10874) Include all files in jspm package.
<add>- [#10876](https://github.com/emberjs/ember.js/pull/10876) [BUGFIX] Make the `{{component}}` helper deal with dynamically set falsey values.
<add>- [#10883](https://github.com/emberjs/ember.js/pull/10883) [BUGFIX] Fix `View.prototype.replaceIn` functionality.
<add>- [#10920](https://github.com/emberjs/ember.js/pull/10920) [BUGFIX] Fix `Component.prototype.layout` so that it can now be set and recompute properly.
<add>- [#10968](https://github.com/emberjs/ember.js/pull/10968) [BUGFIX] Fix assertion that incorrectly fired on legacy settable computed properties.
<add>- [CVE-2015-1866] Ember.js XSS Vulnerability With {{view "select"}} Options
<ide> - [#3852](https://github.com/emberjs/ember.js/pull/3852) [BREAKING BUGFIX] Do not assume null Ember.get targets always refer to a global
<add>- [#10200](https://github.com/emberjs/ember.js/pull/10200) Add 'autocomplete' to Ember.Select view
<add>- [#10464](https://github.com/emberjs/ember.js/pull/10464) Ensure templates were compiled with the current compiler version.
<add>- [#10494](https://github.com/emberjs/ember.js/pull/10494) Make it easier to write lazy streams.
<add>- [#10483](https://github.com/emberjs/ember.js/pull/10483) [REFACTOR] Lazily reify router’s location.
<add>- [#10673](https://github.com/emberjs/ember.js/pull/10673) Remove EachProxy and EachArray from exports.
<add>- [#10572](https://github.com/emberjs/ember.js/pull/10572) Fix UnrecognizedURLError not being an Error.
<add>- [#10585](https://github.com/emberjs/ember.js/pull/10585) Deprecate direct use of `Ember.CoreView`.
<add>- [#10599](https://github.com/emberjs/ember.js/pull/10599) Don’t share view registry across containers.
<add>- [#10667](https://github.com/emberjs/ember.js/pull/10667) Deprecate `Ember.tryFinally` and `Ember.tryCatchFinally`.
<add>- [#10668](https://github.com/emberjs/ember.js/pull/10668) Deprecate `Ember.required`.
<add>- [#10678](https://github.com/emberjs/ember.js/pull/10678) Fix typos in deprecations of unescaped style attribute
<add>- [#10679](https://github.com/emberjs/ember.js/pull/10679) Ensure docs are not detected for deprecation mixins.
<add>- [#10672](https://github.com/emberjs/ember.js/pull/10672) Do not export `Ember.Descriptor`.
<add>- [#10695](https://github.com/emberjs/ember.js/pull/10695) Require that `base` `href` and `embed` `src` are escaped.
<add>- [#10690](https://github.com/emberjs/ember.js/pull/10690) [BUGFIX canary] Prevent unknown input types from erroring.
<add>- [#10731](https://github.com/emberjs/ember.js/pull/10731) [FEATURE] Enable `new-computed-syntax` feature. See [emberjs/rfcs#11](https://github.com/emberjs/rfcs/pull/11) for more details.
<add>- [#10731](https://github.com/emberjs/ember.js/pull/10731) [FEATURE] Enable `ember-application-instance-initializers` feature.
<add>- [#10731](https://github.com/emberjs/ember.js/pull/10731) [FEATURE] Enable `ember-application-initializer-context` feature.
<ide>
<ide> ### 1.11.0 (March 28, 2015)
<ide> | 1 |
Text | Text | add note about forwarding stream options | 5133e783ba0fbd93bf3a65e18c450b61c18f55a0 | <ide><path>doc/api/stream.md
<ide> parent class constructor:
<ide> const { Writable } = require('stream');
<ide>
<ide> class MyWritable extends Writable {
<del> constructor(options) {
<del> super(options);
<add> constructor({ highWaterMark, ...options }) {
<add> super({
<add> highWaterMark,
<add> autoDestroy: true,
<add> emitClose: true
<add> });
<ide> // ...
<ide> }
<ide> }
<ide> ```
<ide>
<add>When extending streams, it is important to keep in mind what options the user
<add>can and should provide before forwarding these to the base constructor. For
<add>example, if the implementation makes assumptions in regard to e.g. the
<add>`autoDestroy` and `emitClose` options, it becomes important to not allow the
<add>user to override these. It is therefore recommended to be explicit about what
<add>options are forwarded instead of implicitly forwarding all options.
<add>
<ide> The new stream class must then implement one or more specific methods, depending
<ide> on the type of stream being created, as detailed in the chart below:
<ide> | 1 |
PHP | PHP | increase code coverage in connectionmanager | b2317272fe3974f38e7e7a642cf97bd915288074 | <ide><path>tests/TestCase/Datasource/ConnectionManagerTest.php
<ide> public function testAliasError()
<ide> $this->assertNotContains('test_kaboom', ConnectionManager::configured());
<ide> ConnectionManager::alias('test_kaboom', 'other_name');
<ide> }
<add>
<add> /**
<add> * Test parseDsn method.
<add> *
<add> * @return void
<add> */
<add> public function testParseDsn()
<add> {
<add> $result = ConnectionManager::parseDsn('mysql://root:secret@localhost:3306/database?log=1');
<add> $expected = [
<add> 'scheme' => 'mysql',
<add> 'className' => 'Cake\Database\Connection',
<add> 'driver' => 'Cake\Database\Driver\Mysql',
<add> 'host' => 'localhost',
<add> 'username' => 'root',
<add> 'password' => 'secret',
<add> 'port' => 3306,
<add> 'database' => 'database',
<add> 'log' => '1'
<add> ];
<add> $this->assertEquals($expected, $result);
<add> }
<ide> } | 1 |
Python | Python | fix some typos | e1ec661d4e368ceabd50e7ef3714c85dbe139c02 | <ide><path>ciphers/shuffled_shift_cipher.py
<ide> class ShuffledShiftCipher:
<ide> This algorithm uses the Caesar Cipher algorithm but removes the option to
<ide> use brute force to decrypt the message.
<ide>
<del> The passcode is a a random password from the selection buffer of
<add> The passcode is a random password from the selection buffer of
<ide> 1. uppercase letters of the English alphabet
<ide> 2. lowercase letters of the English alphabet
<ide> 3. digits from 0 to 9
<ide><path>data_structures/stacks/dijkstras_two_stack_algorithm.py
<ide>
<ide> THESE ARE THE ALGORITHM'S RULES:
<ide> RULE 1: Scan the expression from left to right. When an operand is encountered,
<del> push it onto the the operand stack.
<add> push it onto the operand stack.
<ide>
<ide> RULE 2: When an operator is encountered in the expression,
<ide> push it onto the operator stack.
<ide><path>divide_and_conquer/inversions.py
<ide>
<ide> def count_inversions_bf(arr):
<ide> """
<del> Counts the number of inversions using a a naive brute-force algorithm
<add> Counts the number of inversions using a naive brute-force algorithm
<ide> Parameters
<ide> ----------
<ide> arr: arr: array-like, the list containing the items for which the number
<ide><path>maths/volume.py
<ide> def vol_spheres_intersect(
<ide> Calculate the volume of the intersection of two spheres.
<ide>
<ide> The intersection is composed by two spherical caps and therefore its volume is the
<del> sum of the volumes of the spherical caps. First it calculates the heights (h1, h2)
<del> of the the spherical caps, then the two volumes and it returns the sum.
<add> sum of the volumes of the spherical caps. First, it calculates the heights (h1, h2)
<add> of the spherical caps, then the two volumes and it returns the sum.
<ide> The height formulas are
<ide> h1 = (radius_1 - radius_2 + centers_distance)
<ide> * (radius_1 + radius_2 - centers_distance)
<ide><path>strings/manacher.py
<ide> def palindromic_string(input_string: str) -> str:
<ide> now for a5 we will calculate the length of palindromic substring with center as a5 but
<ide> can we use previously calculated information in some way?
<ide> Yes, look the above string we know that a5 is inside the palindrome with center a3 and
<del>previously we have have calculated that
<add>previously we have calculated that
<ide> a0==a2 (palindrome of center a1)
<ide> a2==a4 (palindrome of center a3)
<ide> a0==a6 (palindrome of center a3) | 5 |
Javascript | Javascript | initialize transform lazily | 855caa82aaef5c6e6dd244b5d9df314994cb23eb | <ide><path>lib/crypto.js
<ide> exports.createCredentials = function(options, context) {
<ide> };
<ide>
<ide>
<add>function LazyTransform(options) {
<add> this._options = options;
<add>}
<add>util.inherits(LazyTransform, stream.Transform);
<add>
<add>['read', 'write', 'end'].forEach(function(action, i, actions) {
<add> LazyTransform.prototype[action] = function() {
<add> stream.Transform.call(this, this._options);
<add>
<add> actions.forEach(function(action) {
<add> this[action] = stream.Transform.prototype[action];
<add> }, this);
<add>
<add> return this[action].apply(this, arguments);
<add> };
<add>});
<add>
<add>
<ide> exports.createHash = exports.Hash = Hash;
<ide> function Hash(algorithm, options) {
<ide> if (!(this instanceof Hash))
<ide> return new Hash(algorithm);
<ide> this._binding = new binding.Hash(algorithm);
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Hash, stream.Transform);
<add>util.inherits(Hash, LazyTransform);
<ide>
<ide> Hash.prototype._transform = function(chunk, encoding, callback) {
<ide> this._binding.update(chunk, encoding);
<ide> function Hmac(hmac, key, options) {
<ide> return new Hmac(hmac, key);
<ide> this._binding = new binding.Hmac();
<ide> this._binding.init(hmac, toBuf(key));
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Hmac, stream.Transform);
<add>util.inherits(Hmac, LazyTransform);
<ide>
<ide> Hmac.prototype.update = Hash.prototype.update;
<ide> Hmac.prototype.digest = Hash.prototype.digest;
<ide> function Cipher(cipher, password, options) {
<ide> this._binding.init(cipher, toBuf(password));
<ide> this._decoder = null;
<ide>
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Cipher, stream.Transform);
<add>util.inherits(Cipher, LazyTransform);
<ide>
<ide> Cipher.prototype._transform = function(chunk, encoding, callback) {
<ide> this.push(this._binding.update(chunk, encoding));
<ide> function Cipheriv(cipher, key, iv, options) {
<ide> this._binding.initiv(cipher, toBuf(key), toBuf(iv));
<ide> this._decoder = null;
<ide>
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Cipheriv, stream.Transform);
<add>util.inherits(Cipheriv, LazyTransform);
<ide>
<ide> Cipheriv.prototype._transform = Cipher.prototype._transform;
<ide> Cipheriv.prototype._flush = Cipher.prototype._flush;
<ide> function Decipher(cipher, password, options) {
<ide> this._binding.init(cipher, toBuf(password));
<ide> this._decoder = null;
<ide>
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Decipher, stream.Transform);
<add>util.inherits(Decipher, LazyTransform);
<ide>
<ide> Decipher.prototype._transform = Cipher.prototype._transform;
<ide> Decipher.prototype._flush = Cipher.prototype._flush;
<ide> function Decipheriv(cipher, key, iv, options) {
<ide> this._binding.initiv(cipher, toBuf(key), toBuf(iv));
<ide> this._decoder = null;
<ide>
<del> stream.Transform.call(this, options);
<add> LazyTransform.call(this, options);
<ide> }
<ide>
<del>util.inherits(Decipheriv, stream.Transform);
<add>util.inherits(Decipheriv, LazyTransform);
<ide>
<ide> Decipheriv.prototype._transform = Cipher.prototype._transform;
<ide> Decipheriv.prototype._flush = Cipher.prototype._flush; | 1 |