| content_type (stringclasses, 8 values) | main_lang (stringclasses, 7 values) | message (stringlengths 1-50) | sha (stringlengths 40) | patch (stringlengths 52-962k) | file_count (int64, 1-300) |
|---|---|---|---|---|---|
Text
|
Text
|
translate tip-17 to korean
|
4df79154de13a6ff7b5f99628fbc9e5f5294d5dc
|
<ide><path>docs/tips/17-children-undefined-ko-KR.md
<add>---
<add>id: children-undefined-ko-KR
<add>title: Undefined this.props.children
<add>layout: tips
<add>permalink: children-undefined-ko-KR.html
<add>prev: references-to-components-ko-KR.html
<add>next: use-react-with-other-libraries-ko-KR.html
<add>---
<add>
<add>You cannot access this component's own children through `this.props.children`. `this.props.children` designates the children that are **passed in** by the owner:
<add>
<add>```js
<add>var App = React.createClass({
<add> componentDidMount: function() {
<add>    // This does not refer to the `span`s!
<add>    // It refers to the undefined children between `<App></App>` on the last line.
<add> console.log(this.props.children);
<add> },
<add>
<add> render: function() {
<add> return <div><span/><span/></div>;
<add> }
<add>});
<add>
<add>React.render(<App></App>, mountNode);
<add>```
<add>
<add>For more good examples, see the last example on the [front page](/).
| 1
|
Javascript
|
Javascript
|
change callback function to arrow function
|
d964d276dbd41dd383f9463ab238b4ef69920b36
|
<ide><path>test/parallel/test-stream-unshift-empty-chunk.js
<ide> let nChunks = 10;
<ide> const chunk = Buffer.alloc(10, 'x');
<ide>
<ide> r._read = function(n) {
<del> setImmediate(function() {
<add> setImmediate(() => {
<ide> r.push(--nChunks === 0 ? null : chunk);
<ide> });
<ide> };
<ide>
<ide> let readAll = false;
<ide> const seen = [];
<del>r.on('readable', function() {
<add>r.on('readable', () => {
<ide> let chunk;
<ide> while (chunk = r.read()) {
<ide> seen.push(chunk.toString());
<ide> const expect =
<ide> 'xxxxxxxxxx',
<ide> 'yyyyy' ];
<ide>
<del>r.on('end', function() {
<add>r.on('end', () => {
<ide> assert.deepStrictEqual(seen, expect);
<ide> console.log('ok');
<ide> });
| 1
|
Python
|
Python
|
fix syntax mistake
|
6c4f488e8912d83b6191ddd09cd076732b36ffad
|
<ide><path>spacy/language.py
<ide> def train(cls, path, gold_tuples, *configs):
<ide> self.end_training()
<ide>
<ide> def __init__(self, **overrides):
<del> if 'data_dir' in overrides and overrides.get(path) is True:
<add> if 'data_dir' in overrides and 'path' not in overrides:
<ide> raise ValueError("The argument 'data_dir' has been renamed to 'path'")
<ide> path = overrides.get('path')
<ide> if isinstance(path, basestring):
| 1
|
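The one-line fix above matters because the old condition read `overrides.get(path)` with a bare, not-yet-defined name `path`, so it raised `NameError` whenever `data_dir` was passed; the corrected code uses plain keyword-membership tests. A standalone sketch of the corrected guard (function name hypothetical):

```python
def resolve_path(**overrides):
    # 'data_dir' was renamed to 'path': reject calls that still pass
    # only the old keyword, but allow the new one (alone or alongside it).
    if 'data_dir' in overrides and 'path' not in overrides:
        raise ValueError("The argument 'data_dir' has been renamed to 'path'")
    return overrides.get('path')

resolve_path(path='/models/en')  # accepted
```
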
Text
|
Text
|
use repository instead of repo
|
b9866098a7f8588970bb9107eadd6deb853c1fa2
|
<ide><path>doc/guides/collaborator-guide.md
<ide> issues. If a user opens a security issue in the public repository:
<ide>
<ide> * Ask the user to submit a report through HackerOne as outlined in
<ide> [SECURITY.md][].
<del>* Move the issue to the private repo called
<add>* Move the issue to the private repository called
<ide> [premature-disclosures](https://github.com/nodejs/premature-disclosures).
<ide> * For any related pull requests, create an associated issue in the
<ide> `premature-disclosures` repository. Add a copy of the patch for the
<ide> Checkout proper target branch:
<ide> $ git checkout master
<ide> ```
<ide>
<del>Update the tree (assumes your repo is set up as detailed in
<add>Update the tree (assumes your repository is set up as detailed in
<ide> [CONTRIBUTING.md](./contributing/pull-requests.md#step-1-fork)):
<ide>
<ide> ```text
| 1
|
Mixed
|
Javascript
|
add once method to use promises with eventemitter
|
b1ef279d5726905d7941f4a58978b379daa3cdc4
|
<ide><path>doc/api/events.md
<ide> newListeners[0]();
<ide> emitter.emit('log');
<ide> ```
<ide>
<add>## events.once(emitter, name)
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>* `emitter` {EventEmitter}
<add>* `name` {string}
<add>* Returns: {Promise}
<add>
<add>Creates a `Promise` that is resolved when the `EventEmitter` emits the given
<add>event or that is rejected when the `EventEmitter` emits `'error'`.
<add>The `Promise` will resolve with an array of all the arguments emitted to the
<add>given event.
<add>
<add>```js
<add>const { once, EventEmitter } = require('events');
<add>
<add>async function run() {
<add> const ee = new EventEmitter();
<add>
<add> process.nextTick(() => {
<add> ee.emit('myevent', 42);
<add> });
<add>
<add> const [value] = await once(ee, 'myevent');
<add> console.log(value);
<add>
<add> const err = new Error('kaboom');
<add> process.nextTick(() => {
<add> ee.emit('error', err);
<add> });
<add>
<add> try {
<add> await once(ee, 'myevent');
<add> } catch (err) {
<add> console.log('error happened', err);
<add> }
<add>}
<add>
<add>run();
<add>```
<add>
<ide> [`--trace-warnings`]: cli.html#cli_trace_warnings
<ide> [`EventEmitter.defaultMaxListeners`]: #events_eventemitter_defaultmaxlisteners
<ide> [`domain`]: domain.html
<ide><path>lib/events.js
<ide> function EventEmitter() {
<ide> EventEmitter.init.call(this);
<ide> }
<ide> module.exports = EventEmitter;
<add>module.exports.once = once;
<ide>
<ide> // Backwards-compat with node 0.10.x
<ide> EventEmitter.EventEmitter = EventEmitter;
<ide> function unwrapListeners(arr) {
<ide> }
<ide> return ret;
<ide> }
<add>
<add>function once(emitter, name) {
<add> return new Promise((resolve, reject) => {
<add> const eventListener = (...args) => {
<add> if (errorListener !== undefined) {
<add> emitter.removeListener('error', errorListener);
<add> }
<add> resolve(args);
<add> };
<add> let errorListener;
<add>
<add> // Adding an error listener is not optional because
<add> // if an error is thrown on an event emitter we cannot
<add> // guarantee that the actual event we are waiting will
<add> // be fired. The result could be a silent way to create
<add> // memory or file descriptor leaks, which is something
<add> // we should avoid.
<add> if (name !== 'error') {
<add> errorListener = (err) => {
<add> emitter.removeListener(name, eventListener);
<add> reject(err);
<add> };
<add>
<add> emitter.once('error', errorListener);
<add> }
<add>
<add> emitter.once(name, eventListener);
<add> });
<add>}
<ide><path>test/parallel/test-events-once.js
<add>'use strict';
<add>
<add>const common = require('../common');
<add>const { once, EventEmitter } = require('events');
<add>const { strictEqual, deepStrictEqual } = require('assert');
<add>
<add>async function onceAnEvent() {
<add> const ee = new EventEmitter();
<add>
<add> process.nextTick(() => {
<add> ee.emit('myevent', 42);
<add> });
<add>
<add> const [value] = await once(ee, 'myevent');
<add> strictEqual(value, 42);
<add> strictEqual(ee.listenerCount('error'), 0);
<add> strictEqual(ee.listenerCount('myevent'), 0);
<add>}
<add>
<add>async function onceAnEventWithTwoArgs() {
<add> const ee = new EventEmitter();
<add>
<add> process.nextTick(() => {
<add> ee.emit('myevent', 42, 24);
<add> });
<add>
<add> const value = await once(ee, 'myevent');
<add> deepStrictEqual(value, [42, 24]);
<add>}
<add>
<add>async function catchesErrors() {
<add> const ee = new EventEmitter();
<add>
<add> const expected = new Error('kaboom');
<add> let err;
<add> process.nextTick(() => {
<add> ee.emit('error', expected);
<add> });
<add>
<add> try {
<add> await once(ee, 'myevent');
<add> } catch (_e) {
<add> err = _e;
<add> }
<add> strictEqual(err, expected);
<add> strictEqual(ee.listenerCount('error'), 0);
<add> strictEqual(ee.listenerCount('myevent'), 0);
<add>}
<add>
<add>async function stopListeningAfterCatchingError() {
<add> const ee = new EventEmitter();
<add>
<add> const expected = new Error('kaboom');
<add> let err;
<add> process.nextTick(() => {
<add> ee.emit('error', expected);
<add> ee.emit('myevent', 42, 24);
<add> });
<add>
<add> process.on('multipleResolves', common.mustNotCall());
<add>
<add> try {
<add> await once(ee, 'myevent');
<add> } catch (_e) {
<add> err = _e;
<add> }
<add> process.removeAllListeners('multipleResolves');
<add> strictEqual(err, expected);
<add> strictEqual(ee.listenerCount('error'), 0);
<add> strictEqual(ee.listenerCount('myevent'), 0);
<add>}
<add>
<add>async function onceError() {
<add> const ee = new EventEmitter();
<add>
<add> const expected = new Error('kaboom');
<add> process.nextTick(() => {
<add> ee.emit('error', expected);
<add> });
<add>
<add> const [err] = await once(ee, 'error');
<add> strictEqual(err, expected);
<add> strictEqual(ee.listenerCount('error'), 0);
<add> strictEqual(ee.listenerCount('myevent'), 0);
<add>}
<add>
<add>Promise.all([
<add> onceAnEvent(),
<add> onceAnEventWithTwoArgs(),
<add> catchesErrors(),
<add> stopListeningAfterCatchingError(),
<add> onceError()
<add>]);
| 3
|
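The core of `once` above is symmetric cleanup: whichever of the two listeners fires first must deregister the other before settling the promise, or the losing listener leaks. The same shape can be sketched in Python with an `asyncio.Future` (the `Emitter` class below is a minimal stand-in invented for this sketch, not Node's API):

```python
import asyncio

class Emitter:
    """Minimal fire-once event emitter, just enough for the sketch."""
    def __init__(self):
        self._handlers = {}

    def once(self, name, fn):
        self._handlers.setdefault(name, []).append(fn)

    def remove_listener(self, name, fn):
        if fn in self._handlers.get(name, []):
            self._handlers[name].remove(fn)

    def listener_count(self, name):
        return len(self._handlers.get(name, []))

    def emit(self, name, *args):
        # handlers are dropped as they run (once semantics)
        for fn in self._handlers.pop(name, []):
            fn(*args)

def once_future(emitter, name):
    """Settle a Future with the event's args, or fail on 'error';
    whichever listener fires first removes its counterpart."""
    fut = asyncio.get_running_loop().create_future()
    error_listener = None

    def event_listener(*args):
        if error_listener is not None:
            emitter.remove_listener('error', error_listener)
        fut.set_result(list(args))

    if name != 'error':
        def error_listener(err):
            emitter.remove_listener(name, event_listener)
            fut.set_exception(err)
        emitter.once('error', error_listener)

    emitter.once(name, event_listener)
    return fut

async def demo():
    ee = Emitter()
    fut = once_future(ee, 'myevent')
    ee.emit('myevent', 42)
    value, = await fut
    assert value == 42
    assert ee.listener_count('error') == 0  # competing listener removed
    return value

asyncio.run(demo())
```

As in the patch, no `'error'` listener is installed when the awaited event is itself `'error'`, since the two roles would collide.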
Javascript
|
Javascript
|
fix casing of shouldflushdiscreteupdates
|
7aa35ceae0d3a4d846498f128227ffa7c84900f4
|
<ide><path>packages/react-dom/src/events/DOMEventResponderSystem.js
<ide> export function processEventQueue(): void {
<ide> return;
<ide> }
<ide> if (discrete) {
<del> if (shouldflushDiscreteUpdates(currentTimeStamp)) {
<add> if (shouldFlushDiscreteUpdates(currentTimeStamp)) {
<ide> flushDiscreteUpdates();
<ide> }
<ide> discreteUpdates(() => {
<ide> export function generateListeningKey(
<ide>
<ide> let lastDiscreteEventTimeStamp = 0;
<ide>
<del>export function shouldflushDiscreteUpdates(timeStamp: number): boolean {
<add>export function shouldFlushDiscreteUpdates(timeStamp: number): boolean {
<ide> // event.timeStamp isn't overly reliable due to inconsistencies in
<ide> // how different browsers have historically provided the time stamp.
<ide> // Some browsers provide high-resolution time stamps for all events,
<ide><path>packages/react-dom/src/events/ReactDOMEventListener.js
<ide> import {
<ide> import {runExtractedPluginEventsInBatch} from 'events/EventPluginHub';
<ide> import {
<ide> dispatchEventForResponderEventSystem,
<del> shouldflushDiscreteUpdates,
<add> shouldFlushDiscreteUpdates,
<ide> } from '../events/DOMEventResponderSystem';
<ide> import {isFiberMounted} from 'react-reconciler/reflection';
<ide> import {HostRoot} from 'shared/ReactWorkTags';
<ide> function trapEventForPluginEventSystem(
<ide> }
<ide>
<ide> function dispatchDiscreteEvent(topLevelType, eventSystemFlags, nativeEvent) {
<del> if (!enableEventAPI || shouldflushDiscreteUpdates(nativeEvent.timeStamp)) {
<add> if (!enableEventAPI || shouldFlushDiscreteUpdates(nativeEvent.timeStamp)) {
<ide> flushDiscreteUpdates();
<ide> }
<ide> discreteUpdates(dispatchEvent, topLevelType, eventSystemFlags, nativeEvent);
| 2
|
Python
|
Python
|
fix logic for using threads
|
a601525837bd2357c7a428942add2907c623f91a
|
<ide><path>numpy/core/setup.py
<ide> def check_func(func_name):
<ide> if check_func('strtod'):
<ide> moredefs.append(('PyOS_ascii_strtod', 'strtod'))
<ide>
<del> if moredefs:
<del> target_f = open(target,'a')
<del> for d in moredefs:
<del> if isinstance(d,str):
<del> target_f.write('#define %s\n' % (d))
<del> else:
<del> target_f.write('#define %s %s\n' % (d[0],d[1]))
<del> if not nosmp: # default is to use WITH_THREAD
<del> target_f.write('#ifdef WITH_THREAD\n#define NPY_ALLOW_THREADS 1\n#else\n#define NPY_ALLOW_THREADS 0\n#endif\n')
<del> target_f.close()
<add> target_f = open(target,'a')
<add> for d in moredefs:
<add> if isinstance(d,str):
<add> target_f.write('#define %s\n' % (d))
<add> else:
<add> target_f.write('#define %s %s\n' % (d[0],d[1]))
<add> if not nosmp: # default is to use WITH_THREAD
<add> target_f.write('#ifdef WITH_THREAD\n#define NPY_ALLOW_THREADS 1\n#else\n#define NPY_ALLOW_THREADS 0\n#endif\n')
<add> target_f.close()
<ide> else:
<ide> mathlibs = []
<ide> target_f = open(target)
| 1
|
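The patch above changes no logic inside the block; it only dedents it out of `if moredefs:`, so the `NPY_ALLOW_THREADS` lines are written even when `moredefs` is empty. A sketch of the corrected control flow, writing to a list instead of the target file (function name hypothetical):

```python
def config_lines(moredefs, nosmp):
    # Mirrors the dedented block: the loop body is a no-op for an empty
    # `moredefs`, but the threads section is emitted unconditionally.
    lines = []
    for d in moredefs:
        if isinstance(d, str):
            lines.append('#define %s' % d)
        else:
            lines.append('#define %s %s' % (d[0], d[1]))
    if not nosmp:  # default is to use WITH_THREAD
        lines.append('#ifdef WITH_THREAD\n#define NPY_ALLOW_THREADS 1\n'
                     '#else\n#define NPY_ALLOW_THREADS 0\n#endif')
    return lines

config_lines([], nosmp=False)  # threads block is still written
```
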
Ruby
|
Ruby
|
reduce number of strings
|
ef9bbb87e8228940f768580a7d05d5feadec0d1e
|
<ide><path>activerecord/lib/active_record/schema_dumper.rb
<ide> def default_string(value)
<ide> when BigDecimal
<ide> value.to_s
<ide> when Date, DateTime, Time
<del> "'" + value.to_s(:db) + "'"
<add> "'#{value.to_s(:db)}'"
<ide> else
<ide> value.inspect
<ide> end
<ide><path>railties/lib/rails/generators/named_base.rb
<ide> def plural_file_name
<ide> end
<ide>
<ide> def route_url
<del> @route_url ||= class_path.collect{|dname| "/" + dname }.join('') + "/" + plural_file_name
<add> @route_url ||= class_path.collect {|dname| "/" + dname }.join + "/" + plural_file_name
<ide> end
<ide>
<ide> # Tries to retrieve the application name or simple return application.
| 2
|
Go
|
Go
|
add missing tests and docs for pkg/fileutils
|
09adf87f23b3de97f90612c9bc08413053b81a14
|
<ide><path>pkg/fileutils/fileutils.go
<ide> import (
<ide> "github.com/Sirupsen/logrus"
<ide> )
<ide>
<del>func Exclusion(pattern string) bool {
<add>// exclusion returns true if the specified pattern is an exclusion
<add>func exclusion(pattern string) bool {
<ide> return pattern[0] == '!'
<ide> }
<ide>
<del>func Empty(pattern string) bool {
<add>// empty returns true if the specified pattern is empty
<add>func empty(pattern string) bool {
<ide> return pattern == ""
<ide> }
<ide>
<ide> func CleanPatterns(patterns []string) ([]string, [][]string, bool, error) {
<ide> for _, pattern := range patterns {
<ide> // Eliminate leading and trailing whitespace.
<ide> pattern = strings.TrimSpace(pattern)
<del> if Empty(pattern) {
<add> if empty(pattern) {
<ide> continue
<ide> }
<del> if Exclusion(pattern) {
<add> if exclusion(pattern) {
<ide> if len(pattern) == 1 {
<ide> return nil, nil, false, errors.New("Illegal exclusion pattern: !")
<ide> }
<ide> exceptions = true
<ide> }
<ide> pattern = filepath.Clean(pattern)
<ide> cleanedPatterns = append(cleanedPatterns, pattern)
<del> if Exclusion(pattern) {
<add> if exclusion(pattern) {
<ide> pattern = pattern[1:]
<ide> }
<ide> patternDirs = append(patternDirs, strings.Split(pattern, "/"))
<ide> func OptimizedMatches(file string, patterns []string, patDirs [][]string) (bool,
<ide> for i, pattern := range patterns {
<ide> negative := false
<ide>
<del> if Exclusion(pattern) {
<add> if exclusion(pattern) {
<ide> negative = true
<ide> pattern = pattern[1:]
<ide> }
<ide> func OptimizedMatches(file string, patterns []string, patDirs [][]string) (bool,
<ide> return matched, nil
<ide> }
<ide>
<add>// CopyFile copies from src to dst until either EOF is reached
<add>// on src or an error occurs. It verifies that src exists and removes
<add>// dst if it exists.
<ide> func CopyFile(src, dst string) (int64, error) {
<ide> cleanSrc := filepath.Clean(src)
<ide> cleanDst := filepath.Clean(dst)
<ide> func CopyFile(src, dst string) (int64, error) {
<ide> return io.Copy(df, sf)
<ide> }
<ide>
<add>// GetTotalUsedFds returns the number of file descriptors in use by
<add>// reading them from the /proc filesystem.
<ide> func GetTotalUsedFds() int {
<ide> if fds, err := ioutil.ReadDir(fmt.Sprintf("/proc/%d/fd", os.Getpid())); err != nil {
<ide> logrus.Errorf("Error opening /proc/%d/fd: %s", os.Getpid(), err)
<ide><path>pkg/fileutils/fileutils_test.go
<ide> import (
<ide> "io/ioutil"
<ide> "os"
<ide> "path"
<add> "path/filepath"
<ide> "testing"
<ide> )
<ide>
<ide> func TestSingleExclamationError(t *testing.T) {
<ide>
<ide> // A string preceded with a ! should return true from Exclusion.
<ide> func TestExclusion(t *testing.T) {
<del> exclusion := Exclusion("!")
<add> exclusion := exclusion("!")
<ide> if !exclusion {
<ide> t.Errorf("failed to get true for a single !, got %v", exclusion)
<ide> }
<ide> func TestMatchesWithMalformedPatterns(t *testing.T) {
<ide>
<ide> // An empty string should return true from Empty.
<ide> func TestEmpty(t *testing.T) {
<del> empty := Empty("")
<add> empty := empty("")
<ide> if !empty {
<ide> t.Errorf("failed to get true for an empty string, got %v", empty)
<ide> }
<ide> func TestCleanPatternsFolderSplit(t *testing.T) {
<ide> t.Errorf("expected first element in dirs slice to be config, got %v", dirs[0][1])
<ide> }
<ide> }
<add>
<add>func TestCreateIfNotExistsDir(t *testing.T) {
<add> tempFolder, err := ioutil.TempDir("", "docker-fileutils-test")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tempFolder)
<add>
<add> folderToCreate := filepath.Join(tempFolder, "tocreate")
<add>
<add> if err := CreateIfNotExists(folderToCreate, true); err != nil {
<add> t.Fatal(err)
<add> }
<add> fileinfo, err := os.Stat(folderToCreate)
<add> if err != nil {
<add>		t.Fatalf("Should have created a folder, got %v", err)
<add> }
<add>
<add> if !fileinfo.IsDir() {
<add> t.Fatalf("Should have been a dir, seems it's not")
<add> }
<add>}
<add>
<add>func TestCreateIfNotExistsFile(t *testing.T) {
<add> tempFolder, err := ioutil.TempDir("", "docker-fileutils-test")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tempFolder)
<add>
<add> fileToCreate := filepath.Join(tempFolder, "file/to/create")
<add>
<add> if err := CreateIfNotExists(fileToCreate, false); err != nil {
<add> t.Fatal(err)
<add> }
<add> fileinfo, err := os.Stat(fileToCreate)
<add> if err != nil {
<add>		t.Fatalf("Should have created a file, got %v", err)
<add> }
<add>
<add> if fileinfo.IsDir() {
<add> t.Fatalf("Should have been a file, seems it's not")
<add> }
<add>}
| 2
|
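Unexporting `Exclusion` and `Empty` leaves the package's matching behavior unchanged; the core of `CleanPatterns` reduces to those two small predicates plus a loop. A Python rendering of that loop, sketched from the diff alone (`filepath.Clean` and the per-pattern directory split are omitted):

```python
def clean_patterns(patterns):
    # Trim each pattern, drop empties, validate and flag exclusions.
    cleaned, exceptions = [], False
    for pattern in (p.strip() for p in patterns):
        if pattern == '':            # empty()
            continue
        if pattern.startswith('!'):  # exclusion()
            if len(pattern) == 1:
                raise ValueError('Illegal exclusion pattern: !')
            exceptions = True
        cleaned.append(pattern)
    return cleaned, exceptions
```
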
Go
|
Go
|
reject multiple networks on container creation
|
cfa515fd9d1530bd84e98c6d6564e641dcb2d0fe
|
<ide><path>daemon/create.go
<ide> func (daemon *Daemon) ContainerCreate(params types.ContainerCreateConfig) (types
<ide> return types.ContainerCreateResponse{Warnings: warnings}, err
<ide> }
<ide>
<add> err = daemon.verifyNetworkingConfig(params.NetworkingConfig)
<add> if err != nil {
<add> return types.ContainerCreateResponse{}, err
<add> }
<add>
<ide> if params.HostConfig == nil {
<ide> params.HostConfig = &containertypes.HostConfig{}
<ide> }
<ide><path>daemon/daemon.go
<ide> import (
<ide> containertypes "github.com/docker/engine-api/types/container"
<ide> eventtypes "github.com/docker/engine-api/types/events"
<ide> "github.com/docker/engine-api/types/filters"
<add> networktypes "github.com/docker/engine-api/types/network"
<ide> registrytypes "github.com/docker/engine-api/types/registry"
<ide> "github.com/docker/engine-api/types/strslice"
<ide> // register graph drivers
<ide> func (daemon *Daemon) verifyContainerSettings(hostConfig *containertypes.HostCon
<ide> return verifyPlatformContainerSettings(daemon, hostConfig, config)
<ide> }
<ide>
<add>// Checks if the client set configurations for more than one network while creating a container
<add>func (daemon *Daemon) verifyNetworkingConfig(nwConfig *networktypes.NetworkingConfig) error {
<add> if nwConfig == nil || len(nwConfig.EndpointsConfig) <= 1 {
<add> return nil
<add> }
<add> l := make([]string, 0, len(nwConfig.EndpointsConfig))
<add> for k := range nwConfig.EndpointsConfig {
<add> l = append(l, k)
<add> }
<add> return derr.ErrorCodeMultipleNetworkConnect.WithArgs(fmt.Sprintf("%v", l))
<add>}
<add>
<ide> func configureVolumes(config *Config, rootUID, rootGID int) (*store.VolumeStore, error) {
<ide> volumesDriver, err := local.New(config.Root, rootUID, rootGID)
<ide> if err != nil {
<ide><path>errors/daemon.go
<ide> var (
<ide> Description: "Engine's predefined networks cannot be deleted",
<ide> HTTPStatusCode: http.StatusForbidden,
<ide> })
<add>
<add> // ErrorCodeMultipleNetworkConnect is generated when more than one network is passed
<add> // when creating a container
<add> ErrorCodeMultipleNetworkConnect = errcode.Register(errGroup, errcode.ErrorDescriptor{
<add> Value: "CANNOT_CONNECT_TO_MULTIPLE_NETWORKS",
<add> Message: "Container cannot be connected to %s",
<add>		Description:    "A container can only be connected to one network at a time",
<add> HTTPStatusCode: http.StatusBadRequest,
<add> })
<ide> )
<ide><path>integration-cli/docker_api_containers_test.go
<ide> import (
<ide> "github.com/docker/docker/pkg/stringid"
<ide> "github.com/docker/engine-api/types"
<ide> containertypes "github.com/docker/engine-api/types/container"
<add> networktypes "github.com/docker/engine-api/types/network"
<ide> "github.com/go-check/check"
<ide> )
<ide>
<ide> func (s *DockerSuite) TestContainerApiCreateEmptyConfig(c *check.C) {
<ide> c.Assert(string(b), checker.Equals, expected)
<ide> }
<ide>
<add>func (s *DockerSuite) TestContainerApiCreateMultipleNetworksConfig(c *check.C) {
<add> // Container creation must fail if client specified configurations for more than one network
<add> config := map[string]interface{}{
<add> "Image": "busybox",
<add> "NetworkingConfig": networktypes.NetworkingConfig{
<add> EndpointsConfig: map[string]*networktypes.EndpointSettings{
<add> "net1": {},
<add> "net2": {},
<add> "net3": {},
<add> },
<add> },
<add> }
<add>
<add> status, b, err := sockRequest("POST", "/containers/create", config)
<add> c.Assert(err, checker.IsNil)
<add> c.Assert(status, checker.Equals, http.StatusBadRequest)
<add> // network name order in error message is not deterministic
<add> c.Assert(string(b), checker.Contains, "Container cannot be connected to [")
<add> c.Assert(string(b), checker.Contains, "net1")
<add> c.Assert(string(b), checker.Contains, "net2")
<add> c.Assert(string(b), checker.Contains, "net3")
<add>}
<add>
<ide> func (s *DockerSuite) TestContainerApiCreateWithHostName(c *check.C) {
<ide> testRequires(c, DaemonIsLinux)
<ide> hostName := "test-host"
| 4
|
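Server-side, the new check is just a size test on the endpoints map, with the error message listing the offending networks (in random order, since Go map iteration is not deterministic). A minimal Python sketch of the same validation (function name hypothetical; error text copied from the patch):

```python
def verify_networking_config(endpoints_config):
    # A container may be connected to at most one network at creation time.
    if not endpoints_config or len(endpoints_config) <= 1:
        return
    names = sorted(endpoints_config)  # fix an order for a stable message
    raise ValueError('Container cannot be connected to %s' % names)

verify_networking_config({'net1': {}})  # a single network is accepted
```
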
Python
|
Python
|
implement list_locations & list_images
|
1271ee438f631c57867958fba530df52a08e904c
|
<ide><path>libcloud/compute/drivers/vsphere.py
<ide> from libcloud.common.types import InvalidCredsError
<ide> from libcloud.compute.base import NodeDriver
<ide> from libcloud.compute.base import Node
<add>from libcloud.compute.base import NodeImage, NodeLocation
<ide> from libcloud.compute.types import NodeState, Provider
<ide> from libcloud.utils.networking import is_public_subnet
<ide>
<ide> def list_locations(self):
<ide> """
<ide> Lists locations
<ide> """
<del> return []
<add> #datacenters = [dc for dc in
<add> # content.viewManager.CreateContainerView(
<add> # content.rootFolder,
<add> # [vim.Datacenter],
<add> # recursive=True
<add> # ).view
<add> #]
<add>
<add>        # TODO: Clusters should be selectable as locations if DRS is enabled;
<add>        # check the property cluster.configuration.drsConfig.enabled
<add>
<add> #clusters = [dc for dc in
<add> # content.viewManager.CreateContainerView(
<add> # content.rootFolder,
<add> # [vim.ClusterComputeResource, vim.HostSystem],
<add> # recursive=True
<add> # ).view
<add> #]
<add>
<add> locations = []
<add> content = self.connection.RetrieveContent()
<add> hosts = content.viewManager.CreateContainerView(
<add> content.rootFolder,
<add> [vim.HostSystem],
<add> recursive=True
<add> ).view
<ide>
<del> def list_sizes(self):
<add> for host in hosts:
<add> locations.append(self._to_location(host))
<add>
<add> return locations
<add>
<add> def _to_location(self, data):
<add> extra = {
<add> "state": data.runtime.connectionState,
<add> "type": data.config.product.fullName,
<add> "vendor": data.hardware.systemInfo.vendor,
<add> "model": data.hardware.systemInfo.model,
<add> "ram": data.hardware.memorySize,
<add> "cpu": {
<add> "packages": data.hardware.cpuInfo.numCpuPackages,
<add> "cores": data.hardware.cpuInfo.numCpuCores,
<add> "threads": data.hardware.cpuInfo.numCpuThreads,
<add> },
<add> "uptime": data.summary.quickStats.uptime
<add> }
<add>
<add> return NodeLocation(id=data.name, name=data.name, country=None,
<add> extra=extra, driver=self)
<add>
<add> def ex_list_networks(self):
<add> """
<add> List networks
<add> """
<add> content = self.connection.RetrieveContent()
<add> networks = content.viewManager.CreateContainerView(
<add> content.rootFolder,
<add> [vim.Network],
<add> recursive=True
<add> ).view
<add>
<add> return [self._to_network(network) for network in networks]
<add>
<add> def _to_network(self, data):
<add> summary = data.summary
<add> extra = {
<add> 'hosts': [h.name for h in data.host],
<add> 'ip_pool_name': summary.ipPoolName,
<add> 'ip_pool_id': summary.ipPoolId,
<add> 'accessible': summary.accessible
<add> }
<add> return VSphereNetwork(id=data.name, name=data.name, extra=extra)
<add>
<add> def list_sizes(self, location=None):
<ide> """
<ide> Lists sizes
<ide> """
<ide> return []
<ide>
<del> def list_images(self):
<add> def list_images(self, location=None):
<ide> """
<del> Lists images
<add> Lists VM templates as images
<ide> """
<del> return []
<add>
<add> images = []
<add> content = self.connection.RetrieveContent()
<add> vms = content.viewManager.CreateContainerView(
<add> content.rootFolder,
<add> [vim.VirtualMachine],
<add> recursive=True
<add> ).view
<add>
<add> for vm in vms:
<add> if vm.config.template:
<add> images.append(self._to_image(vm))
<add>
<add> return images
<add>
<add> def _to_image(self, data):
<add> summary = data.summary
<add> name = summary.config.name
<add> uuid = summary.config.instanceUuid
<add> memory = summary.config.memorySizeMB
<add> cpus = summary.config.numCpu
<add> operating_system = summary.config.guestFullName
<add> os_type = 'unix'
<add> if 'Microsoft' in str(operating_system):
<add> os_type = 'windows'
<add> annotation = summary.config.annotation
<add> extra = {
<add> "path": summary.config.vmPathName,
<add> "operating_system": operating_system,
<add> "os_type": os_type,
<add> "memory_MB": memory,
<add> "cpus": cpus,
<add> "overallStatus": str(summary.overallStatus),
<add> "metadata": {}
<add> }
<add>
<add> boot_time = summary.runtime.bootTime
<add> if boot_time:
<add> extra['boot_time'] = boot_time.isoformat()
<add> if annotation:
<add> extra['annotation'] = annotation
<add>
<add> for custom_field in data.customValue:
<add> key_id = custom_field.key
<add> key = self.find_custom_field_key(key_id)
<add> extra["metadata"][key] = custom_field.value
<add>
<add> return NodeImage(id=uuid, name=name, driver=self,
<add> extra=extra)
<ide>
<ide> def list_nodes(self):
<ide> """
<del> Lists nodes
<add> Lists nodes, excluding templates
<ide> """
<ide> nodes = []
<ide> content = self.connection.RetrieveContent()
<ide> def list_nodes(self):
<ide> def _to_nodes(self, vm_list):
<ide> nodes = []
<ide> for virtual_machine in vm_list:
<add> if virtual_machine.config.template:
<add> continue # Do not include templates in node list
<ide> if hasattr(virtual_machine, 'childEntity'):
<ide> # If this is a group it will have children.
<ide> # If it does, recurse into them and then return
<ide> def _to_node(self, virtual_machine):
<ide> if summary.guest is not None:
<ide> ip_addresses.append(summary.guest.ipAddress)
<ide>
<del> overallStatus = str(summary.overallStatus)
<add> overall_status = str(summary.overallStatus)
<ide> public_ips = []
<ide> private_ips = []
<ide>
<ide> def _to_node(self, virtual_machine):
<ide> "os_type": os_type,
<ide> "memory_MB": memory,
<ide> "cpus": cpus,
<del> "overallStatus": overallStatus,
<add> "overallStatus": overall_status,
<ide> "metadata": {}
<ide> }
<ide>
<ide> def find_custom_field_key(self, key_id):
<ide> if k.key == key_id:
<ide> return k.name
<ide> return None
<add>
<add>class VSphereNetwork(object):
<add> """
<add>    Represents information about a vSphere network.
<add>
<add>    Note: This class is vSphere specific.
<add> """
<add>
<add> def __init__(self, id, name, extra=None):
<add> self.id = id
<add> self.name = name
<add> self.extra = extra or {}
<add>
<add> def __repr__(self):
<add>        return (('<VSphereNetwork: id=%s, name=%s>')
<add> % (self.id, self.name))
<ide>\ No newline at end of file
| 1
|
Python
|
Python
|
fix concat bp and more efficient batch calls
|
2fa3fac8512c1ed102a64017123246ca156cfef5
|
<ide><path>examples/pipeline/wiki_entity_linking/train_el.py
<ide> def train_model(self, training_dir, entity_descr_output, trainlimit=None, devlim
<ide> # raise errors instead of runtime warnings in case of int/float overflow
<ide> np.seterr(all='raise')
<ide>
<del> Doc.set_extension("entity_id", default=None)
<del>
<del> train_inst, train_pos, train_neg, train_doc = self._get_training_data(training_dir,
<del> entity_descr_output,
<del> False,
<del> trainlimit,
<del> to_print=False)
<del>
<del> dev_inst, dev_pos, dev_neg, dev_doc = self._get_training_data(training_dir,
<del> entity_descr_output,
<del> True,
<del> devlimit,
<del> to_print=False)
<add> train_inst, train_pos, train_neg, train_texts = self._get_training_data(training_dir,
<add> entity_descr_output,
<add> False,
<add> trainlimit,
<add> to_print=False)
<add>
<add> dev_inst, dev_pos, dev_neg, dev_texts = self._get_training_data(training_dir,
<add> entity_descr_output,
<add> True,
<add> devlimit,
<add> to_print=False)
<ide> self._begin_training()
<ide>
<ide> print()
<del> self._test_dev(train_inst, train_pos, train_neg, train_doc, print_string="train_random", calc_random=True)
<del> self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_random", calc_random=True)
<add> self._test_dev(train_inst, train_pos, train_neg, train_texts, print_string="train_random", calc_random=True)
<add> self._test_dev(dev_inst, dev_pos, dev_neg, dev_texts, print_string="dev_random", calc_random=True)
<ide> print()
<del> self._test_dev(train_inst, train_pos, train_neg, train_doc, print_string="train_pre", avg=False)
<del> self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_pre", avg=False)
<add> self._test_dev(train_inst, train_pos, train_neg, train_texts, print_string="train_pre", avg=False)
<add> self._test_dev(dev_inst, dev_pos, dev_neg, dev_texts, print_string="dev_pre", avg=False)
<ide>
<ide> instance_pos_count = 0
<ide> instance_neg_count = 0
<ide> def train_model(self, training_dir, entity_descr_output, trainlimit=None, devlim
<ide> # print()
<ide> # print(article_count, "Training on article", article_id)
<ide> article_count += 1
<del> article_docs = list()
<add> article_text = train_texts[article_id]
<ide> entities = list()
<ide> golds = list()
<ide> for inst_cluster in inst_cluster_set:
<del> article_docs.append(train_doc[article_id])
<ide> entities.append(train_pos.get(inst_cluster))
<ide> golds.append(float(1.0))
<ide> instance_pos_count += 1
<ide> for neg_entity in train_neg.get(inst_cluster, []):
<del> article_docs.append(train_doc[article_id])
<ide> entities.append(neg_entity)
<ide> golds.append(float(0.0))
<ide> instance_neg_count += 1
<ide>
<del> self.update(article_docs=article_docs, entities=entities, golds=golds)
<add> self.update(article_text=article_text, entities=entities, golds=golds)
<ide>
<ide> # dev eval
<del> # self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_inter", avg=False)
<del> self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_inter_avg", avg=True)
<del> print()
<add> self._test_dev(dev_inst, dev_pos, dev_neg, dev_texts, print_string="dev_inter_avg", avg=True)
<ide> except ValueError as e:
<ide> print("Error in article id", article_id)
<ide>
<ide> def train_model(self, training_dir, entity_descr_output, trainlimit=None, devlim
<ide> print("Trained on", instance_pos_count, "/", instance_neg_count, "instances pos/neg")
<ide>
<ide> if self.PRINT_TRAIN:
<del> # print()
<del> # self._test_dev(train_inst, train_pos, train_neg, train_doc, print_string="train_post", avg=False)
<del> self._test_dev(train_inst, train_pos, train_neg, train_doc, print_string="train_post_avg", avg=True)
<del> # self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_post", avg=False)
<del> # self._test_dev(dev_inst, dev_pos, dev_neg, dev_doc, print_string="dev_post_avg", avg=True)
<add> self._test_dev(train_inst, train_pos, train_neg, train_texts, print_string="train_post_avg", avg=True)
<ide>
<del> def _test_dev(self, instances, pos, neg, doc, print_string, avg=False, calc_random=False):
<add> def _test_dev(self, instances, pos, neg, texts_by_id, print_string, avg=False, calc_random=False):
<ide> predictions = list()
<ide> golds = list()
<ide>
<ide> def _test_dev(self, instances, pos, neg, doc, print_string, avg=False, calc_rand
<ide>
<ide> article = inst_cluster.split(sep="_")[0]
<ide> entity_id = inst_cluster.split(sep="_")[1]
<del> article_doc = doc[article]
<del>
<del> if calc_random:
<del> prediction = self._predict_random(entity=pos_ex)
<del> else:
<del> prediction = self._predict(article_doc=article_doc, entity=pos_ex, avg=avg)
<del> predictions.append(prediction)
<add> article_doc = self.nlp(texts_by_id[article])
<add> entities = [self.nlp(pos_ex)]
<ide> golds.append(float(1.0))
<del>
<ide> for neg_ex in neg_exs:
<del> if calc_random:
<del> prediction = self._predict_random(entity=neg_ex)
<del> else:
<del> prediction = self._predict(article_doc=article_doc, entity=neg_ex, avg=avg)
<del> predictions.append(prediction)
<add> entities.append(self.nlp(neg_ex))
<ide> golds.append(float(0.0))
<ide>
<add> if calc_random:
<add> preds = self._predict_random(entities=entities)
<add> else:
<add> preds = self._predict(article_doc=article_doc, entities=entities, avg=avg)
<add> predictions.extend(preds)
<add>
<ide> # TODO: combine with prior probability
<ide> p, r, f = run_el.evaluate(predictions, golds, to_print=False)
<ide> if self.PRINT_F:
<ide> def _test_dev(self, instances, pos, neg, doc, print_string, avg=False, calc_rand
<ide>
<ide> return loss, p, r, f
<ide>
<del> def _predict(self, article_doc, entity, avg=False, apply_threshold=True):
<add> def _predict(self, article_doc, entities, avg=False, apply_threshold=True):
<ide> if avg:
<ide> with self.article_encoder.use_params(self.sgd_article.averages) \
<ide> and self.entity_encoder.use_params(self.sgd_entity.averages):
<ide> doc_encoding = self.article_encoder([article_doc])[0]
<del> entity_encoding = self.entity_encoder([entity])[0]
<add> entity_encodings = self.entity_encoder(entities)
<ide>
<ide> else:
<ide> doc_encoding = self.article_encoder([article_doc])[0]
<del> entity_encoding = self.entity_encoder([entity])[0]
<add> entity_encodings = self.entity_encoder(entities)
<ide>
<del> concat_encoding = list(entity_encoding) + list(doc_encoding)
<del> np_array = np.asarray([concat_encoding])
<add> concat_encodings = [list(entity_encodings[i]) + list(doc_encoding) for i in range(len(entities))]
<add> np_array_list = np.asarray(concat_encodings)
<ide>
<ide> if avg:
<del> with self.model.use_params(self.sgd.averages):
<del> prediction = self.model(np_array)
<add> with self.model.use_params(self.sgd.averages):
<add> predictions = self.model(np_array_list)
<ide> else:
<del> prediction = self.model(np_array)
<add> predictions = self.model(np_array_list)
<ide>
<del> if not apply_threshold:
<del> return float(prediction)
<del> if prediction > self.CUTOFF:
<del> return float(1.0)
<del> return float(0.0)
<add> predictions = self.model.ops.flatten(predictions)
<add> predictions = [float(p) for p in predictions]
<add> if apply_threshold:
<add> predictions = [float(1.0) if p > self.CUTOFF else float(0.0) for p in predictions]
<add>
<add> return predictions
<ide>
<del> def _predict_random(self, entity, apply_threshold=True):
<del> r = random.uniform(0, 1)
<add> def _predict_random(self, entities, apply_threshold=True):
<ide> if not apply_threshold:
<del> return r
<del> if r > self.CUTOFF:
<del> return float(1.0)
<del> return float(0.0)
<add> return [float(random.uniform(0,1)) for e in entities]
<add> else:
<add> return [float(1.0) if random.uniform(0,1) > self.CUTOFF else float(0.0) for e in entities]
<ide>
<ide> def _build_cnn(self, hidden_entity_width, hidden_article_width):
<ide> with Model.define_operators({">>": chain, "|": concatenate, "**": clone}):
<ide> def get_loss(predictions, golds):
<ide> loss = (d_scores ** 2).sum()
<ide> return loss, d_scores
<ide>
<del> def update(self, article_docs, entities, golds, apply_threshold=True):
<del> doc_encodings, bp_doc = self.article_encoder.begin_update(article_docs, drop=self.DROP)
<del> # print("doc_encodings", len(doc_encodings), doc_encodings)
<add> # TODO: multiple docs/articles
<add> def update(self, article_text, entities, golds, apply_threshold=True):
<add> article_doc = self.nlp(article_text)
<add> doc_encodings, bp_doc = self.article_encoder.begin_update([article_doc], drop=self.DROP)
<add> doc_encoding = doc_encodings[0]
<add>
<add> entity_docs = list(self.nlp.pipe(entities))
<add> # print("entity_docs", type(entity_docs))
<ide>
<del> entity_encodings, bp_entity = self.entity_encoder.begin_update(entities, drop=self.DROP)
<add> entity_encodings, bp_entity = self.entity_encoder.begin_update(entity_docs, drop=self.DROP)
<ide> # print("entity_encodings", len(entity_encodings), entity_encodings)
<ide>
<del> concat_encodings = [list(entity_encodings[i]) + list(doc_encodings[i]) for i in range(len(entities))]
<add> concat_encodings = [list(entity_encodings[i]) + list(doc_encoding) for i in range(len(entities))]
<ide> # print("concat_encodings", len(concat_encodings), concat_encodings)
<ide>
<ide> predictions, bp_model = self.model.begin_update(np.asarray(concat_encodings), drop=self.DROP)
<ide> predictions = self.model.ops.flatten(predictions)
<add>
<ide> # print("predictions", predictions)
<ide> golds = self.model.ops.asarray(golds)
<add> # print("golds", golds)
<ide>
<ide> loss, d_scores = self.get_loss(predictions, golds)
<ide>
<ide> def update(self, article_docs, entities, golds, apply_threshold=True):
<ide> if self.PRINT_F and self.PRINT_TRAIN:
<ide> predictions_f = [x for x in predictions]
<ide> if apply_threshold:
<del> predictions_f = [1.0 if x > self.CUTOFF else 0.0 for x in predictions_f]
<add> predictions_f = [float(1.0) if x > self.CUTOFF else float(0.0) for x in predictions_f]
<ide> p, r, f = run_el.evaluate(predictions_f, golds, to_print=False)
<ide> print("p/r/F train", round(p, 1), round(r, 1), round(f, 1))
<ide>
<ide> def update(self, article_docs, entities, golds, apply_threshold=True):
<ide> model_gradient = bp_model(d_scores, sgd=self.sgd)
<ide> # print("model_gradient", model_gradient)
<ide>
<del> doc_gradient = list()
<del> entity_gradient = list()
<add> # concat = entity + doc, but doc is the same within this function (TODO: multiple docs/articles)
<add> doc_gradient = model_gradient[0][self.ENTITY_WIDTH:]
<add> entity_gradients = list()
<ide> for x in model_gradient:
<del> doc_gradient.append(list(x[0:self.ARTICLE_WIDTH]))
<del> entity_gradient.append(list(x[self.ARTICLE_WIDTH:]))
<add> entity_gradients.append(list(x[0:self.ENTITY_WIDTH]))
<ide>
<ide> # print("doc_gradient", doc_gradient)
<del> # print("entity_gradient", entity_gradient)
<add> # print("entity_gradients", entity_gradients)
<ide>
<del> bp_doc(doc_gradient, sgd=self.sgd_article)
<del> bp_entity(entity_gradient, sgd=self.sgd_entity)
<add> bp_doc([doc_gradient], sgd=self.sgd_article)
<add> bp_entity(entity_gradients, sgd=self.sgd_entity)
<ide>
<ide> def _get_training_data(self, training_dir, entity_descr_output, dev, limit, to_print):
<ide> id_to_descr = kb_creator._get_id_to_description(entity_descr_output)
<ide> def _get_training_data(self, training_dir, entity_descr_output, dev, limit, to_p
<ide> collect_correct=True,
<ide> collect_incorrect=True)
<ide>
<del> instance_by_doc = dict()
<add> instance_by_article = dict()
<ide> local_vectors = list() # TODO: local vectors
<del> doc_by_article = dict()
<add> text_by_article = dict()
<ide> pos_entities = dict()
<ide> neg_entities = dict()
<ide>
<ide> def _get_training_data(self, training_dir, entity_descr_output, dev, limit, to_p
<ide> if cnt % 500 == 0 and to_print:
<ide> print(datetime.datetime.now(), "processed", cnt, "files in the training dataset")
<ide> cnt += 1
<del> if article_id not in doc_by_article:
<add> if article_id not in text_by_article:
<ide> with open(os.path.join(training_dir, f), mode="r", encoding='utf8') as file:
<ide> text = file.read()
<del> doc = self.nlp(text)
<del> doc_by_article[article_id] = doc
<del> instance_by_doc[article_id] = set()
<add> text_by_article[article_id] = text
<add> instance_by_article[article_id] = set()
<ide>
<ide> for mention, entity_pos in correct_entries[article_id].items():
<ide> descr = id_to_descr.get(entity_pos)
<ide> if descr:
<del> instance_by_doc[article_id].add(article_id + "_" + mention)
<del> doc_descr = self.nlp(descr)
<del> doc_descr._.entity_id = entity_pos
<del> pos_entities[article_id + "_" + mention] = doc_descr
<add> instance_by_article[article_id].add(article_id + "_" + mention)
<add> pos_entities[article_id + "_" + mention] = descr
<ide>
<ide> for mention, entity_negs in incorrect_entries[article_id].items():
<ide> for entity_neg in entity_negs:
<ide> descr = id_to_descr.get(entity_neg)
<ide> if descr:
<del> doc_descr = self.nlp(descr)
<del> doc_descr._.entity_id = entity_neg
<ide> descr_list = neg_entities.get(article_id + "_" + mention, [])
<del> descr_list.append(doc_descr)
<add> descr_list.append(descr)
<ide> neg_entities[article_id + "_" + mention] = descr_list
<ide>
<ide> if to_print:
<ide> print()
<ide> print("Processed", cnt, "training articles, dev=" + str(dev))
<ide> print()
<del> return instance_by_doc, pos_entities, neg_entities, doc_by_article
<add> return instance_by_article, pos_entities, neg_entities, text_by_article
<ide><path>examples/pipeline/wiki_entity_linking/wiki_nel_pipeline.py
<ide> print("STEP 6: training", datetime.datetime.now())
<ide> my_nlp = spacy.load('en_core_web_md')
<ide> trainer = EL_Model(kb=my_kb, nlp=my_nlp)
<del> trainer.train_model(training_dir=TRAINING_DIR, entity_descr_output=ENTITY_DESCR, trainlimit=1000, devlimit=200)
<add> trainer.train_model(training_dir=TRAINING_DIR, entity_descr_output=ENTITY_DESCR, trainlimit=10, devlimit=10)
<ide> print()
<ide>
<ide> # STEP 7: apply the EL algorithm on the dev dataset
| 2
|
Javascript
|
Javascript
|
fix jshint warning
|
1363cbd6b49410a82a969dc3aeae094becc44997
|
<ide><path>test/auto/injectorSpec.js
<ide> describe('injector', function() {
<ide> // Only Chrome and Firefox support this syntax.
<ide> if (/chrome|firefox/i.test(navigator.userAgent)) {
<ide> it('should be possible to annotate functions that are declared using ES6 syntax', function() {
<add> /*jshint -W061 */
<ide> // The function is generated using `eval` as just having the ES6 syntax can break some browsers.
<ide> expect(annotate(eval('({ fn(x) { return; } })').fn)).toEqual(['x']);
<add> /*jshint +W061 */
<ide> });
<ide> }
<ide>
| 1
|
Go
|
Go
|
persist container to disk after rename
|
c5c72cf151b21482b2f27417322342c6d781108c
|
<ide><path>daemon/rename.go
<ide> package daemon
<ide>
<del>import (
<del> "github.com/docker/docker/engine"
<del>)
<add>import "github.com/docker/docker/engine"
<ide>
<ide> func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status {
<ide> if len(job.Args) != 2 {
<ide> func (daemon *Daemon) ContainerRename(job *engine.Job) engine.Status {
<ide>
<ide> container.Name = newName
<ide>
<add> undo := func() {
<add> container.Name = oldName
<add> daemon.reserveName(container.ID, oldName)
<add> daemon.containerGraph.Delete(newName)
<add> }
<add>
<ide> if err := daemon.containerGraph.Delete(oldName); err != nil {
<add> undo()
<ide> return job.Errorf("Failed to delete container %q: %v", oldName, err)
<ide> }
<ide>
<add> if err := container.toDisk(); err != nil {
<add> undo()
<add> return job.Error(err)
<add> }
<add>
<ide> return engine.StatusOK
<ide> }
<ide><path>integration-cli/docker_cli_daemon_test.go
<ide> func TestDaemonUlimitDefaults(t *testing.T) {
<ide>
<ide> logDone("daemon - default ulimits are applied")
<ide> }
<add>
<add>// #11315
<add>func TestDaemonRestartRenameContainer(t *testing.T) {
<add> d := NewDaemon(t)
<add> if err := d.StartWithBusybox(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> if out, err := d.Cmd("run", "--name=test", "busybox"); err != nil {
<add> t.Fatal(err, out)
<add> }
<add>
<add> if out, err := d.Cmd("rename", "test", "test2"); err != nil {
<add> t.Fatal(err, out)
<add> }
<add>
<add> if err := d.Restart(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> if out, err := d.Cmd("start", "test2"); err != nil {
<add> t.Fatal(err, out)
<add> }
<add>
<add> logDone("daemon - rename persists through daemon restart")
<add>}
| 2
|
Javascript
|
Javascript
|
hold maxmipmaplevel in textureproperties
|
33ecb129e2193b9f56c93cacabe20dc01b00167d
|
<ide><path>src/renderers/WebGLRenderer.js
<ide> function WebGLRenderer( parameters ) {
<ide> uniforms.reflectivity.value = material.reflectivity;
<ide> uniforms.refractionRatio.value = material.refractionRatio;
<ide>
<del> uniforms.maxMipLevel.value = material.envMap.maxMipLevel;
<add> uniforms.maxMipLevel.value = properties.get( material.envMap ).__maxMipLevel;
<ide>
<ide> }
<ide>
<ide><path>src/renderers/webgl/WebGLTextures.js
<ide> function WebGLTextures( _gl, extensions, state, properties, capabilities, utils,
<ide> _gl.generateMipmap( target );
<ide>
<ide> var image = Array.isArray( texture.image ) ? texture.image[ 0 ] : texture.image;
<del> texture.maxMipLevel = Math.max( Math.log2( Math.max( image.width, image.height ) ), texture.maxMipLevel );
<add> var textureProperties = properties.get( texture );
<add> textureProperties.__maxMipLevel = Math.max( Math.log2( Math.max( image.width, image.height ) ), textureProperties.__maxMipLevel );
<ide>
<ide> }
<ide>
<ide> function WebGLTextures( _gl, extensions, state, properties, capabilities, utils,
<ide>
<ide> if ( ! isCompressed ) {
<ide>
<del> texture.maxMipLevel = 0;
<add> textureProperties.__maxMipLevel = 0;
<ide>
<ide> } else {
<ide>
<del> texture.maxMipLevel = mipmaps.length - 1;
<add> textureProperties.__maxMipLevel = mipmaps.length - 1;
<ide>
<ide> }
<ide>
<ide> function WebGLTextures( _gl, extensions, state, properties, capabilities, utils,
<ide> }
<ide>
<ide> texture.generateMipmaps = false;
<del> texture.maxMipLevel = mipmaps.length - 1;
<add> textureProperties.__maxMipLevel = mipmaps.length - 1;
<ide>
<ide> } else {
<ide>
<ide> state.texImage2D( _gl.TEXTURE_2D, 0, glFormat, image.width, image.height, 0, glFormat, glType, image.data );
<del> texture.maxMipLevel = 0;
<add> textureProperties.__maxMipLevel = 0;
<ide>
<ide> }
<ide>
<ide> function WebGLTextures( _gl, extensions, state, properties, capabilities, utils,
<ide>
<ide> }
<ide>
<del> texture.maxMipLevel = mipmaps.length - 1;
<add> textureProperties.__maxMipLevel = mipmaps.length - 1;
<ide>
<ide> } else {
<ide>
<ide> function WebGLTextures( _gl, extensions, state, properties, capabilities, utils,
<ide> }
<ide>
<ide> texture.generateMipmaps = false;
<del> texture.maxMipLevel = mipmaps.length - 1;
<add> textureProperties.__maxMipLevel = mipmaps.length - 1;
<ide>
<ide> } else {
<ide>
<ide> state.texImage2D( _gl.TEXTURE_2D, 0, glFormat, glFormat, glType, image );
<del> texture.maxMipLevel = 0;
<add> textureProperties.__maxMipLevel = 0;
<ide>
<ide> }
<ide>
| 2
|
Go
|
Go
|
delete a useless variable
|
87cada6d7f62589ce77dc0557ad97f4140b49c70
|
<ide><path>volume/drivers/adapter.go
<ide> import (
<ide> )
<ide>
<ide> var (
<del> errInvalidScope = errors.New("invalid scope")
<ide> errNoSuchVolume = errors.New("no such volume")
<ide> )
<ide>
| 1
|
Go
|
Go
|
add more tests for pkg/chrootarchive
|
a08048d5c835f1558fbdbac2f7d833552e13d979
|
<ide><path>pkg/chrootarchive/archive_test.go
<ide> package chrootarchive
<ide>
<ide> import (
<add> "bytes"
<add> "fmt"
<ide> "io"
<ide> "io/ioutil"
<ide> "os"
<add> "path"
<ide> "path/filepath"
<ide> "testing"
<ide> "time"
<ide> func TestChrootTarUntar(t *testing.T) {
<ide> }
<ide> }
<ide>
<add>func TestChrootUntarEmptyArchive(t *testing.T) {
<add> tmpdir, err := ioutil.TempDir("", "docker-TestChrootUntarEmptyArchive")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tmpdir)
<add> if err := Untar(nil, tmpdir, nil); err == nil {
<add> t.Fatal("expected error on empty archive")
<add> }
<add>}
<add>
<add>func prepareSourceDirectory(numberOfFiles int, targetPath string, makeLinks bool) (int, error) {
<add> fileData := []byte("fooo")
<add> for n := 0; n < numberOfFiles; n++ {
<add> fileName := fmt.Sprintf("file-%d", n)
<add> if err := ioutil.WriteFile(path.Join(targetPath, fileName), fileData, 0700); err != nil {
<add> return 0, err
<add> }
<add> if makeLinks {
<add> if err := os.Link(path.Join(targetPath, fileName), path.Join(targetPath, fileName+"-link")); err != nil {
<add> return 0, err
<add> }
<add> }
<add> }
<add> totalSize := numberOfFiles * len(fileData)
<add> return totalSize, nil
<add>}
<add>
<add>func TestChrootTarUntarWithSoftLink(t *testing.T) {
<add> tmpdir, err := ioutil.TempDir("", "docker-TestChrootTarUntarWithSoftLink")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tmpdir)
<add> src := filepath.Join(tmpdir, "src")
<add> if err := os.MkdirAll(src, 0700); err != nil {
<add> t.Fatal(err)
<add> }
<add> if _, err := prepareSourceDirectory(10, src, true); err != nil {
<add> t.Fatal(err)
<add> }
<add> dest := filepath.Join(tmpdir, "dest")
<add> if err := TarUntar(src, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add>}
<add>
<add>func TestChrootCopyWithTar(t *testing.T) {
<add> tmpdir, err := ioutil.TempDir("", "docker-TestChrootCopyWithTar")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tmpdir)
<add> src := filepath.Join(tmpdir, "src")
<add> if err := os.MkdirAll(src, 0700); err != nil {
<add> t.Fatal(err)
<add> }
<add> if _, err := prepareSourceDirectory(10, src, true); err != nil {
<add> t.Fatal(err)
<add> }
<add> dest := filepath.Join(tmpdir, "dest")
<add> // Copy directory
<add> if err := CopyWithTar(src, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add> // Copy file
<add> srcfile := filepath.Join(src, "file-1")
<add> if err := CopyWithTar(srcfile, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add> // Copy symbolic link
<add> linkfile := filepath.Join(src, "file-1-link")
<add> if err := CopyWithTar(linkfile, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add>}
<add>
<add>func TestChrootCopyFileWithTar(t *testing.T) {
<add> tmpdir, err := ioutil.TempDir("", "docker-TestChrootCopyFileWithTar")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tmpdir)
<add> src := filepath.Join(tmpdir, "src")
<add> if err := os.MkdirAll(src, 0700); err != nil {
<add> t.Fatal(err)
<add> }
<add> if _, err := prepareSourceDirectory(10, src, true); err != nil {
<add> t.Fatal(err)
<add> }
<add> dest := filepath.Join(tmpdir, "dest")
<add> // Copy directory
<add> if err := CopyFileWithTar(src, dest); err == nil {
<add> 		t.Fatal("Expected error on copying a directory")
<add> }
<add> // Copy file
<add> srcfile := filepath.Join(src, "file-1")
<add> if err := CopyFileWithTar(srcfile, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add> // Copy symbolic link
<add> linkfile := filepath.Join(src, "file-1-link")
<add> if err := CopyFileWithTar(linkfile, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add>}
<add>
<add>func TestChrootUntarPath(t *testing.T) {
<add> tmpdir, err := ioutil.TempDir("", "docker-TestChrootUntarPath")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> defer os.RemoveAll(tmpdir)
<add> src := filepath.Join(tmpdir, "src")
<add> if err := os.MkdirAll(src, 0700); err != nil {
<add> t.Fatal(err)
<add> }
<add> if _, err := prepareSourceDirectory(10, src, true); err != nil {
<add> t.Fatal(err)
<add> }
<add> dest := filepath.Join(tmpdir, "dest")
<add> // Untar a directory
<add> if err := UntarPath(src, dest); err == nil {
<add> 		t.Fatal("Expected error on untarring a directory")
<add> }
<add>
<add> // Untar a tar file
<add> stream, err := archive.Tar(src, archive.Uncompressed)
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> buf := new(bytes.Buffer)
<add> buf.ReadFrom(stream)
<add> tarfile := filepath.Join(tmpdir, "src.tar")
<add> if err := ioutil.WriteFile(tarfile, buf.Bytes(), 0644); err != nil {
<add> t.Fatal(err)
<add> }
<add> if err := UntarPath(tarfile, dest); err != nil {
<add> t.Fatal(err)
<add> }
<add>}
<add>
<ide> type slowEmptyTarReader struct {
<ide> size int
<ide> offset int
| 1
|
Java
|
Java
|
introduce "synthesizable" cache in annotationutils
|
bd787769be49fb935648330daa68bece2775eeea
|
<ide><path>spring-core/src/main/java/org/springframework/core/annotation/AnnotationUtils.java
<ide> public abstract class AnnotationUtils {
<ide> private static final Map<Class<?>, Boolean> annotatedInterfaceCache =
<ide> new ConcurrentReferenceHashMap<Class<?>, Boolean>(256);
<ide>
<add> private static final Map<Class<? extends Annotation>, Boolean> synthesizableCache =
<add> new ConcurrentReferenceHashMap<Class<? extends Annotation>, Boolean>(256);
<add>
<ide> private static transient Log logger;
<ide>
<ide>
<ide> private static Map<String, String> getAliasMap(Class<? extends Annotation> annot
<ide> @SuppressWarnings("unchecked")
<ide> private static boolean isSynthesizable(Class<? extends Annotation> annotationType) {
<ide>
<add> Boolean flag = synthesizableCache.get(annotationType);
<add> if (flag != null) {
<add> return flag.booleanValue();
<add> }
<add>
<add> Boolean synthesizable = Boolean.FALSE;
<add>
<ide> for (Method attribute : getAttributeMethods(annotationType)) {
<ide> if (getAliasedAttributeName(attribute) != null) {
<del> return true;
<add> synthesizable = Boolean.TRUE;
<add> break;
<ide> }
<ide>
<ide> Class<?> returnType = attribute.getReturnType();
<ide>
<ide> if (Annotation[].class.isAssignableFrom(returnType)) {
<ide> Class<? extends Annotation> nestedAnnotationType = (Class<? extends Annotation>) returnType.getComponentType();
<ide> if (isSynthesizable(nestedAnnotationType)) {
<del> return true;
<add> synthesizable = Boolean.TRUE;
<add> break;
<ide> }
<ide> }
<ide> else if (Annotation.class.isAssignableFrom(returnType)) {
<ide> Class<? extends Annotation> nestedAnnotationType = (Class<? extends Annotation>) returnType;
<ide> if (isSynthesizable(nestedAnnotationType)) {
<del> return true;
<add> synthesizable = Boolean.TRUE;
<add> break;
<ide> }
<ide> }
<ide> }
<ide>
<del> return false;
<add> synthesizableCache.put(annotationType, synthesizable);
<add>
<add> return synthesizable.booleanValue();
<ide> }
<ide>
<ide> /**
| 1
|
Text
|
Text
|
fix a function name in repl.md
|
224cbf8bd8600c5ac9d3241fad4de74a72d0419f
|
<ide><path>doc/api/repl.md
<ide> multi-line input, the eval function can return an instance of `repl.Recoverable`
<ide> to the provided callback function:
<ide>
<ide> ```js
<del>function eval(cmd, context, filename, callback) {
<add>function myEval(cmd, context, filename, callback) {
<ide> let result;
<ide> try {
<ide> result = vm.runInThisContext(cmd);
| 1
|
Python
|
Python
|
update pavement.py to work with versioneer
|
ec90d3c175e6cb1efd0ab8172b909f6fde8c277d
|
<ide><path>pavement.py
<ide> import sys
<ide> import shutil
<ide> import hashlib
<add>import textwrap
<ide>
<ide> # The paver package needs to be installed to run tasks
<ide> import paver
<ide> installersdir=os.path.join("release", "installers")),)
<ide>
<ide>
<del>#-----------------------------
<del># Generate the release version
<del>#-----------------------------
<add>#------------------------
<add># Get the release version
<add>#------------------------
<ide>
<ide> sys.path.insert(0, os.path.dirname(__file__))
<ide> try:
<del> setup_py = __import__("setup")
<del> FULLVERSION = setup_py.VERSION
<del> # This is duplicated from setup.py
<del> if os.path.exists('.git'):
<del> GIT_REVISION = setup_py.git_version()
<del> elif os.path.exists('numpy/version.py'):
<del> # must be a source distribution, use existing version file
<del> from numpy.version import git_revision as GIT_REVISION
<del> else:
<del> GIT_REVISION = "Unknown"
<del>
<del> if not setup_py.ISRELEASED:
<del> FULLVERSION += '.dev0+' + GIT_REVISION[:7]
<add> from setup import FULLVERSION
<ide> finally:
<ide> sys.path.pop(0)
<ide>
<ide> def write_release_task(options, filename='README'):
<ide> with open(notes) as fnotes:
<ide> freadme.write(fnotes.read())
<ide>
<del> freadme.writelines("""
<del>Checksums
<del>=========
<add> freadme.writelines(textwrap.dedent(
<add> """
<add> Checksums
<add> =========
<ide>
<del>MD5
<del>---
<del>::
<add> MD5
<add> ---
<add> ::
<ide>
<del>""")
<add> """))
<ide> freadme.writelines([f' {c}\n' for c in compute_md5(idirs)])
<del> freadme.writelines("""
<del>SHA256
<del>------
<del>::
<ide>
<del>""")
<add> freadme.writelines(textwrap.dedent(
<add> """
<add> SHA256
<add> ------
<add> ::
<add>
<add> """))
<ide> freadme.writelines([f' {c}\n' for c in compute_sha256(idirs)])
<ide>
<ide> # generate md file using pandoc before signing
| 1
|
Ruby
|
Ruby
|
remove redundant variable
|
da4db35acda825828809902e9bcccae020454ccb
|
<ide><path>activesupport/lib/active_support/callbacks.rb
<ide> def next_id
<ide> end
<ide>
<ide> def matches?(_kind, _filter)
<del> _filter_matches = false
<del>
<ide> if @_is_object_filter
<ide> _filter_matches = @filter.to_s.start_with?(_method_name_for_object_filter(_kind, _filter, false))
<ide> else
| 1
|
Python
|
Python
|
add missing args to `airflow clear`
|
6706babfd404173768b00f6add1b27d66d31a7e0
|
<ide><path>airflow/bin/cli.py
<ide> class CLIFactory(object):
<ide> 'help': "Clear a set of task instance, as if they never ran",
<ide> 'args': (
<ide> 'dag_id', 'task_regex', 'start_date', 'end_date', 'subdir',
<del> 'upstream', 'downstream', 'no_confirm'),
<add> 'upstream', 'downstream', 'no_confirm', 'only_failed',
<add> 'only_running'),
<ide> }, {
<ide> 'func': pause,
<ide> 'help': "Pause a DAG",
| 1
|
Javascript
|
Javascript
|
use consistent coding style in assert/*
|
34189563496a7d418382ac402241d4970b32b78b
|
<ide><path>benchmark/assert/deepequal-buffer.js
<ide> const bench = common.createBenchmark(main, {
<ide> n: [2e4],
<ide> len: [1e2, 1e3],
<ide> strict: [0, 1],
<del> method: [
<del> 'deepEqual',
<del> 'notDeepEqual'
<del> ]
<add> method: [ 'deepEqual', 'notDeepEqual' ],
<ide> });
<ide>
<ide> function main({ len, n, method, strict }) {
<ide><path>benchmark/assert/deepequal-map.js
<ide> const bench = common.createBenchmark(main, {
<ide> 'deepEqual_mixed',
<ide> 'notDeepEqual_primitiveOnly',
<ide> 'notDeepEqual_objectOnly',
<del> 'notDeepEqual_mixed'
<del> ]
<add> 'notDeepEqual_mixed',
<add> ],
<ide> });
<ide>
<ide> function benchmark(method, n, values, values2) {
<ide><path>benchmark/assert/deepequal-object.js
<ide> const bench = common.createBenchmark(main, {
<ide> n: [5e3],
<ide> size: [1e2, 1e3, 5e4],
<ide> strict: [0, 1],
<del> method: [
<del> 'deepEqual',
<del> 'notDeepEqual'
<del> ]
<add> method: [ 'deepEqual', 'notDeepEqual' ],
<ide> });
<ide>
<ide> function createObj(source, add = '') {
<ide> function createObj(source, add = '') {
<ide> a: [1, 2, 3],
<ide> baz: n,
<ide> c: {},
<del> b: []
<del> }
<add> b: [],
<add> },
<ide> }));
<ide> }
<ide>
<ide><path>benchmark/assert/deepequal-prims-and-objs-big-array-set.js
<ide> const primValues = {
<ide> 'string': 'a',
<ide> 'number': 1,
<ide> 'object': { 0: 'a' },
<del> 'array': [1, 2, 3]
<add> 'array': [1, 2, 3],
<ide> };
<ide>
<ide> const bench = common.createBenchmark(main, {
<ide> const bench = common.createBenchmark(main, {
<ide> 'deepEqual_Array',
<ide> 'notDeepEqual_Array',
<ide> 'deepEqual_Set',
<del> 'notDeepEqual_Set'
<del> ]
<add> 'notDeepEqual_Set',
<add> ],
<ide> });
<ide>
<ide> function run(fn, n, actual, expected) {
<ide><path>benchmark/assert/deepequal-prims-and-objs-big-loop.js
<ide> const primValues = {
<ide> 'string': 'a',
<ide> 'number': 1,
<ide> 'object': { 0: 'a' },
<del> 'array': [1, 2, 3]
<add> 'array': [1, 2, 3],
<ide> };
<ide>
<ide> const bench = common.createBenchmark(main, {
<ide> primitive: Object.keys(primValues),
<ide> n: [2e4],
<ide> strict: [0, 1],
<del> method: [
<del> 'deepEqual',
<del> 'notDeepEqual',
<del> ]
<add> method: [ 'deepEqual', 'notDeepEqual' ],
<ide> });
<ide>
<ide> function main({ n, primitive, method, strict }) {
<ide><path>benchmark/assert/deepequal-set.js
<ide> const bench = common.createBenchmark(main, {
<ide> 'deepEqual_mixed',
<ide> 'notDeepEqual_primitiveOnly',
<ide> 'notDeepEqual_objectOnly',
<del> 'notDeepEqual_mixed'
<del> ]
<add> 'notDeepEqual_mixed',
<add> ],
<ide> });
<ide>
<ide> function benchmark(method, n, values, values2) {
<ide><path>benchmark/assert/deepequal-typedarrays.js
<ide> const bench = common.createBenchmark(main, {
<ide> 'deepEqual',
<ide> 'notDeepEqual',
<ide> ],
<del> len: [1e2, 5e3]
<add> len: [1e2, 5e3],
<ide> });
<ide>
<ide> function main({ type, n, len, method, strict }) {
<ide><path>benchmark/assert/ok.js
<ide> const common = require('../common.js');
<ide> const assert = require('assert');
<ide>
<del>const bench = common.createBenchmark(main, {
<del> n: [1e5]
<del>});
<add>const bench = common.createBenchmark(main, { n: [1e5] });
<ide>
<ide> function main({ n }) {
<ide> var i;
<ide><path>benchmark/assert/throws.js
<ide> const { throws, doesNotThrow } = require('assert');
<ide>
<ide> const bench = common.createBenchmark(main, {
<ide> n: [1e4],
<del> method: [
<del> 'doesNotThrow',
<del> 'throws_TypeError',
<del> 'throws_RegExp'
<del> ]
<add> method: [ 'doesNotThrow', 'throws_TypeError', 'throws_RegExp' ],
<ide> });
<ide>
<ide> function main({ n, method }) {
| 9
|
Text
|
Text
|
add 2.14.0-beta.2 to changelog
|
1e5b8a90f82d1e81cb673c4f1f6e3a8fdd5a6a2c
|
<ide><path>CHANGELOG.md
<ide> # Ember Changelog
<ide>
<add>### 2.14.0-beta.2 (May 10, 2017)
<add>
<add>- [#15138](https://github.com/emberjs/ember.js/pull/15138) [BUGFIX] Fix mocha blueprint service test filename
<add>- [#15193](https://github.com/emberjs/ember.js/pull/15193) [BUGFIX] Ensure `factoryFor` does validation.
<add>- [#15207](https://github.com/emberjs/ember.js/pull/15207) [BUGFIX] Ensure that an engines container is only destroyed once.
<add>- [#15218](https://github.com/emberjs/ember.js/pull/15218) [BUGFIX] Update route-recognizer to v0.3.3.
<add>
<add>
<ide> ### 2.14.0-beta.1 (April 27, 2017)
<ide>
<ide> - [#15015](https://github.com/emberjs/ember.js/pull/15015) Allow mounting routeless engines with a bound engine name
| 1
|
PHP
|
PHP
|
add assertion on setmatching return type
|
d06e4d9a453a570e1846e442d4541bc35e48168b
|
<ide><path>tests/TestCase/ORM/EagerLoaderTest.php
<ide> protected function _quoteArray($elements)
<ide>
<ide> return $elements;
<ide> }
<add>
<add> /**
<add>     * Assert that matching('something') and setMatching('something') return a consistent type
<add> */
<add> public function testSetMatchingReturnType()
<add> {
<add> $loader = new EagerLoader();
<add> $result = $loader->setMatching('clients');
<add> $this->assertArrayHasKey('clients', $result);
<add> $this->assertArrayHasKey('clients', $loader->getMatching());
<add>
<add> $result = $loader->matching('customers');
<add> $this->assertArrayHasKey('customers', $result);
<add> $this->assertArrayHasKey('customers', $loader->getMatching());
<add> }
<ide> }
| 1
|
Go
|
Go
|
fix cross compile
|
ba52130873395a44d637fc57f98ed174f0ac87bb
|
<ide><path>archive/changes.go
<ide> func ExportChanges(dir string, changes []Change) (Archive, error) {
<ide> Mode: int64(stat.Mode & 07777),
<ide> Uid: int(stat.Uid),
<ide> Gid: int(stat.Gid),
<del> ModTime: time.Unix(mtim.Sec, mtim.Nsec),
<del> AccessTime: time.Unix(atim.Sec, atim.Nsec),
<add> ModTime: time.Unix(int64(mtim.Sec), int64(mtim.Nsec)),
<add> AccessTime: time.Unix(int64(atim.Sec), int64(atim.Nsec)),
<ide> }
<ide>
<ide> if stat.Mode&syscall.S_IFDIR == syscall.S_IFDIR {
<ide> func ExportChanges(dir string, changes []Change) (Archive, error) {
<ide> } else {
<ide> hdr.Typeflag = tar.TypeChar
<ide> }
<del> hdr.Devmajor = int64(major(stat.Rdev))
<del> hdr.Devminor = int64(minor(stat.Rdev))
<add> hdr.Devmajor = int64(major(uint64(stat.Rdev)))
<add> hdr.Devminor = int64(minor(uint64(stat.Rdev)))
<ide> } else if stat.Mode&syscall.S_IFIFO == syscall.S_IFIFO ||
<ide> stat.Mode&syscall.S_IFSOCK == syscall.S_IFSOCK {
<ide> hdr.Typeflag = tar.TypeFifo
| 1
|
Ruby
|
Ruby
|
add skylake [linux]
|
ad7323bd61b2c2ccd3a5c4ad1478f55fceb17576
|
<ide><path>Library/Homebrew/extend/os/linux/hardware/cpu.rb
<ide> def family
<ide> return :dunno unless intel?
<ide>
<ide> # See https://software.intel.com/en-us/articles/intel-architecture-and-processor-identification-with-cpuid-model-and-family-numbers
<add> # and https://github.com/llvm-mirror/llvm/blob/master/lib/Support/Host.cpp
<add> # and https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures#Roadmap
<ide> cpu_family = cpuinfo[/^cpu family\s*: ([0-9]+)/, 1].to_i
<ide> cpu_model = cpuinfo[/^model\s*: ([0-9]+)/, 1].to_i
<add> unknown = :"unknown_0x#{cpu_family.to_s(16)}_0x#{cpu_model.to_s(16)}"
<ide> case cpu_family
<ide> when 0x06
<ide> case cpu_model
<ide> def family
<ide> :haswell
<ide> when 0x3d, 0x47, 0x4f, 0x56
<ide> :broadwell
<del> when 0x5e
<add> when 0x4e, 0x55, 0x5e, 0x8e, 0x9e
<ide> :skylake
<del> when 0x8e
<del> :kabylake
<ide> else
<del> :dunno
<add> unknown
<ide> end
<ide> when 0x0f
<ide> case cpu_model
<ide> def family
<ide> when 0x03, 0x04
<ide> :prescott
<ide> else
<del> :dunno
<add> unknown
<ide> end
<ide> else
<del> :dunno
<add> unknown
<ide> end
<ide> end
<ide>
| 1
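The lookup above keys a microarchitecture name off the CPUID family/model pair and reports unknown pairs explicitly instead of a bare `:dunno`. A minimal Python sketch of the same mapping (only the model numbers visible in the diff are included; names and structure are illustrative, not Homebrew's API):

```python
# (family 0x06, model) -> microarchitecture, as listed in the diff above.
INTEL_06_MODELS = {
    0x3d: "broadwell", 0x47: "broadwell", 0x4f: "broadwell", 0x56: "broadwell",
    0x4e: "skylake", 0x55: "skylake", 0x5e: "skylake", 0x8e: "skylake", 0x9e: "skylake",
}

def intel_family(cpu_family: int, cpu_model: int) -> str:
    """Map an Intel CPUID family/model pair to a microarchitecture name.

    Unknown pairs get a self-describing name instead of a generic "dunno",
    which is the behaviour the commit introduces.
    """
    if cpu_family == 0x06:
        arch = INTEL_06_MODELS.get(cpu_model)
    elif cpu_family == 0x0f:
        arch = "prescott" if cpu_model in (0x03, 0x04) else None
    else:
        arch = None
    return arch or f"unknown_0x{cpu_family:x}_0x{cpu_model:x}"
```

The self-describing fallback makes bug reports for unrecognized CPUs actionable, since the raw family/model survives in the symbol.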
|
PHP
|
PHP
|
add logoutotherdevices to trait
|
200fdc60015987c67cea815c70241338aa2f5755
|
<ide><path>src/Illuminate/Auth/Authenticatable.php
<ide>
<ide> namespace Illuminate\Auth;
<ide>
<add>use Illuminate\Support\Facades\Hash;
<add>
<ide> trait Authenticatable
<ide> {
<ide> /**
<ide> trait Authenticatable
<ide> */
<ide> protected $rememberTokenName = 'remember_token';
<ide>
<add> /**
<add>     * Invalidate other sessions for the current user.
<add> *
<add> * The application must be using the AuthenticateSession middleware.
<add> *
<add> * @param string $password
<add> * @param string $attribute
<add> * @return $this
<add> */
<add> public function logoutOtherDevices($password, $attribute = 'password')
<add> {
<add> return tap($this->forceFill([
<add> $attribute => Hash::make($password)
<add> ]))->save();
<add> }
<add>
<ide> /**
<ide> * Get the name of the unique identifier for the user.
<ide> *
| 1
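The trait method works by re-hashing the user's password: any session whose validity is tied to the stored hash is invalidated when the hash changes. A small Python sketch of that idea, under the assumption that sessions are keyed to the stored hash (all names here are hypothetical, and a real application would use bcrypt/argon2 rather than raw PBKDF2):

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # Stand-in for Hash::make(); illustrative only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000).hex()

class User:
    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.password = hash_password(password, self.salt)

    def logout_other_devices(self, password: str) -> "User":
        # Re-hashing the (same) password with a fresh salt changes the stored
        # hash, so any session keyed to the old hash value stops validating.
        self.salt = os.urandom(16)
        self.password = hash_password(password, self.salt)
        return self
```

This mirrors why the Laravel feature requires the `AuthenticateSession` middleware: something has to re-check the hash on each request for the rotation to take effect.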
|
PHP
|
PHP
|
add redis rate limiting job middleware
|
de94375ece99b258dc71ab905283cf09636a464d
|
<ide><path>src/Illuminate/Queue/Middleware/RateLimitsJobsWithRedis.php
<add><?php
<add>
<add>namespace Illuminate\Queue\Middleware;
<add>
<add>use Illuminate\Container\Container;
<add>use Illuminate\Contracts\Redis\Factory as Redis;
<add>use Illuminate\Redis\Limiters\DurationLimiter;
<add>use Illuminate\Support\InteractsWithTime;
<add>
<add>class RateLimitsJobsWithRedis extends RateLimitsJobs
<add>{
<add> use InteractsWithTime;
<add>
<add> /**
<add> * The Redis factory implementation.
<add> *
<add> * @var \Illuminate\Contracts\Redis\Factory
<add> */
<add> protected $redis;
<add>
<add> /**
<add> * The timestamp of the end of the current duration by key.
<add> *
<add> * @var array
<add> */
<add> public $decaysAt = [];
<add>
<add> /**
<add> * Create a new rate limiter middleware instance.
<add> *
<add> * @param string $limiterName
<add> *
<add> * @return void
<add> */
<add> public function __construct($limiterName)
<add> {
<add> parent::__construct($limiterName);
<add>
<add> $this->redis = Container::getInstance()->make(Redis::class);
<add> }
<add>
<add> /**
<add> * Handle a rate limited job.
<add> *
<add> * @param mixed $job
<add> * @param callable $next
<add> * @param array $limits
<add> * @return mixed
<add> */
<add> protected function handleJob($job, $next, array $limits)
<add> {
<add> foreach ($limits as $limit) {
<add> if ($this->tooManyAttempts($limit->key, $limit->maxAttempts, $limit->decayMinutes)) {
<add> return $job->release($this->getTimeUntilNextRetry($limit->key));
<add> }
<add> }
<add>
<add> return $next($job);
<add> }
<add>
<add> /**
<add> * Determine if the given key has been "accessed" too many times.
<add> *
<add> * @param string $key
<add> * @param int $maxAttempts
<add> * @param int $decayMinutes
<add> * @return bool
<add> */
<add> protected function tooManyAttempts($key, $maxAttempts, $decayMinutes)
<add> {
<add> $limiter = new DurationLimiter(
<add> $this->redis, $key, $maxAttempts, $decayMinutes * 60
<add> );
<add>
<add> return tap(! $limiter->acquire(), function () use ($key, $limiter) {
<add> $this->decaysAt[$key] = $limiter->decaysAt;
<add> });
<add> }
<add>
<add> /**
<add> * Get the number of seconds until the lock is released.
<add> *
<add> * @param string $key
<add> * @return int
<add> */
<add> protected function getTimeUntilNextRetry($key)
<add> {
<add> return $this->decaysAt[$key] - $this->currentTime();
<add> }
<add>}
<ide><path>tests/Integration/Queue/RateLimitsJobsWithRedisTest.php
<add><?php
<add>
<add>namespace Illuminate\Tests\Integration\Queue;
<add>
<add>use Illuminate\Bus\Dispatcher;
<add>use Illuminate\Bus\Queueable;
<add>use Illuminate\Cache\RateLimiter;
<add>use Illuminate\Cache\RateLimiting\Limit;
<add>use Illuminate\Contracts\Queue\Job;
<add>use Illuminate\Foundation\Testing\Concerns\InteractsWithRedis;
<add>use Illuminate\Queue\CallQueuedHandler;
<add>use Illuminate\Queue\InteractsWithQueue;
<add>use Illuminate\Queue\Middleware\RateLimitsJobsWithRedis;
<add>use Illuminate\Support\Str;
<add>use Mockery as m;
<add>use Orchestra\Testbench\TestCase;
<add>
<add>/**
<add> * @group integration
<add> */
<add>class RateLimitsJobsWithRedisTest extends TestCase
<add>{
<add> use InteractsWithRedis;
<add>
<add> protected function setUp(): void
<add> {
<add> parent::setUp();
<add>
<add> $this->setUpRedis();
<add> }
<add>
<add> protected function tearDown(): void
<add> {
<add> parent::tearDown();
<add>
<add> $this->tearDownRedis();
<add>
<add> m::close();
<add> }
<add>
<add> public function testUnlimitedJobsAreExecuted()
<add> {
<add> $rateLimiter = $this->app->make(RateLimiter::class);
<add>
<add> $testJob = new RateLimitedTestJob;
<add>
<add> $rateLimiter->for($testJob->key, function ($job) {
<add> return Limit::none();
<add> });
<add>
<add> $this->assertJobRanSuccessfully($testJob);
<add> $this->assertJobRanSuccessfully($testJob);
<add> }
<add>
<add> public function testRateLimitedJobsAreNotExecutedOnLimitReached()
<add> {
<add> $rateLimiter = $this->app->make(RateLimiter::class);
<add>
<add> $testJob = new RateLimitedTestJob;
<add>
<add> $rateLimiter->for($testJob->key, function ($job) {
<add> return Limit::perMinute(1);
<add> });
<add>
<add> $this->assertJobRanSuccessfully($testJob);
<add> $this->assertJobWasReleased($testJob);
<add> }
<add>
<add> public function testJobsCanHaveConditionalRateLimits()
<add> {
<add> $rateLimiter = $this->app->make(RateLimiter::class);
<add>
<add> $adminJob = new AdminTestJob;
<add>
<add> $rateLimiter->for($adminJob->key, function ($job) {
<add> if ($job->isAdmin()) {
<add> return Limit::none();
<add> }
<add>
<add> return Limit::perMinute(1);
<add> });
<add>
<add> $this->assertJobRanSuccessfully($adminJob);
<add> $this->assertJobRanSuccessfully($adminJob);
<add>
<add> $nonAdminJob = new NonAdminTestJob;
<add>
<add> $rateLimiter->for($nonAdminJob->key, function ($job) {
<add> if ($job->isAdmin()) {
<add> return Limit::none();
<add> }
<add>
<add> return Limit::perMinute(1);
<add> });
<add>
<add> $this->assertJobRanSuccessfully($nonAdminJob);
<add> $this->assertJobWasReleased($nonAdminJob);
<add> }
<add>
<add> protected function assertJobRanSuccessfully($testJob)
<add> {
<add> $testJob::$handled = false;
<add> $instance = new CallQueuedHandler(new Dispatcher($this->app), $this->app);
<add>
<add> $job = m::mock(Job::class);
<add>
<add> $job->shouldReceive('hasFailed')->once()->andReturn(false);
<add> $job->shouldReceive('isReleased')->once()->andReturn(false);
<add> $job->shouldReceive('isDeletedOrReleased')->once()->andReturn(false);
<add> $job->shouldReceive('delete')->once();
<add>
<add> $instance->call($job, [
<add> 'command' => serialize($testJob),
<add> ]);
<add>
<add> $this->assertTrue($testJob::$handled);
<add> }
<add>
<add> protected function assertJobWasReleased($testJob)
<add> {
<add> $testJob::$handled = false;
<add> $instance = new CallQueuedHandler(new Dispatcher($this->app), $this->app);
<add>
<add> $job = m::mock(Job::class);
<add>
<add> $job->shouldReceive('hasFailed')->once()->andReturn(false);
<add> $job->shouldReceive('release')->once();
<add> $job->shouldReceive('isReleased')->once()->andReturn(true);
<add> $job->shouldReceive('isDeletedOrReleased')->once()->andReturn(true);
<add>
<add> $instance->call($job, [
<add> 'command' => serialize($testJob),
<add> ]);
<add>
<add> $this->assertFalse($testJob::$handled);
<add> }
<add>}
<add>
<add>class RateLimitedTestJob
<add>{
<add> use InteractsWithQueue, Queueable;
<add>
<add> public $key;
<add>
<add> public static $handled = false;
<add>
<add> public function __construct() {
<add> $this->key = Str::random(10);
<add> }
<add>
<add> public function handle()
<add> {
<add> static::$handled = true;
<add> }
<add>
<add> public function middleware()
<add> {
<add> return [new RateLimitsJobsWithRedis($this->key)];
<add> }
<add>}
<add>
<add>class AdminTestJob extends RateLimitedTestJob
<add>{
<add> public function isAdmin()
<add> {
<add> return true;
<add> }
<add>}
<add>
<add>class NonAdminTestJob extends RateLimitedTestJob
<add>{
<add> public function isAdmin()
<add> {
<add> return false;
<add> }
<add>}
| 2
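The middleware above acquires a slot from a `DurationLimiter` and, on failure, releases the job back onto the queue with a delay of `decaysAt - now`. A minimal in-memory Python sketch of that fixed-window acquire/decays-at shape (the real implementation is Redis-backed; this class and its names are illustrative):

```python
import time

class DurationLimiter:
    """Fixed-window rate limiter: at most max_attempts per decay_seconds."""

    def __init__(self, key, max_attempts, decay_seconds, clock=time.time):
        self.key = key
        self.max_attempts = max_attempts
        self.decay_seconds = decay_seconds
        self.clock = clock
        self.count = 0
        self.decays_at = 0.0  # timestamp when the current window ends

    def acquire(self) -> bool:
        now = self.clock()
        if now >= self.decays_at:
            # Window expired: start a fresh one.
            self.count = 0
            self.decays_at = now + self.decay_seconds
        if self.count >= self.max_attempts:
            return False  # caller should release the job until the window ends
        self.count += 1
        return True

    def time_until_retry(self) -> float:
        # Same role as getTimeUntilNextRetry() in the middleware above.
        return max(0.0, self.decays_at - self.clock())
```

Injecting a `clock` makes the window logic testable without sleeping, which is the main design choice worth copying.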
|
PHP
|
PHP
|
improve getbehavior() nullable check
|
97eb855eb6b7671b1d7cec2495ee0d81039a04a3
|
<ide><path>src/ORM/Table.php
<ide> public function behaviors()
<ide> */
<ide> public function getBehavior($name)
<ide> {
<del> if ($this->hasBehavior($name) === false) {
<add> $behavior = $this->_behaviors->get($name);
<add> if ($behavior === null) {
<ide> throw new InvalidArgumentException(sprintf(
<ide> 'The %s behavior is not defined on %s.',
<ide> $name,
<ide> get_class($this)
<ide> ));
<ide> }
<ide>
<del> return $this->_behaviors->get($name);
<add> return $behavior;
<ide> }
<ide>
<ide> /**
| 1
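The change above fetches the behavior once and checks the result, instead of calling `hasBehavior()` and then fetching a second time. The same look-up-once pattern in a hypothetical Python sketch:

```python
def get_behavior(behaviors: dict, name: str):
    # Single lookup: check the fetched value rather than asking
    # "does it exist?" and then fetching again.
    behavior = behaviors.get(name)
    if behavior is None:
        raise ValueError(f"The {name} behavior is not defined.")
    return behavior
```

Besides saving a lookup, this avoids any gap between the existence check and the fetch.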
|
Javascript
|
Javascript
|
replace var with let in lib/assert.js
|
cb34358dd9ceb0b2a7ddcf14120000cb060d8181
|
<ide><path>lib/assert.js
<ide> function getCode(fd, line, column) {
<ide> buffer = lines < line ? buffer : Buffer.allocUnsafe(bytesPerRead);
<ide> bytesRead = readSync(fd, buffer, 0, bytesPerRead);
<ide> // Read the buffer until the required code line is found.
<del> for (var i = 0; i < bytesRead; i++) {
<add> for (let i = 0; i < bytesRead; i++) {
<ide> if (buffer[i] === 10 && ++lines === line) {
<ide> // If the end of file is reached, directly parse the code and return.
<ide> if (bytesRead < bytesPerRead) {
<ide> assert.ifError = function ifError(err) {
<ide> tmp2.shift();
<ide> // Filter all frames existing in err.stack.
<ide> let tmp1 = newErr.stack.split('\n');
<del> for (var i = 0; i < tmp2.length; i++) {
<add> for (let i = 0; i < tmp2.length; i++) {
<ide> // Find the first occurrence of the frame.
<ide> const pos = tmp1.indexOf(tmp2[i]);
<ide> if (pos !== -1) {
| 1
|
Mixed
|
Text
|
require pg~>0.18 to ensure ruby 2.2 compatibility
|
ba7532700f194aae2f9968924f979d77bc84a21e
|
<ide><path>activerecord/CHANGELOG.md
<add>* Increase pg gem version requirement to `~> 0.18`. Earlier versions of the
<add> pg gem are known to have problems with Ruby 2.2.
<add>
<add> *Matt Brictson*
<add>
<ide> * Correctly dump `serial` and `bigserial`.
<ide>
<ide> *Ryuta Kamizono*
<ide><path>activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb
<ide>
<ide> require 'arel/visitors/bind_visitor'
<ide>
<del># Make sure we're using pg high enough for PGResult#values
<del>gem 'pg', '~> 0.15'
<add># Make sure we're using pg high enough for Ruby 2.2+ compatibility
<add>gem 'pg', '~> 0.18'
<ide> require 'pg'
<ide>
<ide> require 'ipaddr'
| 2
|
Go
|
Go
|
fix filepath.walk args
|
d8ec1ee57db37386228e96c136be708d7ded4245
|
<ide><path>builder/dockerfile/copy_unix.go
<ide> func fixPermissions(source, destination string, identity idtools.Identity, overr
<ide>
<ide> // We Walk on the source rather than on the destination because we don't
<ide> // want to change permissions on things we haven't created or modified.
<del> return filepath.Walk(source, func(fullpath string, info os.FileInfo, err error) error {
<add> return filepath.Walk(source, func(fullpath string, _ os.FileInfo, _ error) error {
<ide> // Do not alter the walk root iff. it existed before, as it doesn't fall under
<ide> // the domain of "things we should chown".
<ide> if skipChownRoot && source == fullpath {
| 1
|
Javascript
|
Javascript
|
fix undefined stepper
|
ebf26a5e29ee17c602d7f15413dcd85c099e5738
|
<ide><path>src/canvas.js
<ide> var CanvasGraphics = (function CanvasGraphicsClosure() {
<ide> var slowCommands = this.slowCommands;
<ide>
<ide> while (true) {
<del> if (stepper !== null && i === stepper.nextBreakPoint) {
<add> if (stepper && i === stepper.nextBreakPoint) {
<ide> stepper.breakIt(i, continueCallback);
<ide> return i;
<ide> }
| 1
|
Ruby
|
Ruby
|
use new download strategy for local bottles
|
95f9c6227a99a2cfa3f9768ab867d3c775693b56
|
<ide><path>Library/Homebrew/download_strategy.rb
<ide> require 'vendor/multi_json'
<ide>
<ide> class AbstractDownloadStrategy
<add> attr_accessor :local_bottle_path
<add>
<ide> def initialize name, package
<ide> @url = package.url
<ide> specs = package.specs
<ide> def cached_location; end
<ide> end
<ide>
<ide> class CurlDownloadStrategy < AbstractDownloadStrategy
<del> attr_accessor :local_bottle_path
<del>
<ide> def initialize name, package
<ide> super
<ide>
<ide> def initialize name, package
<ide>
<ide> @mirrors = package.mirrors
<ide> @temporary_path = Pathname.new("#@tarball_path.incomplete")
<del> @local_bottle_path = nil
<ide> end
<ide>
<ide> def cached_location
<ide> def _fetch
<ide> end
<ide>
<ide> def fetch
<del> if @local_bottle_path
<del> @tarball_path = @local_bottle_path
<del> return @local_bottle_path
<del> end
<del>
<ide> ohai "Downloading #{@url}"
<ide> unless @tarball_path.exist?
<ide> had_incomplete_download = @temporary_path.exist?
<ide> def initialize name, package
<ide> end
<ide> end
<ide>
<add># This strategy extracts local binary packages.
<add>class LocalBottleDownloadStrategy < CurlDownloadStrategy
<add> def initialize formula, local_bottle_path
<add> super formula.name, formula.active_spec
<add> @tarball_path = local_bottle_path
<add> end
<add>end
<add>
<ide> class SubversionDownloadStrategy < AbstractDownloadStrategy
<ide> def initialize name, package
<ide> super
<ide><path>Library/Homebrew/formula_installer.rb
<ide> def clean
<ide> end
<ide>
<ide> def pour
<del> fetched, downloader = f.fetch, f.downloader
<del> f.verify_download_integrity(fetched) unless downloader.local_bottle_path
<add> downloader = f.downloader
<add> if downloader.local_bottle_path
<add> downloader = LocalBottleDownloadStrategy.new f,
<add> downloader.local_bottle_path
<add> else
<add> fetched = f.fetch
<add> f.verify_download_integrity fetched
<add> end
<ide> HOMEBREW_CELLAR.cd do
<ide> downloader.stage
<ide> end
| 2
|
Java
|
Java
|
expose csslayoutsetlogger to java
|
867ea1bfaf7f409f905889179de90c7e21e08ad2
|
<ide><path>ReactAndroid/src/main/java/com/facebook/csslayout/CSSLogger.java
<add>/**
<add> * Copyright (c) 2014-present, Facebook, Inc.
<add> * All rights reserved.
<add> *
<add> * This source code is licensed under the BSD-style license found in the
<add> * LICENSE file in the root directory of this source tree. An additional grant
<add> * of patent rights can be found in the PATENTS file in the same directory.
<add> */
<add>
<add>package com.facebook.csslayout;
<add>
<add>import com.facebook.proguard.annotations.DoNotStrip;
<add>
<add>/**
<add> * Interface for receiving logs from the native layer. Used by setting CSSNode.setLogger(myLogger);
<add> * LOG_LEVEL_ERROR indicates a fatal error.

<add> */
<add>public interface CSSLogger {
<add> public final int LOG_LEVEL_ERROR = 0;
<add> public final int LOG_LEVEL_WARN = 1;
<add> public final int LOG_LEVEL_INFO = 2;
<add> public final int LOG_LEVEL_DEBUG = 3;
<add> public final int LOG_LEVEL_VERBOSE = 4;
<add>
<add> @DoNotStrip
<add> void log(int level, String message);
<add>}
<ide><path>ReactAndroid/src/main/java/com/facebook/csslayout/CSSNode.java
<ide> public class CSSNode implements CSSNodeAPI<CSSNode> {
<ide> * Get native instance count. Useful for testing only.
<ide> */
<ide> static native int jni_CSSNodeGetInstanceCount();
<add> static native void jni_CSSLog(int level, String message);
<add>
<add> private static native void jni_CSSLayoutSetLogger(Object logger);
<add> public static void setLogger(CSSLogger logger) {
<add> jni_CSSLayoutSetLogger(logger);
<add> }
<ide>
<ide> private CSSNode mParent;
<ide> private List<CSSNode> mChildren;
| 2
|
Ruby
|
Ruby
|
remove deprecated option from docs [ci skip]
|
d9496c19c07d56bb200acd7312bf5d6355d515f4
|
<ide><path>activerecord/lib/active_record/associations.rb
<ide> def association_instance_set(name, association)
<ide> # | | belongs_to |
<ide> # generated methods | belongs_to | :polymorphic | has_one
<ide> # ----------------------------------+------------+--------------+---------
<del> # other(force_reload=false) | X | X | X
<add> # other | X | X | X
<ide> # other=(other) | X | X | X
<ide> # build_other(attributes={}) | X | | X
<ide> # create_other(attributes={}) | X | | X
<ide> def association_instance_set(name, association)
<ide> # | | | has_many
<ide> # generated methods | habtm | has_many | :through
<ide> # ----------------------------------+-------+----------+----------
<del> # others(force_reload=false) | X | X | X
<add> # others | X | X | X
<ide> # others=(other,other,...) | X | X | X
<ide> # other_ids | X | X | X
<ide> # other_ids=(id,id,...) | X | X | X
<ide> module ClassMethods
<ide> # +collection+ is a placeholder for the symbol passed as the +name+ argument, so
<ide> # <tt>has_many :clients</tt> would add among others <tt>clients.empty?</tt>.
<ide> #
<del> # [collection(force_reload = false)]
<add> # [collection]
<ide> # Returns an array of all the associated objects.
<ide> # An empty array is returned if none are found.
<ide> # [collection<<(object, ...)]
<ide> def has_many(name, scope = nil, options = {}, &extension)
<ide> # +association+ is a placeholder for the symbol passed as the +name+ argument, so
<ide> # <tt>has_one :manager</tt> would add among others <tt>manager.nil?</tt>.
<ide> #
<del> # [association(force_reload = false)]
<add> # [association]
<ide> # Returns the associated object. +nil+ is returned if none is found.
<ide> # [association=(associate)]
<ide> # Assigns the associate object, extracts the primary key, sets it as the foreign key,
<ide> def has_one(name, scope = nil, options = {})
<ide> # +association+ is a placeholder for the symbol passed as the +name+ argument, so
<ide> # <tt>belongs_to :author</tt> would add among others <tt>author.nil?</tt>.
<ide> #
<del> # [association(force_reload = false)]
<add> # [association]
<ide> # Returns the associated object. +nil+ is returned if none is found.
<ide> # [association=(associate)]
<ide> # Assigns the associate object, extracts the primary key, and sets it as the foreign key.
<ide> def belongs_to(name, scope = nil, options = {})
<ide> # +collection+ is a placeholder for the symbol passed as the +name+ argument, so
<ide> # <tt>has_and_belongs_to_many :categories</tt> would add among others <tt>categories.empty?</tt>.
<ide> #
<del> # [collection(force_reload = false)]
<add> # [collection]
<ide> # Returns an array of all the associated objects.
<ide> # An empty array is returned if none are found.
<ide> # [collection<<(object, ...)]
| 1
|
Javascript
|
Javascript
|
fix nits in tools/doc/type-parser.js
|
5d387e9403bc67604b5556538fcddc52857638a6
|
<ide><path>tools/doc/type-parser.js
<ide> 'use strict';
<del>const nodeDocUrl = '';
<add>
<ide> const jsDocPrefix = 'https://developer.mozilla.org/en-US/docs/Web/JavaScript/';
<del>const jsDocUrl = `${jsDocPrefix}Reference/Global_Objects/`;
<add>
<ide> const jsPrimitiveUrl = `${jsDocPrefix}Data_structures`;
<ide> const jsPrimitives = {
<ide> 'boolean': 'Boolean',
<ide> const jsPrimitives = {
<ide> 'symbol': 'Symbol',
<ide> 'undefined': 'Undefined'
<ide> };
<add>
<add>const jsGlobalObjectsUrl = `${jsDocPrefix}Reference/Global_Objects/`;
<ide> const jsGlobalTypes = [
<ide> 'Array', 'ArrayBuffer', 'AsyncFunction', 'DataView', 'Date', 'Error',
<ide> 'EvalError', 'Float32Array', 'Float64Array', 'Function', 'Generator',
<ide> const jsGlobalTypes = [
<ide> 'Uint16Array', 'Uint32Array', 'Uint8Array', 'Uint8ClampedArray', 'WeakMap',
<ide> 'WeakSet'
<ide> ];
<del>const typeMap = {
<add>
<add>const customTypesMap = {
<ide> 'Iterable':
<ide> `${jsDocPrefix}Reference/Iteration_protocols#The_iterable_protocol`,
<ide> 'Iterator':
<ide> const typeMap = {
<ide>
<ide> const arrayPart = /(?:\[])+$/;
<ide>
<del>module.exports = {
<del> toLink: function(typeInput) {
<del> const typeLinks = [];
<del> typeInput = typeInput.replace('{', '').replace('}', '');
<del> const typeTexts = typeInput.split('|');
<del>
<del> typeTexts.forEach(function(typeText) {
<del> typeText = typeText.trim();
<del> if (typeText) {
<del> let typeUrl = null;
<del>
<del> // To support type[], type[][] etc., we store the full string
<del> // and use the bracket-less version to lookup the type URL
<del> const typeTextFull = typeText;
<del> typeText = typeText.replace(arrayPart, '');
<del>
<del> const primitive = jsPrimitives[typeText.toLowerCase()];
<del>
<del> if (primitive !== undefined) {
<del> typeUrl = `${jsPrimitiveUrl}#${primitive}_type`;
<del> } else if (jsGlobalTypes.indexOf(typeText) !== -1) {
<del> typeUrl = jsDocUrl + typeText;
<del> } else if (typeMap[typeText]) {
<del> typeUrl = nodeDocUrl + typeMap[typeText];
<del> }
<del>
<del> if (typeUrl) {
<del> typeLinks.push(`
<del> <a href="${typeUrl}" class="type"><${typeTextFull}></a>`);
<del> } else {
<del> typeLinks.push(`<span class="type"><${typeTextFull}></span>`);
<del> }
<add>function toLink(typeInput) {
<add> const typeLinks = [];
<add> typeInput = typeInput.replace('{', '').replace('}', '');
<add> const typeTexts = typeInput.split('|');
<add>
<add> typeTexts.forEach((typeText) => {
<add> typeText = typeText.trim();
<add> if (typeText) {
<add> let typeUrl = null;
<add>
<add> // To support type[], type[][] etc., we store the full string
<add> // and use the bracket-less version to lookup the type URL
<add> const typeTextFull = typeText;
<add> typeText = typeText.replace(arrayPart, '');
<add>
<add> const primitive = jsPrimitives[typeText.toLowerCase()];
<add>
<add> if (primitive !== undefined) {
<add> typeUrl = `${jsPrimitiveUrl}#${primitive}_type`;
<add> } else if (jsGlobalTypes.includes(typeText)) {
<add> typeUrl = `${jsGlobalObjectsUrl}${typeText}`;
<add> } else if (customTypesMap[typeText]) {
<add> typeUrl = customTypesMap[typeText];
<ide> }
<del> });
<ide>
<del> return typeLinks.length ? typeLinks.join(' | ') : typeInput;
<del> }
<del>};
<add> if (typeUrl) {
<add> typeLinks.push(
<add> `<a href="${typeUrl}" class="type"><${typeTextFull}></a>`);
<add> } else {
<add> typeLinks.push(`<span class="type"><${typeTextFull}></span>`);
<add> }
<add> } else {
<add> throw new Error(`Empty type slot: ${typeInput}`);
<add> }
<add> });
<add>
<add> return typeLinks.length ? typeLinks.join(' | ') : typeInput;
<add>}
<add>
<add>module.exports = { toLink };
| 1
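The refactored `toLink` resolves each type name in a fixed order: JS primitives first, then JS global objects, then a custom map, with `[]` suffixes stripped before the lookup. A trimmed Python sketch of that resolution order (the maps here are abbreviated for illustration; the real tables live in `tools/doc/type-parser.js`):

```python
JS_DOC_PREFIX = "https://developer.mozilla.org/en-US/docs/Web/JavaScript/"
JS_PRIMITIVE_URL = JS_DOC_PREFIX + "Data_structures"
JS_GLOBALS_URL = JS_DOC_PREFIX + "Reference/Global_Objects/"

JS_PRIMITIVES = {"boolean": "Boolean", "number": "Number", "string": "String"}
JS_GLOBALS = {"Array", "Date", "Error"}  # abbreviated

def to_link(type_text: str) -> str:
    """Resolve one doc type like "string" or "Error[]" to a link target."""
    full = type_text.strip()
    bare = full.rstrip("[]")  # support type[], type[][] etc.
    prim = JS_PRIMITIVES.get(bare.lower())
    if prim is not None:
        return f"{JS_PRIMITIVE_URL}#{prim}_type"
    if bare in JS_GLOBALS:
        return f"{JS_GLOBALS_URL}{bare}"
    return full  # unknown: the real tool renders a plain <span> here
```

Keeping the bracket-less name only for the lookup, while rendering the full `type[]` text, is the same trick the JS version uses with `typeTextFull`.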
|
Ruby
|
Ruby
|
remove evals from the association
|
0d22947e00820b4e011775cfb4ede109650db070
|
<ide><path>activerecord/lib/active_record/associations/builder/association.rb
<ide> def configure_dependency
<ide> )
<ide> end
<ide>
<del> mixin.class_eval <<-CODE, __FILE__, __LINE__ + 1
<del> def #{macro}_dependent_for_#{name}
<del> association(:#{name}).handle_dependency
<del> end
<del> CODE
<del>
<del> method = "#{macro}_dependent_for_#{name}"
<del> model.before_destroy lambda { |o| o.public_send method }
<add> n = name
<add> model.before_destroy lambda { |o| o.association(n).handle_dependency }
<ide> end
<ide>
<ide> def valid_dependent_options
| 1
|
Text
|
Text
|
add es6 syntax to challenge' solution
|
58e70055737ae6b96cea582ef40247ae6b97225b
|
<ide><path>guide/english/certifications/javascript-algorithms-and-data-structures/object-oriented-programming/use-an-iife-to-create-a-module/index.md
<ide> let funModule = (function() {
<ide> })();
<ide>
<ide> ```
<add>
<add>### Solution 2
<add>
<add>If using ES6, the same can be rewritten as:
<add>
<add>```javascript
<add>let funModule = ( () => {
<add> return {
<add> isCuteMixin: (obj) => {
<add>      obj.isCute = () => true;
<add> },
<add> singMixin: (obj) => {
<add> obj.sing = () => { console.log("Singing to an awesome tune"); }
<add> }
<add>
<add> }
<add>})();
<add>```
<ide><path>guide/english/certifications/javascript-algorithms-and-data-structures/object-oriented-programming/use-closure-to-protect-properties-within-an-object-from-being-modified-externally/index.md
<ide> function Bird() {
<ide> }
<ide>
<ide> ```
<add>
<add>### Solution 2
<add>
<add>In ES6 syntax we can make the function a bit less verbose:
<add>
<add>```
<add>function Bird() {
<add> let weight = 15;
<add> this.getWeight = () => weight;
<add>}
<add>```
| 2
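The second guide entry uses a closure so `weight` can be read but never reassigned from outside. The same protection technique in a short Python sketch (names mirror the guide's `Bird` example but are illustrative):

```python
def make_bird(weight: int = 15):
    # `weight` lives only inside this closure; callers can read it via the
    # returned accessor but cannot rebind it from the outside -- the same
    # idea as the ES6 Bird() solution above.
    def get_weight() -> int:
        return weight
    return get_weight
```

Each call to `make_bird` creates an independent closed-over `weight`, so instances do not share state.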
|
Python
|
Python
|
set version to 2.0.13.dev4
|
c3ddf98b1e19f2583e3e7598b74689eb3858a27f
|
<ide><path>spacy/about.py
<ide> # https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
<ide>
<ide> __title__ = 'spacy'
<del>__version__ = '2.0.13.dev3'
<add>__version__ = '2.0.13.dev4'
<ide> __summary__ = 'Industrial-strength Natural Language Processing (NLP) with Python and Cython'
<ide> __uri__ = 'https://spacy.io'
<ide> __author__ = 'Explosion AI'
| 1
|
PHP
|
PHP
|
add failover driver to default mail config file
|
3399464a7452ac8067cef8fb224b2413e8991a4b
|
<ide><path>config/mail.php
<ide> 'array' => [
<ide> 'transport' => 'array',
<ide> ],
<add>
<add> 'failover' => [
<add> 'transport' => 'failover',
<add> 'mailers' => [
<add> 'smtp',
<add> 'log',
<add> ],
<add> ],
<ide> ],
<ide>
<ide> /*
| 1
|
PHP
|
PHP
|
extract duplicate code which handles job failing
|
55afe12977b55dbafda940e18102bb52276ca569
|
<ide><path>src/Illuminate/Queue/FailingJob.php
<add><?php
<add>
<add>namespace Illuminate\Queue;
<add>
<add>use Illuminate\Container\Container;
<add>use Illuminate\Queue\Events\JobFailed;
<add>use Illuminate\Contracts\Events\Dispatcher;
<add>
<add>class FailingJob
<add>{
<add> /**
<add> * Delete the given job, call the "failed" method, and raise the failed job event.
<add> *
<add> * @param string $connectionName
<add> * @param \Illuminate\Queue\Jobs\Job $job
<add> * @param \Exception $e
<add> * @return void
<add> */
<add> public static function handle($connectionName, $job, $e = null)
<add> {
<add> if ($job->isDeleted()) {
<add> return;
<add> }
<add>
<add> try {
<add> // If the job has failed, we will delete it, call the "failed" method and then call
<add> // an event indicating the job has failed so it can be logged if needed. This is
<add> // to allow every developer to better keep monitor of their failed queue jobs.
<add> $job->delete();
<add>
<add> $job->failed($e);
<add> } finally {
<add> static::events()->fire(new JobFailed(
<add> $connectionName, $job, $e ?: new ManuallyFailedException
<add> ));
<add> }
<add> }
<add>
<add> /**
<add> * Get the event dispatcher instance.
<add> *
<add> * @return \Illuminate\Contracts\Events\Dispatcher
<add> */
<add> protected static function events()
<add> {
<add> return Container::getInstance()->make(Dispatcher::class);
<add> }
<add>}
<ide><path>src/Illuminate/Queue/InteractsWithQueue.php
<ide> public function delete()
<ide> */
<ide> public function fail($exception = null)
<ide> {
<del> if (! $this->job || $this->job->isDeleted()) {
<del> return;
<del> }
<del>
<del> try {
<del> $this->job->delete();
<del>
<del> $this->job->failed($e);
<del> } finally {
<del> Container::getInstance()->make(Dispatcher::class)->fire(new Events\JobFailed(
<del> $this->job->getConnectionName(), $this->job, $exception ?: new ManuallyFailedException
<del> ));
<add> if ($this->job) {
<add> FailingJob::handle($this->job->getConnectionName(), $this->job, $exception);
<ide> }
<ide> }
<ide>
<ide><path>src/Illuminate/Queue/Worker.php
<ide> protected function markJobAsFailedIfHasExceededMaxAttempts(
<ide> /**
<ide> * Mark the given job as failed and raise the relevant event.
<ide> *
<del> * Note: Any change to this method should also be made to InteractsWithQueue.
<del> *
<ide> * @param string $connectionName
<ide> * @param \Illuminate\Contracts\Queue\Job $job
<ide> * @param \Exception $e
<ide> * @return void
<ide> */
<ide> protected function failJob($connectionName, $job, $e)
<ide> {
<del> if ($job->isDeleted()) {
<del> return;
<del> }
<del>
<del> try {
<del> // If the job has failed, we will delete it, call the "failed" method and then call
<del> // an event indicating the job has failed so it can be logged if needed. This is
<del> // to allow every developer to better keep monitor of their failed queue jobs.
<del> $job->delete();
<del>
<del> $job->failed($e);
<del> } finally {
<del> $this->raiseFailedJobEvent($connectionName, $job, $e);
<del> }
<add> return FailingJob::handle($connectionName, $job, $e);
<ide> }
<ide>
<ide> /**
<ide><path>tests/Queue/QueueWorkerTest.php
<ide> <?php
<ide>
<add>use Illuminate\Container\Container;
<ide> use Illuminate\Queue\WorkerOptions;
<ide> use Illuminate\Queue\Events\JobFailed;
<ide> use Illuminate\Queue\Events\JobProcessed;
<ide> class QueueWorkerTest extends PHPUnit_Framework_TestCase
<ide> public $events;
<ide> public $exceptionHandler;
<ide>
<del> public function __construct()
<add> public function setUp()
<ide> {
<ide> $this->events = Mockery::spy(Dispatcher::class);
<ide> $this->exceptionHandler = Mockery::spy(ExceptionHandler::class);
<add>
<add> Container::setInstance($container = new Container);
<add>
<add> $container->instance(Dispatcher::class, $this->events);
<add> $container->instance(ExceptionHandler::class, $this->exceptionHandler);
<add> }
<add>
<add> public function tearDown()
<add> {
<add> Container::setInstance();
<ide> }
<ide>
<ide> public function test_job_can_be_fired()
<ide> public function test_job_is_failed_if_it_has_already_exceeded_max_attempts()
<ide> $job = new WorkerFakeJob(function ($job) {
<ide> $job->attempts++;
<ide> });
<add>
<ide> $job->attempts = 2;
<ide>
<ide> $worker = $this->getWorker('default', ['queue' => [$job]]);
<ide> private function workerDependencies($connectionName = 'default', $jobs = [])
<ide> private function workerOptions(array $overrides = [])
<ide> {
<ide> $options = new WorkerOptions;
<add>
<ide> foreach ($overrides as $key => $value) {
<ide> $options->{$key} = $value;
<ide> }
| 4
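The commit collapses two copies of the delete/`failed()`/event sequence into one `FailingJob::handle`. A hypothetical Python sketch of that extracted helper, showing why the event fires in a `finally` block:

```python
class ManuallyFailedError(Exception):
    """Raised-by-default marker when a job is failed without an exception."""

def handle_failing_job(connection_name, job, exc=None, fire_event=print):
    """Single helper replacing the duplicated failure-handling code paths."""
    if job.is_deleted():
        return
    try:
        # Delete the job, then run its own failure hook...
        job.delete()
        job.failed(exc)
    finally:
        # ...and always raise the failed-job event, even if failed() throws,
        # so monitoring never misses a failure.
        fire_event((connection_name, job, exc or ManuallyFailedError()))
```

Centralising this means the worker and `InteractsWithQueue::fail()` can no longer drift apart, which is exactly what the removed "any change to this method should also be made to..." comment was warning about.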
|
PHP
|
PHP
|
add unit test for using subquery() in join clause
|
9f85c81428f80483e8a074df4f0cb704f0055a0f
|
<ide><path>tests/TestCase/ORM/QueryTest.php
<ide> public function testSubqueryAliasing()
<ide> $this->assertEqualsSql('SELECT id, title, body, published FROM articles Articles', $subquery->sql());
<ide> }
<ide>
<del> public function testSubquerySelect()
<add> /**
<add> * Tests subquery() in where clause.
<add> *
<add> * @return void
<add> */
<add> public function testSubqueryWhereClause()
<ide> {
<ide> $subquery = Query::subquery($this->getTableLocator()->get('Authors'))
<ide> ->select(['Authors.id'])
<ide> public function testSubquerySelect()
<ide> $this->assertCount(2, $results);
<ide> $this->assertEquals([1, 3], array_column($results, 'id'));
<ide> }
<add>
<add> /**
<add> * Tests subquery() in join clause.
<add> *
<add> * @return void
<add> */
<add> public function testSubqueryJoinClause()
<add> {
<add> $subquery = Query::subquery($this->getTableLocator()->get('Articles'))
<add> ->select(['author_id']);
<add>
<add> $query = $this->getTableLocator()->get('Authors')->find();
<add> $query
<add> ->select(['Authors.id', 'total_articles' => $query->func()->count('Articles.author_id')])
<add> ->leftJoin(['Articles' => $subquery], ['Articles.author_id = Authors.id'])
<add> ->group(['Authors.id'])
<add> ->order(['Authors.id' => 'ASC']);
<add>
<add> $results = $query->all()->toList();
<add> $this->assertEquals(1, $results[0]->id);
<add> $this->assertEquals(2, $results[0]->total_articles);
<add> }
<ide> }
| 1
|
Javascript
|
Javascript
|
fix lint errors
|
fda78a18717102fa7fa6cd1a7a73e1b75ca7fa02
|
<ide><path>spec/update-process-env-spec.js
<ide> import {
<ide> updateProcessEnv,
<ide> shouldGetEnvFromShell
<ide> } from '../src/update-process-env';
<del>import dedent from 'dedent';
<ide> import mockSpawn from 'mock-spawn';
<ide> const temp = require('temp').track();
<ide>
<ide> describe('updateProcessEnv(launchEnv)', function() {
<ide> await updateProcessEnv(process.env);
<ide> expect(spawn.calls.length).toBe(1);
<ide> expect(spawn.calls[0].command).toBe('/my/custom/bash');
<del> expect(spawn.calls[0].args).toEqual(['-ilc', 'command awk \'BEGIN{for(v in ENVIRON) printf("%s=%s%c", v, ENVIRON[v], 0)}\'']);
<add> expect(spawn.calls[0].args).toEqual([
<add> '-ilc',
<add> 'command awk \'BEGIN{for(v in ENVIRON) printf("%s=%s%c", v, ENVIRON[v], 0)}\''
<add> ]);
<ide> expect(process.env).toEqual({
<ide> FOO: 'BAR=BAZ=QUUX',
<ide> 'MULTILINE\nNAME': 'multiline\nvalue',
<ide> describe('updateProcessEnv(launchEnv)', function() {
<ide> await updateProcessEnv(process.env);
<ide> expect(spawn.calls.length).toBe(1);
<ide> expect(spawn.calls[0].command).toBe('/my/custom/bash');
<del> expect(spawn.calls[0].args).toEqual(['-ilc', 'command awk \'BEGIN{for(v in ENVIRON) printf("%s=%s%c", v, ENVIRON[v], 0)}\'']);
<add> expect(spawn.calls[0].args).toEqual([
<add> '-ilc',
<add> 'command awk \'BEGIN{for(v in ENVIRON) printf("%s=%s%c", v, ENVIRON[v], 0)}\''
<add> ]);
<ide> expect(process.env).toEqual({
<ide> FOO: 'BAR=BAZ=QUUX',
<ide> 'MULTILINE\nNAME': 'multiline\nvalue',
| 1
|
Javascript
|
Javascript
|
add comments in _http_incoming
|
5ae96908696b7a017df3efb48ec1f8c8be5c24b8
|
<ide><path>lib/_http_incoming.js
<ide> IncomingMessage.prototype._destroy = function _destroy(err, cb) {
<ide>
<ide> // If aborted and the underlying socket not already destroyed,
<ide> // destroy it.
<add> // We have to check if the socket is already destroyed because finished
<add> // does not call the callback when this method is invoked from `_http_client`
<add> // in `test/parallel/test-http-client-spurious-aborted.js`
<ide> if (this.socket && !this.socket.destroyed && this.aborted) {
<ide> this.socket.destroy(err);
<ide> const cleanup = finished(this.socket, (e) => {
<ide> IncomingMessage.prototype._dump = function _dump() {
<ide> };
<ide>
<ide> function onError(instance, cb, error) {
<add> // This is to keep backward-compatible behavior.
<add> // An error is emitted only if there are listeners attached to
<add> // the event.
<ide> if (instance.listenerCount('error') > 0) {
<ide> cb(error);
<ide> } else {
| 1
|
Text
|
Text
|
fix broken links to guides in the faq
|
58f23493025f3490257655919c91d0442a39d338
|
<ide><path>docs/faq.md
<ide> Yes! Please [submit an issue or open a pull request][pr-issue-question] if this
<ide>
<ide> Yes! Please [submit an issue or open a pull request][pr-issue-question] if this does not work.
<ide>
<del>[plugin-guide]: plugins.md
<add>[plugin-guide]: ./guides/plugins.md
<ide>
<ide> [install-guide]: http://videojs.com/getting-started/
<ide>
<del>[troubleshooting]: troubleshooting.md
<add>[troubleshooting]: ./guides/troubleshooting.md
<ide>
<del>[video-tracks]: video-tracks.md
<add>[video-tracks]: ./guides/video-tracks.md
<ide>
<del>[audio-tracks]: audio-tracks.md
<add>[audio-tracks]: ./guides/audio-tracks.md
<ide>
<del>[text-tracks]: text-tracks.md
<add>[text-tracks]: ./guides/text-tracks.md
<ide>
<ide> [pr-issue-question]: #q-i-think-i-found-a-bug-with-videojs-or-i-want-to-add-a-feature-what-should-i-do
<ide>
| 1
|
Mixed
|
Javascript
|
add compressionstream and decompressionstream
|
09ad64d66de6222e5d029ef40a93287b7f5d8275
|
<ide><path>doc/api/webstreams.md
<ide> added: REPLACEME
<ide>
<ide> * Type: {WritableStream}
<ide>
<add>### Class: `CompressionStream`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>#### `new CompressionStream(format)`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* `format` {string} One of either `'deflate'` or `'gzip'`.
<add>
<add>#### `compressionStream.readable`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* Type: {ReadableStream}
<add>
<add>#### `compressionStream.writable`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* Type: {WritableStream}
<add>
<add>### Class: `DecompressionStream`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>#### `new DecompressionStream(format)`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* `format` {string} One of either `'deflate'` or `'gzip'`.
<add>
<add>#### `decompressionStream.readable`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* Type: {ReadableStream}
<add>
<add>#### `decompressionStream.writable`
<add><!-- YAML
<add>added: REPLACEME
<add>-->
<add>
<add>* Type: {WritableStream}
<add>
<ide> [Streams]: stream.md
<ide> [WHATWG Streams Standard]: https://streams.spec.whatwg.org/
<ide><path>lib/internal/webstreams/compression.js
<add>'use strict';
<add>
<add>const {
<add> ObjectDefineProperties,
<add> Symbol,
<add>} = primordials;
<add>
<add>const {
<add> codes: {
<add> ERR_INVALID_ARG_VALUE,
<add> ERR_INVALID_THIS,
<add> },
<add>} = require('internal/errors');
<add>
<add>const {
<add> newReadableWritablePairFromDuplex,
<add>} = require('internal/webstreams/adapters');
<add>
<add>const {
<add> customInspect,
<add> kEnumerableProperty,
<add>} = require('internal/webstreams/util');
<add>
<add>const {
<add> customInspectSymbol: kInspect,
<add>} = require('internal/util');
<add>
<add>let zlib;
<add>function lazyZlib() {
<add> zlib ??= require('zlib');
<add> return zlib;
<add>}
<add>
<add>const kHandle = Symbol('kHandle');
<add>const kTransform = Symbol('kTransform');
<add>const kType = Symbol('kType');
<add>
<add>/**
<add> * @typedef {import('./readablestream').ReadableStream} ReadableStream
<add> * @typedef {import('./writablestream').WritableStream} WritableStream
<add> */
<add>
<add>function isCompressionStream(value) {
<add> return typeof value?.[kHandle] === 'object' &&
<add> value?.[kType] === 'CompressionStream';
<add>}
<add>
<add>function isDecompressionStream(value) {
<add> return typeof value?.[kHandle] === 'object' &&
<add> value?.[kType] === 'DecompressionStream';
<add>}
<add>
<add>class CompressionStream {
<add> /**
<add> * @param {'deflate'|'gzip'} format
<add> */
<add> constructor(format) {
<add> this[kType] = 'CompressionStream';
<add> switch (format) {
<add> case 'deflate':
<add> this[kHandle] = lazyZlib().createDeflate();
<add> break;
<add> case 'gzip':
<add> this[kHandle] = lazyZlib().createGzip();
<add> break;
<add> default:
<add> throw new ERR_INVALID_ARG_VALUE('format', format);
<add> }
<add> this[kTransform] = newReadableWritablePairFromDuplex(this[kHandle]);
<add> }
<add>
<add> /**
<add> * @readonly
<add> * @type {ReadableStream}
<add> */
<add> get readable() {
<add> if (!isCompressionStream(this))
<add> throw new ERR_INVALID_THIS('CompressionStream');
<add> return this[kTransform].readable;
<add> }
<add>
<add> /**
<add> * @readonly
<add> * @type {WritableStream}
<add> */
<add> get writable() {
<add> if (!isCompressionStream(this))
<add> throw new ERR_INVALID_THIS('CompressionStream');
<add> return this[kTransform].writable;
<add> }
<add>
<add> [kInspect](depth, options) {
<add> if (!isCompressionStream(this))
<add> throw new ERR_INVALID_THIS('CompressionStream');
<add> customInspect(depth, options, 'CompressionStream', {
<add> readable: this[kTransform].readable,
<add> writable: this[kTransform].writable,
<add> });
<add> }
<add>}
<add>
<add>class DecompressionStream {
<add> /**
<add> * @param {'deflate'|'gzip'} format
<add> */
<add> constructor(format) {
<add> this[kType] = 'DecompressionStream';
<add> switch (format) {
<add> case 'deflate':
<add> this[kHandle] = lazyZlib().createInflate();
<add> break;
<add> case 'gzip':
<add> this[kHandle] = lazyZlib().createGunzip();
<add> break;
<add> default:
<add> throw new ERR_INVALID_ARG_VALUE('format', format);
<add> }
<add> this[kTransform] = newReadableWritablePairFromDuplex(this[kHandle]);
<add> }
<add>
<add> /**
<add> * @readonly
<add> * @type {ReadableStream}
<add> */
<add> get readable() {
<add> if (!isDecompressionStream(this))
<add> throw new ERR_INVALID_THIS('DecompressionStream');
<add> return this[kTransform].readable;
<add> }
<add>
<add> /**
<add> * @readonly
<add> * @type {WritableStream}
<add> */
<add> get writable() {
<add> if (!isDecompressionStream(this))
<add> throw new ERR_INVALID_THIS('DecompressionStream');
<add> return this[kTransform].writable;
<add> }
<add>
<add> [kInspect](depth, options) {
<add> if (!isDecompressionStream(this))
<add> throw new ERR_INVALID_THIS('DecompressionStream');
<add> customInspect(depth, options, 'DecompressionStream', {
<add> readable: this[kTransform].readable,
<add> writable: this[kTransform].writable,
<add> });
<add> }
<add>}
<add>
<add>ObjectDefineProperties(CompressionStream.prototype, {
<add> readable: kEnumerableProperty,
<add> writable: kEnumerableProperty,
<add>});
<add>
<add>ObjectDefineProperties(DecompressionStream.prototype, {
<add> readable: kEnumerableProperty,
<add> writable: kEnumerableProperty,
<add>});
<add>
<add>module.exports = {
<add> CompressionStream,
<add> DecompressionStream,
<add>};
<ide><path>lib/stream/web.js
<ide> const {
<ide> TextDecoderStream,
<ide> } = require('internal/webstreams/encoding');
<ide>
<add>const {
<add> CompressionStream,
<add> DecompressionStream,
<add>} = require('internal/webstreams/compression');
<add>
<ide> module.exports = {
<ide> ReadableStream,
<ide> ReadableStreamDefaultReader,
<ide> module.exports = {
<ide> CountQueuingStrategy,
<ide> TextEncoderStream,
<ide> TextDecoderStream,
<add> CompressionStream,
<add> DecompressionStream,
<ide> };
<ide><path>test/parallel/test-whatwg-webstreams-compression.js
<add>// Flags: --no-warnings
<add>'use strict';
<add>
<add>const common = require('../common');
<add>
<add>const {
<add> CompressionStream,
<add> DecompressionStream,
<add>} = require('stream/web');
<add>
<add>const assert = require('assert');
<add>const dec = new TextDecoder();
<add>
<add>async function test(format) {
<add> const gzip = new CompressionStream(format);
<add> const gunzip = new DecompressionStream(format);
<add>
<add> gzip.readable.pipeTo(gunzip.writable).then(common.mustCall());
<add>
<add> const reader = gunzip.readable.getReader();
<add> const writer = gzip.writable.getWriter();
<add>
<add> await Promise.all([
<add> reader.read().then(({ value, done }) => {
<add> assert.strictEqual(dec.decode(value), 'hello');
<add> }),
<add> reader.read().then(({ done }) => assert(done)),
<add> writer.write('hello'),
<add> writer.close(),
<add> ]);
<add>}
<add>
<add>Promise.all(['gzip', 'deflate'].map((i) => test(i))).then(common.mustCall());
<add>
<add>[1, 'hello', false, {}].forEach((i) => {
<add> assert.throws(() => new CompressionStream(i), {
<add> code: 'ERR_INVALID_ARG_VALUE',
<add> });
<add> assert.throws(() => new DecompressionStream(i), {
<add> code: 'ERR_INVALID_ARG_VALUE',
<add> });
<add>});
<add>
<add>assert.throws(
<add> () => Reflect.get(CompressionStream.prototype, 'readable', {}), {
<add> code: 'ERR_INVALID_THIS',
<add> });
<add>assert.throws(
<add> () => Reflect.get(CompressionStream.prototype, 'writable', {}), {
<add> code: 'ERR_INVALID_THIS',
<add> });
<add>assert.throws(
<add> () => Reflect.get(DecompressionStream.prototype, 'readable', {}), {
<add> code: 'ERR_INVALID_THIS',
<add> });
<add>assert.throws(
<add> () => Reflect.get(DecompressionStream.prototype, 'writable', {}), {
<add> code: 'ERR_INVALID_THIS',
<add> });
| 4
|
Python
|
Python
|
fix linter error
|
80f0015c53f25d7b53d362d8b76356f221924c5d
|
<ide><path>benchmarks/benchmarks/bench_ufunc_strides.py
<ide>
<ide> stride = [1, 2, 4]
<ide> stride_out = [1, 2, 4]
<del>dtype = ['e', 'f', 'd']
<add>dtype = ['e', 'f', 'd']
<ide>
<ide> class Unary(Benchmark):
<ide> params = [UNARY_OBJECT_UFUNCS, stride, stride_out, dtype]
| 1
|
Text
|
Text
|
add info on fixup to security release process
|
5be8a30cc34adc1dab07f4fc1ed43274e38e5e52
|
<ide><path>doc/contributing/security-release-process.md
<ide> out a better way, forward the email you receive to
<ide> [Security release stewards](https://github.com/nodejs/node/blob/HEAD/doc/contributing/security-release-process.md#security-release-stewards).
<ide> If necessary add the next rotation of the steward rotation.
<ide>
<add>## When things go wrong
<add>
<add>### Incomplete fixes
<add>
<add>When a CVE is reported as fixed in a security release and it turns out that the
<add>fix was incomplete, a new CVE should be used to cover the subsequent fix. This
<add>is best practice and avoids confusion that might occur if people believe
<add>they have patched the original CVE by updating their Node.js version and
<add>then we later change the `fixed in` value for the CVE.
<add>
<add>### Updating CVEs
<add>
<add>The steps to correct CVE information are:
<add>
<add>* Go to the “CVE IDs” section in your program
<add> sections (<https://hackerone.com/nodejs/cve_requests>)
<add>* Click the “Request a CVE ID” button
<add>* Enter the CVE ID that needs to be updated
<add>* Include all the details that need updating within the form
<add>* Submit the request
<add>
<ide> [H1 CVE requests]: https://hackerone.com/nodejs/cve_requests
<ide> [docker-node]: https://github.com/nodejs/docker-node/issues
<ide> [email]: https://groups.google.com/forum/#!forum/nodejs-sec
| 1
|
Python
|
Python
|
add comments to savetxt
|
cdc4c0abf4ed941ccfd46f785bfe44ca4aeedad5
|
<ide><path>numpy/lib/io.py
<ide> def savetxt(fname, X, fmt='%.18e',delimiter=' '):
<ide> raise ValueError('fname must be a string or file handle')
<ide>
<ide> X = np.asarray(X)
<add>
<add> # Handle 1-dimensional arrays
<ide> if X.ndim == 1:
<add> # Common case -- 1d array of numbers
<ide> if X.dtype.names is None:
<ide> X = np.atleast_2d(X).T
<ide> ncol = 1
<add>
<add> # Complex dtype -- each field indicates a separate column
<ide> else:
<ide> ncol = len(X.dtype.descr)
<ide> else:
<ide> ncol = X.shape[1]
<ide>
<del> # Fmt can be a string with multiple insertion points or a list of formats.
<add> # `fmt` can be a string with multiple insertion points or a list of formats.
<ide> # E.g. '%10.5f\t%10d' or ('%10.5f', '$10d')
<ide> if type(fmt) in (list, tuple):
<ide> if len(fmt) != ncol:
<del> raise AttributeError, 'fmt has wrong shape. '+ str(fmt)
<add> raise AttributeError('fmt has wrong shape. %s' % str(fmt))
<ide> format = delimiter.join(fmt)
<ide> elif type(fmt) is str:
<ide> if fmt.count('%') == 1:
<ide> fmt = [fmt,]*ncol
<ide> format = delimiter.join(fmt)
<ide> elif fmt.count('%') != ncol:
<del> raise AttributeError, 'fmt has wrong number of % formats. ' + fmt
<add> raise AttributeError('fmt has wrong number of %% formats. %s'
<add> % fmt)
<ide> else:
<ide> format = fmt
<ide>
<ide> for row in X:
<del> fh.write(format%tuple(row) + '\n')
<add> fh.write(format % tuple(row) + '\n')
<ide>
<ide> import re
<ide> def fromregex(file, regexp, dtype):
| 1
|
PHP
|
PHP
|
add more typing to log
|
a7a19f5e6b5a3320477c3b1335b1de4d79f79282
|
<ide><path>src/Log/Engine/BaseLog.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> public function __construct(array $config = [])
<ide> *
<ide> * @return array
<ide> */
<del> public function levels()
<add> public function levels(): array
<ide> {
<ide> return $this->_config['levels'];
<ide> }
<ide>
<ide> /**
<ide> * Get the scopes this logger is interested in.
<ide> *
<del> * @return array
<add> * @return array|bool
<ide> */
<ide> public function scopes()
<ide> {
<ide> public function scopes()
<ide> * @param array $context Additional logging information for the message.
<ide> * @return string
<ide> */
<del> protected function _format($data, array $context = [])
<add> protected function _format($data, array $context = []): string
<ide> {
<ide> if (is_string($data)) {
<ide> return $data;
<ide><path>src/Log/Engine/ConsoleLog.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> public function __construct(array $config = [])
<ide> * @param array $context Additional information about the logged message
<ide> * @return bool success of write.
<ide> */
<del> public function log($level, $message, array $context = [])
<add> public function log($level, $message, array $context = []): bool
<ide> {
<ide> $message = $this->_format($message, $context);
<ide> $output = date('Y-m-d H:i:s') . ' ' . ucfirst($level) . ': ' . $message;
<ide><path>src/Log/Engine/FileLog.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> public function __construct(array $config = [])
<ide> * @param array $context Additional information about the logged message
<ide> * @return bool success of write.
<ide> */
<del> public function log($level, $message, array $context = [])
<add> public function log($level, $message, array $context = []): bool
<ide> {
<ide> $message = $this->_format($message, $context);
<ide> $output = date('Y-m-d H:i:s') . ' ' . ucfirst($level) . ': ' . $message . "\n";
<ide> public function log($level, $message, array $context = [])
<ide> $pathname = $this->_path . $filename;
<ide> $mask = $this->_config['mask'];
<ide> if (!$mask) {
<del> return file_put_contents($pathname, $output, FILE_APPEND);
<add> return file_put_contents($pathname, $output, FILE_APPEND) > 0;
<ide> }
<ide>
<ide> $exists = file_exists($pathname);
<ide> public function log($level, $message, array $context = [])
<ide> $selfError = false;
<ide> }
<ide>
<del> return $result;
<add> return $result > 0;
<ide> }
<ide>
<ide> /**
<ide> public function log($level, $message, array $context = [])
<ide> * @param string $level The level of log.
<ide> * @return string File name
<ide> */
<del> protected function _getFilename($level)
<add> protected function _getFilename(string $level): string
<ide> {
<ide> $debugTypes = ['notice', 'info', 'debug'];
<ide>
<ide> protected function _getFilename($level)
<ide> * @return bool|null True if rotated successfully or false in case of error.
<ide> * Null if file doesn't need to be rotated.
<ide> */
<del> protected function _rotateFile($filename)
<add> protected function _rotateFile($filename): ?bool
<ide> {
<ide> $filePath = $this->_path . $filename;
<ide> clearstatcache(true, $filePath);
<ide><path>src/Log/Engine/SyslogLog.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> class SyslogLog extends BaseLog
<ide> * @param array $context Additional information about the logged message
<ide> * @return bool success of write.
<ide> */
<del> public function log($level, $message, array $context = [])
<add> public function log($level, $message, array $context = []): bool
<ide> {
<ide> if (!$this->_open) {
<ide> $config = $this->_config;
<ide> public function log($level, $message, array $context = [])
<ide> * @param int $facility the stream or facility to log to
<ide> * @return void
<ide> */
<del> protected function _open($ident, $options, $facility)
<add> protected function _open($ident, $options, $facility): void
<ide> {
<ide> openlog($ident, $options, $facility);
<ide> }
<ide> protected function _open($ident, $options, $facility)
<ide> * @param string $message Message to log.
<ide> * @return bool
<ide> */
<del> protected function _write($priority, $message)
<add> protected function _write($priority, $message): bool
<ide> {
<ide> return syslog($priority, $message);
<ide> }
<ide><path>src/Log/Log.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> use Cake\Core\StaticConfigTrait;
<ide> use Cake\Log\Engine\BaseLog;
<ide> use InvalidArgumentException;
<add>use Psr\Log\LoggerInterface;
<ide>
<ide> /**
<ide> * Logs messages to configured Log adapters. One or more adapters
<ide> class Log
<ide> *
<ide> * @return void
<ide> */
<del> protected static function _init()
<add> protected static function _init(): void
<ide> {
<ide> if (empty(static::$_registry)) {
<ide> static::$_registry = new LogEngineRegistry();
<ide> protected static function _init()
<ide> *
<ide> * @return void
<ide> */
<del> protected static function _loadConfig()
<add> protected static function _loadConfig(): void
<ide> {
<ide> foreach (static::$_config as $name => $properties) {
<ide> if (isset($properties['engine'])) {
<ide> protected static function _loadConfig()
<ide> *
<ide> * @return void
<ide> */
<del> public static function reset()
<add> public static function reset(): void
<ide> {
<ide> static::$_registry = null;
<ide> static::$_config = [];
<ide> public static function reset()
<ide> *
<ide> * @return array active log levels
<ide> */
<del> public static function levels()
<add> public static function levels(): array
<ide> {
<ide> return static::$_levels;
<ide> }
<ide> public static function levels()
<ide> * @return void
<ide> * @throws \BadMethodCallException When trying to modify an existing config.
<ide> */
<del> public static function setConfig($key, $config = null)
<add> public static function setConfig($key, $config = null): void
<ide> {
<ide> static::_setConfig($key, $config);
<ide> static::$_dirtyConfig = true;
<ide> public static function setConfig($key, $config = null)
<ide> * Get a logging engine.
<ide> *
<ide> * @param string $name Key name of a configured adapter to get.
<del> * @return \Cake\Log\Engine\BaseLog|false Instance of BaseLog or false if not found
<add> * @return \Psr\Log\LoggerInterface|null Instance of LoggerInterface or null if not found
<ide> */
<del> public static function engine($name)
<add> public static function engine($name): ?LoggerInterface
<ide> {
<ide> static::_init();
<ide> if (static::$_registry->{$name}) {
<ide> return static::$_registry->{$name};
<ide> }
<ide>
<del> return false;
<add> return null;
<ide> }
<ide>
<ide> /**
<ide> public static function engine($name)
<ide> * @return bool Success
<ide> * @throws \InvalidArgumentException If invalid level is passed.
<ide> */
<del> public static function write($level, $message, $context = [])
<add> public static function write($level, $message, $context = []): bool
<ide> {
<ide> static::_init();
<ide> if (is_int($level) && in_array($level, static::$_levelMap)) {
<ide> public static function write($level, $message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function emergency($message, $context = [])
<add> public static function emergency($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function emergency($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function alert($message, $context = [])
<add> public static function alert($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function alert($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function critical($message, $context = [])
<add> public static function critical($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function critical($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function error($message, $context = [])
<add> public static function error($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function error($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function warning($message, $context = [])
<add> public static function warning($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function warning($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function notice($message, $context = [])
<add> public static function notice($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function notice($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function debug($message, $context = [])
<add> public static function debug($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide> public static function debug($message, $context = [])
<ide> * See Cake\Log\Log::setConfig() for more information on logging scopes.
<ide> * @return bool Success
<ide> */
<del> public static function info($message, $context = [])
<add> public static function info($message, $context = []): bool
<ide> {
<ide> return static::write(__FUNCTION__, $message, $context);
<ide> }
<ide><path>src/Log/LogEngineRegistry.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> protected function _create($class, $alias, $settings)
<ide> * @param string $name The logger name.
<ide> * @return void
<ide> */
<del> public function unload($name)
<add> public function unload($name): void
<ide> {
<ide> unset($this->_loaded[$name]);
<ide> }
<ide><path>src/Log/LogTrait.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide> trait LogTrait
<ide> * @param string|array $context Additional log data relevant to this message.
<ide> * @return bool Success of log write.
<ide> */
<del> public function log($msg, $level = LogLevel::ERROR, $context = [])
<add> public function log($msg, $level = LogLevel::ERROR, $context = []): bool
<ide> {
<ide> return Log::write($level, $msg, $context);
<ide> }
<ide><path>tests/TestCase/Log/Engine/BaseLogTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
<ide><path>tests/TestCase/Log/Engine/ConsoleLogTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide><path>tests/TestCase/Log/Engine/FileLogTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * FileLogTest file
<ide> *
<ide><path>tests/TestCase/Log/Engine/SyslogLogTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide><path>tests/TestCase/Log/LogTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) <https://book.cakephp.org/3.0/en/development/testing.html>
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
<ide><path>tests/TestCase/Log/LogTraitTest.php
<ide> <?php
<add>declare(strict_types=1);
<ide> /**
<ide> * CakePHP(tm) : Rapid Development Framework (https://cakephp.org)
<ide> * Copyright (c) Cake Software Foundation, Inc. (https://cakefoundation.org)
| 13
|
Javascript
|
Javascript
|
fix dev-only redbox in polyfillfunctions
|
ab97b9f6021d2b31b7155970c2be0c83f7e43f04
|
<ide><path>Libraries/Utilities/PolyfillFunctions.js
<ide> function polyfillObjectProperty<T>(
<ide> const descriptor = Object.getOwnPropertyDescriptor(object, name);
<ide> if (__DEV__ && descriptor) {
<ide> const backupName = `original${name[0].toUpperCase()}${name.substr(1)}`;
<del> Object.defineProperty(object, backupName, {
<del> ...descriptor,
<del> value: object[name],
<del> });
<add> Object.defineProperty(object, backupName, descriptor);
<ide> }
<ide>
<ide> const {enumerable, writable, configurable} = descriptor || {};
| 1
|
Javascript
|
Javascript
|
freeze hooks to prevent deopts
|
fc4f30329dcbc2ad63416286ff2d42cd07df1348
|
<ide><path>lib/ChunkTemplate.js
<ide> const { SyncWaterfallHook, SyncHook } = require("tapable");
<ide> module.exports = class ChunkTemplate {
<ide> constructor(outputOptions) {
<ide> this.outputOptions = outputOptions || {};
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> /** @type {SyncWaterfallHook<TODO[], RenderManifestOptions>} */
<ide> renderManifest: new SyncWaterfallHook(["result", "options"]),
<ide> modules: new SyncWaterfallHook([
<ide> module.exports = class ChunkTemplate {
<ide> renderWithEntry: new SyncWaterfallHook(["source", "chunk"]),
<ide> hash: new SyncHook(["hash"]),
<ide> hashForChunk: new SyncHook(["hash", "chunk"])
<del> };
<add> });
<ide> }
<ide>
<ide> /**
<ide><path>lib/Compilation.js
<ide> class Compilation {
<ide> * @param {Compiler} compiler the compiler which created the compilation
<ide> */
<ide> constructor(compiler) {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> /** @type {SyncHook<Module>} */
<ide> buildModule: new SyncHook(["module"]),
<ide> /** @type {SyncHook<Module>} */
<ide> class Compilation {
<ide> // TODO move them for webpack 5
<ide> /** @type {SyncHook<object, Module>} */
<ide> normalModuleLoader: new SyncHook(["loaderContext", "module"])
<del> };
<add> });
<ide> /** @type {string=} */
<ide> this.name = undefined;
<ide> /** @type {Compiler} */
<ide><path>lib/Compiler.js
<ide> const ConcurrentCompilationError = require("./ConcurrentCompilationError");
<ide>
<ide> class Compiler {
<ide> constructor(context) {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> /** @type {SyncBailHook<Compilation>} */
<ide> shouldEmit: new SyncBailHook(["compilation"]),
<ide> /** @type {AsyncSeriesHook<Stats>} */
<ide> class Compiler {
<ide> afterResolvers: new SyncHook(["compiler"]),
<ide> /** @type {SyncBailHook<string, EntryOptions>} */
<ide> entryOption: new SyncBailHook(["context", "entry"])
<del> };
<add> });
<ide>
<ide> /** @type {string=} */
<ide> this.name = undefined;
<ide><path>lib/ContextModuleFactory.js
<ide> const EMPTY_RESOLVE_OPTIONS = {};
<ide>
<ide> module.exports = class ContextModuleFactory {
<ide> constructor(resolverFactory) {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> /** @type {AsyncSeriesWaterfallHook<TODO>} */
<ide> beforeResolve: new AsyncSeriesWaterfallHook(["data"]),
<ide> /** @type {AsyncSeriesWaterfallHook<TODO>} */
<ide> module.exports = class ContextModuleFactory {
<ide> contextModuleFiles: new SyncWaterfallHook(["files"]),
<ide> /** @type {SyncWaterfallHook<TODO[]>} */
<ide> alternatives: new AsyncSeriesWaterfallHook(["modules"])
<del> };
<add> });
<ide> this.resolverFactory = resolverFactory;
<ide> }
<ide>
<ide><path>lib/DllModuleFactory.js
<ide> const DllModule = require("./DllModule");
<ide>
<ide> class DllModuleFactory {
<ide> constructor() {
<del> this.hooks = {};
<add> this.hooks = Object.freeze({});
<ide> }
<ide> create(data, callback) {
<ide> const dependency = data.dependencies[0];
<ide><path>lib/HotUpdateChunkTemplate.js
<ide> const { SyncWaterfallHook, SyncHook } = require("tapable");
<ide> module.exports = class HotUpdateChunkTemplate {
<ide> constructor(outputOptions) {
<ide> this.outputOptions = outputOptions || {};
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> modules: new SyncWaterfallHook([
<ide> "source",
<ide> "modules",
<ide> module.exports = class HotUpdateChunkTemplate {
<ide> "dependencyTemplates"
<ide> ]),
<ide> hash: new SyncHook(["hash"])
<del> };
<add> });
<ide> }
<ide>
<ide> render(
<ide><path>lib/JavascriptParser.js
<ide> const EMPTY_COMMENT_OPTIONS = {
<ide>
<ide> class JavascriptParser {
<ide> constructor(options, sourceType = "auto") {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> evaluateTypeof: new HookMap(() => new SyncBailHook(["expression"])),
<ide> evaluate: new HookMap(() => new SyncBailHook(["expression"])),
<ide> evaluateIdentifier: new HookMap(() => new SyncBailHook(["expression"])),
<ide> class JavascriptParser {
<ide> expressionAnyMember: new HookMap(() => new SyncBailHook(["expression"])),
<ide> expressionConditionalOperator: new SyncBailHook(["expression"]),
<ide> program: new SyncBailHook(["ast", "comments"])
<del> };
<add> });
<ide> this.options = options;
<ide> this.sourceType = sourceType;
<ide> this.scope = undefined;
<ide><path>lib/MainTemplate.js
<ide> module.exports = class MainTemplate {
<ide> constructor(outputOptions) {
<ide> /** @type {TODO?} */
<ide> this.outputOptions = outputOptions || {};
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> /** @type {SyncWaterfallHook<TODO[], RenderManifestOptions>} */
<ide> renderManifest: new SyncWaterfallHook(["result", "options"]),
<ide> modules: new SyncWaterfallHook([
<ide> module.exports = class MainTemplate {
<ide> // TODO this should be moved somewhere else
<ide> // It's weird here
<ide> hotBootstrap: new SyncWaterfallHook(["source", "chunk", "hash"])
<del> };
<add> });
<ide> this.hooks.startup.tap("MainTemplate", (source, chunk, hash) => {
<ide> /** @type {string[]} */
<ide> const buf = [];
<ide><path>lib/ModuleTemplate.js
<ide> module.exports = class ModuleTemplate {
<ide> constructor(runtimeTemplate, type) {
<ide> this.runtimeTemplate = runtimeTemplate;
<ide> this.type = type;
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> content: new SyncWaterfallHook([
<ide> "source",
<ide> "module",
<ide> module.exports = class ModuleTemplate {
<ide> "dependencyTemplates"
<ide> ]),
<ide> hash: new SyncHook(["hash"])
<del> };
<add> });
<ide> }
<ide>
<ide> /**
<ide><path>lib/MultiCompiler.js
<ide> const ConcurrentCompilationError = require("./ConcurrentCompilationError");
<ide>
<ide> module.exports = class MultiCompiler {
<ide> constructor(compilers) {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> done: new SyncHook(["stats"]),
<ide> invalid: new MultiHook(compilers.map(c => c.hooks.invalid)),
<ide> run: new MultiHook(compilers.map(c => c.hooks.run)),
<ide> watchClose: new SyncHook([]),
<ide> watchRun: new MultiHook(compilers.map(c => c.hooks.watchRun))
<del> };
<add> });
<ide> if (!Array.isArray(compilers)) {
<ide> compilers = Object.keys(compilers).map(name => {
<ide> compilers[name].name = name;
<ide><path>lib/MultiModuleFactory.js
<ide> const MultiModule = require("./MultiModule");
<ide>
<ide> module.exports = class MultiModuleFactory {
<ide> constructor() {
<del> this.hooks = {};
<add> this.hooks = Object.freeze({});
<ide> }
<ide>
<ide> create(data, callback) {
<ide><path>lib/NormalModuleFactory.js
<ide> const dependencyCache = new WeakMap();
<ide>
<ide> class NormalModuleFactory {
<ide> constructor(context, resolverFactory, options) {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> resolver: new SyncWaterfallHook(["resolver"]),
<ide> factory: new SyncWaterfallHook(["factory"]),
<ide> beforeResolve: new AsyncSeriesWaterfallHook(["data"]),
<ide> class NormalModuleFactory {
<ide> generator: new HookMap(
<ide> () => new SyncHook(["generator", "generatorOptions"])
<ide> )
<del> };
<add> });
<ide> this.resolverFactory = resolverFactory;
<ide> this.ruleSet = new RuleSet(options.defaultRules.concat(options.rules));
<ide> this.cachePredicate =
<ide><path>lib/ResolverFactory.js
<ide> const Factory = require("enhanced-resolve").ResolverFactory;
<ide>
<ide> module.exports = class ResolverFactory {
<ide> constructor() {
<del> this.hooks = {
<add> this.hooks = Object.freeze({
<ide> resolveOptions: new HookMap(
<ide> () => new SyncWaterfallHook(["resolveOptions"])
<ide> ),
<ide> resolver: new HookMap(() => new SyncHook(["resolver", "resolveOptions"]))
<del> };
<add> });
<ide> this.cache1 = new WeakMap();
<ide> this.cache2 = new Map();
<ide> }
<ide><path>lib/wasm/WebAssemblyParser.js
<ide> const decoderOpts = {
<ide>
<ide> class WebAssemblyParser {
<ide> constructor(options) {
<del> this.hooks = {};
<add> this.hooks = Object.freeze({});
<ide> this.options = options;
<ide> }
<ide>
| 14 | Python | Python | report length of dev dataset correctly | e55fa1899aa8bae311064004d0edaed8b37979e5 |
<ide><path>spacy/cli/debug_data.py
<ide> def debug_data(
<ide> else:
<ide> msg.text(f"Starting with blank model '{lang}'")
<ide> msg.text(f"{len(train_dataset)} training docs")
<del> msg.text(f"{len(gold_dev_data)} evaluation docs")
<add> msg.text(f"{len(dev_dataset)} evaluation docs")
<ide>
<ide> if not len(gold_dev_data):
<ide> msg.fail("No evaluation docs")
| 1 | Javascript | Javascript | keep the ignored files during the upgrade process | 7492860ffb3a010ff2273abf45c7414c098bdc37 |
<ide><path>react-native-git-upgrade/cliEntry.js
<ide> function configureGitEnv(tmpDir) {
<ide> process.env.GIT_WORK_TREE = '.';
<ide> }
<ide>
<add>function copyCurrentGitIgnoreFile(tmpDir) {
<add> /*
<add> * The user may have added new files or directories in the .gitignore file.
<add> * We need to keep those files ignored during the process, otherwise they
<add> * will be deleted.
<add> * See https://github.com/facebook/react-native/issues/12237
<add> */
<add> try {
<add> const gitignorePath = path.resolve(process.cwd(), '.gitignore');
<add> const repoExcludePath = path.resolve(tmpDir, process.env.GIT_DIR, 'info/exclude');
<add> const content = fs.readFileSync(gitignorePath, 'utf8');
<add> fs.appendFileSync(repoExcludePath, content);
<add> } catch (err) {
<add> if (err.code === 'ENOENT') {
<add> log.info('No .gitignore file found, this step is a no-op');
<add> return;
<add> }
<add>
<add> throw err;
<add> }
<add>}
<add>
<ide> function generateTemplates(generatorDir, appName, verbose) {
<ide> try {
<ide> const yeomanGeneratorEntryPoint = path.resolve(generatorDir, 'index.js');
<ide> async function run(requestedVersion, cliArgs) {
<ide> log.info('Configure Git environment');
<ide> configureGitEnv(tmpDir);
<ide>
<del> log.info('Init Git repository');
<add> log.info('Init temporary Git repository');
<ide> await exec('git init', verbose);
<ide>
<add> log.info('Save current .gitignore file');
<add> copyCurrentGitIgnoreFile(tmpDir);
<add>
<ide> log.info('Add all files to commit');
<ide> await exec('git add .', verbose);
<ide>
| 1 | Python | Python | add references to docs pages | 7e810cced64dee089fecc987194b182800006b6d |
<ide><path>spacy/lookups.py
<ide> def __init__(self):
<ide> """Initialize the Lookups object.
<ide>
<ide> RETURNS (Lookups): The newly created object.
<add>
<add> DOCS: https://spacy.io/api/lookups#init
<ide> """
<ide> self._tables = OrderedDict()
<ide>
<ide> def add_table(self, name, data=SimpleFrozenDict()):
<ide> name (unicode): Unique name of table.
<ide> data (dict): Optional data to add to the table.
<ide> RETURNS (Table): The newly added table.
<add>
<add> DOCS: https://spacy.io/api/lookups#add_table
<ide> """
<ide> if name in self.tables:
<ide> raise ValueError(Errors.E158.format(name=name))
<ide> def get_table(self, name):
<ide>
<ide> name (unicode): Name of the table.
<ide> RETURNS (Table): The table.
<add>
<add> DOCS: https://spacy.io/api/lookups#get_table
<ide> """
<ide> if name not in self._tables:
<ide> raise KeyError(Errors.E159.format(name=name, tables=self.tables))
<ide> def remove_table(self, name):
<ide>
<ide> name (unicode): Name of the table to remove.
<ide> RETURNS (Table): The removed table.
<add>
<add> DOCS: https://spacy.io/api/lookups#remove_table
<ide> """
<ide> if name not in self._tables:
<ide> raise KeyError(Errors.E159.format(name=name, tables=self.tables))
<ide> def has_table(self, name):
<ide>
<ide> name (unicode): Name of the table.
<ide> RETURNS (bool): Whether a table of that name exists.
<add>
<add> DOCS: https://spacy.io/api/lookups#has_table
<ide> """
<ide> return name in self._tables
<ide>
<ide> def to_bytes(self, **kwargs):
<ide> """Serialize the lookups to a bytestring.
<ide>
<ide> RETURNS (bytes): The serialized Lookups.
<add>
<add> DOCS: https://spacy.io/api/lookups#to_bytes
<ide> """
<ide> return srsly.msgpack_dumps(self._tables)
<ide>
<ide> def from_bytes(self, bytes_data, **kwargs):
<ide>
<ide> bytes_data (bytes): The data to load.
<ide> RETURNS (Lookups): The loaded Lookups.
<add>
<add> DOCS: https://spacy.io/api/lookups#from_bytes
<ide> """
<ide> for key, value in srsly.msgpack_loads(bytes_data).items():
<ide> self._tables[key] = Table(key)
<ide> def to_disk(self, path, **kwargs):
<ide> directory, which will be created if it doesn't exist.
<ide>
<ide> path (unicode / Path): The file path.
<add>
<add> DOCS: https://spacy.io/api/lookups#to_disk
<ide> """
<ide> if len(self._tables):
<ide> path = ensure_path(path)
<ide> def from_disk(self, path, **kwargs):
<ide>
<ide> path (unicode / Path): The directory path.
<ide> RETURNS (Lookups): The loaded lookups.
<add>
<add> DOCS: https://spacy.io/api/lookups#from_disk
<ide> """
<ide> path = ensure_path(path)
<ide> filepath = path / "lookups.bin"
<ide> def from_dict(cls, data, name=None):
<ide> data (dict): The dictionary.
<ide> name (unicode): Optional table name for reference.
<ide> RETURNS (Table): The newly created object.
<add>
<add> DOCS: https://spacy.io/api/lookups#table.from_dict
<ide> """
<ide> self = cls(name=name)
<ide> self.update(data)
<ide> def __init__(self, name=None, data=None):
<ide> name (unicode): Optional table name for reference.
<ide> data (dict): Initial data, used to hint Bloom Filter.
<ide> RETURNS (Table): The newly created object.
<add>
<add> DOCS: https://spacy.io/api/lookups#table.init
<ide> """
<ide> OrderedDict.__init__(self)
<ide> self.name = name
<ide> def to_bytes(self):
<ide> """Serialize table to a bytestring.
<ide>
<ide> RETURNS (bytes): The serialized table.
<add>
<add> DOCS: https://spacy.io/api/lookups#table.to_bytes
<ide> """
<ide> data = [
<ide> ("name", self.name),
<ide> def from_bytes(self, bytes_data):
<ide>
<ide> bytes_data (bytes): The data to load.
<ide> RETURNS (Table): The loaded table.
<add>
<add> DOCS: https://spacy.io/api/lookups#table.from_bytes
<ide> """
<ide> loaded = srsly.msgpack_loads(bytes_data)
<ide> data = loaded.get("dict", {})
| 1 | PHP | PHP | avoid use of hash class | 29731cdef99bc433789e4f73b72c9577c3cb0a03 |
<ide><path>src/Datasource/Paginator.php
<ide>
<ide> use Cake\Core\InstanceConfigTrait;
<ide> use Cake\Datasource\Exception\PageOutOfBoundsException;
<del>use Cake\Utility\Hash;
<ide>
<ide> /**
<ide> * This class is used to handle automatic model data pagination.
<ide> public function getPagingParams()
<ide> */
<ide> public function mergeOptions($params, $settings)
<ide> {
<del> $scope = Hash::get($settings, 'scope', null);
<del> if ($scope) {
<del> $params = Hash::get($params, $scope, []);
<add> if (!empty($settings['scope'])) {
<add> $scope = $settings['scope'];
<add> $params = !empty($params[$scope]) ? (array)$params[$scope] : [];
<ide> }
<ide> $params = array_intersect_key($params, array_flip($this->config('whitelist')));
<ide>
| 1 | Javascript | Javascript | fix variabledeclaration visitors | e44ba306a097f20a71b4f53163cdc8cbfd02fcbd |
<ide><path>lib/Parser.js
<ide> class Parser extends Tapable {
<ide> }
<ide>
<ide> prewalkVariableDeclaration(statement) {
<del> if(statement.declarations)
<del> this.prewalkVariableDeclarators(statement.declarations);
<del> }
<del>
<del> walkVariableDeclaration(statement) {
<del> if(statement.declarations)
<del> this.walkVariableDeclarators(statement.declarations);
<del> }
<del>
<del> prewalkClassDeclaration(statement) {
<del> if(statement.id) {
<del> this.scope.renames.set(statement.id.name, null);
<del> this.scope.definitions.add(statement.id.name);
<del> }
<del> }
<del>
<del> walkClassDeclaration(statement) {
<del> this.walkClass(statement);
<del> }
<del>
<del> prewalkSwitchCases(switchCases) {
<del> for(let index = 0, len = switchCases.length; index < len; index++) {
<del> const switchCase = switchCases[index];
<del> this.prewalkStatements(switchCase.consequent);
<del> }
<del> }
<del>
<del> walkSwitchCases(switchCases) {
<del> for(let index = 0, len = switchCases.length; index < len; index++) {
<del> const switchCase = switchCases[index];
<del>
<del> if(switchCase.test) {
<del> this.walkExpression(switchCase.test);
<del> }
<del> this.walkStatements(switchCase.consequent);
<del> }
<del> }
<del>
<del> walkCatchClause(catchClause) {
<del> this.inScope([catchClause.param], () => {
<del> this.prewalkStatement(catchClause.body);
<del> this.walkStatement(catchClause.body);
<del> });
<del> }
<del>
<del> prewalkVariableDeclarators(declarators) {
<del> for(const declarator of declarators) {
<add> const hookMap = statement.kind === "const" ? this.hooks.varDeclarationConst :
<add> statement.kind === "let" ? this.hooks.varDeclarationLet :
<add> this.hooks.varDeclarationVar;
<add> for(const declarator of statement.declarations) {
<ide> switch(declarator.type) {
<ide> case "VariableDeclarator":
<ide> {
<ide> this.enterPattern(declarator.id, (name, decl) => {
<del> const hookMap = declarator.kind === "const" ? this.hooks.varDeclarationConst :
<del> declarator.kind === "let" ? this.hooks.varDeclarationLet :
<del> this.hooks.varDeclarationVar;
<del> const hook = hookMap.get(name);
<add> let hook = hookMap.get(name);
<ide> if(hook === undefined || !hook.call(decl)) {
<del> const hook = this.hooks.varDeclaration.get(name);
<add> hook = this.hooks.varDeclaration.get(name);
<ide> if(hook === undefined || !hook.call(decl)) {
<ide> this.scope.renames.set(name, null);
<ide> this.scope.definitions.add(name);
<ide> class Parser extends Tapable {
<ide> }
<ide> }
<ide>
<del> walkVariableDeclarators(declarators) {
<del> for(const declarator of declarators) {
<add> walkVariableDeclaration(statement) {
<add> for(const declarator of statement.declarations) {
<ide> switch(declarator.type) {
<ide> case "VariableDeclarator":
<ide> {
<ide> class Parser extends Tapable {
<ide> }
<ide> }
<ide>
<add> prewalkClassDeclaration(statement) {
<add> if(statement.id) {
<add> this.scope.renames.set(statement.id.name, null);
<add> this.scope.definitions.add(statement.id.name);
<add> }
<add> }
<add>
<add> walkClassDeclaration(statement) {
<add> this.walkClass(statement);
<add> }
<add>
<add> prewalkSwitchCases(switchCases) {
<add> for(let index = 0, len = switchCases.length; index < len; index++) {
<add> const switchCase = switchCases[index];
<add> this.prewalkStatements(switchCase.consequent);
<add> }
<add> }
<add>
<add> walkSwitchCases(switchCases) {
<add> for(let index = 0, len = switchCases.length; index < len; index++) {
<add> const switchCase = switchCases[index];
<add>
<add> if(switchCase.test) {
<add> this.walkExpression(switchCase.test);
<add> }
<add> this.walkStatements(switchCase.consequent);
<add> }
<add> }
<add>
<add> walkCatchClause(catchClause) {
<add> this.inScope([catchClause.param], () => {
<add> this.prewalkStatement(catchClause.body);
<add> this.walkStatement(catchClause.body);
<add> });
<add> }
<add>
<ide> walkPattern(pattern) {
<ide> switch(pattern.type) {
<ide> case "ArrayPattern":
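The refactored `prewalkVariableDeclaration` above selects a kind-specific hook map (`const`/`let`/`var`) per declaration and falls back to a generic hook before applying the default action. A minimal model of that dispatch using plain `Map`s — the hook names are kept, but the bail semantics are simplified to "a handler returning true suppresses the default":

```javascript
// Simplified model of webpack's per-kind hook dispatch: consult the
// kind-specific handler map first, then the generic map, then the default.
const hooks = {
  varDeclarationConst: new Map(),
  varDeclarationLet: new Map(),
  varDeclarationVar: new Map(),
  varDeclaration: new Map()
};

const definitions = new Set();

function prewalkDeclarator(kind, name) {
  const hookMap =
    kind === 'const' ? hooks.varDeclarationConst :
    kind === 'let' ? hooks.varDeclarationLet :
    hooks.varDeclarationVar;
  let hook = hookMap.get(name);
  if (hook === undefined || !hook(name)) {
    hook = hooks.varDeclaration.get(name);
    if (hook === undefined || !hook(name)) {
      definitions.add(name); // default action: record the definition
    }
  }
}

// A kind-specific hook that "bails" (returns true) suppresses the default.
hooks.varDeclarationConst.set('special', () => true);

prewalkDeclarator('const', 'special');
prewalkDeclarator('let', 'ordinary');
console.log([...definitions]); // [ 'ordinary' ]
```

This also shows why the original code was buggy: the kind lives on the `VariableDeclaration` statement, not on each declarator, so reading `declarator.kind` always hit the `var` branch.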
| 1 | Ruby | Ruby | use tablealias nodes for aliasing subselects | e8563a6234b4f60f8d756d89b3b35026a467694e |
<ide><path>lib/arel/factory_methods.rb
<ide> module Arel
<ide> ###
<ide> # Methods for creating various nodes
<ide> module FactoryMethods
<add> def create_table_alias relation, name
<add> Nodes::TableAlias.new(relation, name)
<add> end
<add>
<ide> def create_join to, constraint = nil, klass = Nodes::InnerJoin
<ide> klass.new(to, constraint)
<ide> end
<ide><path>lib/arel/select_manager.rb
<ide> def exists
<ide> end
<ide>
<ide> def as other
<del> Nodes::As.new grouping(@ast), Nodes::SqlLiteral.new(other)
<add> create_table_alias grouping(@ast), Nodes::SqlLiteral.new(other)
<ide> end
<ide>
<ide> def where_clauses
<ide><path>lib/arel/visitors/to_sql.rb
<ide> def quote value, column = nil
<ide> end
<ide>
<ide> def quote_table_name name
<del> @quoted_tables[name] ||= @connection.quote_table_name(name)
<add> @quoted_tables[name] ||= Arel::Nodes::SqlLiteral === name ? name : @connection.quote_table_name(name)
<ide> end
<ide>
<ide> def quote_column_name name
<ide><path>test/test_select_manager.rb
<ide> def test_join_sources
<ide> manager1.from(as)
<ide>
<ide> manager1.to_sql.must_be_like %{
<del> SELECT lol FROM (SELECT * FROM "users" ) AS omg
<add> SELECT lol FROM (SELECT * FROM "users" ) omg
<ide> }
<ide> end
<ide> end
| 4 | Text | Text | change "为程序员阅读的书籍" to "程序员书单" | c405e55ea11579ae66a8c63a7f447d4c0ea0274a |
<ide><path>guide/chinese/book-recommendations/index.md
<ide> ---
<ide> title: Books to Read for Programmers
<del>localeTitle: 为程序员阅读的书籍
<add>localeTitle: 程序员书单
<ide> ---
<ide> ### 书籍清单
<ide>
<ide> _软件工程的事实与谬误_
<ide>
<ide> 此列表是从Reddit和Stackoverflow上的多个建议线程编译而来的。
<ide>
<del>请随意添加您认为有用的更多内容!
<ide>\ No newline at end of file
<add>请随意添加您认为有用的更多内容!
| 1 | Ruby | Ruby | remove nil check | 1678e959e973de32287b65c52ebc6cce87148951 |
<ide><path>activerecord/lib/active_record/associations/belongs_to_association.rb
<ide> def replace(record)
<ide> raise_on_type_mismatch!(record) if record
<ide>
<ide> update_counters(record)
<del> replace_keys(record)
<add> if record
<add> replace_keys(record)
<add> else
<add> remove_keys
<add> end
<ide> set_inverse_instance(record) if record
<ide>
<ide> @updated = true if record
<ide> def different_target?(record)
<ide> end
<ide>
<ide> def replace_keys(record)
<del> if record
<del> owner[reflection.foreign_key] = record[reflection.association_primary_key(record.class)]
<del> else
<del> owner[reflection.foreign_key] = nil
<del> end
<add> owner[reflection.foreign_key] = record[reflection.association_primary_key(record.class)]
<add> end
<add>
<add> def remove_keys
<add> owner[reflection.foreign_key] = nil
<ide> end
<ide>
<ide> def foreign_key_present?
<ide><path>activerecord/lib/active_record/associations/belongs_to_polymorphic_association.rb
<ide> def klass
<ide>
<ide> def replace_keys(record)
<ide> super
<del> owner[reflection.foreign_type] = record && record.class.base_class.name
<add> owner[reflection.foreign_type] = record.class.base_class.name
<ide> end
<ide>
<ide> def different_target?(record)
| 2 | Python | Python | fix typo in `secrets_manager.py` docstring | 0f327788b5b0887c463cb83dd8f732245da96577 |
<ide><path>airflow/providers/amazon/aws/secrets/secrets_manager.py
<ide> class SecretsManagerBackend(BaseSecretsBackend, LoggingMixin):
<ide> :param variables_prefix: Specifies the prefix of the secret to read to get Variables.
<ide> If set to None (null), requests for variables will not be sent to AWS Secrets Manager
<ide> :type variables_prefix: str
<del> :param config_prefix: Specifies the prefix of the secret to read to get Variables.
<add> :param config_prefix: Specifies the prefix of the secret to read to get Configurations.
<ide> If set to None (null), requests for configurations will not be sent to AWS Secrets Manager
<ide> :type config_prefix: str
<ide> :param profile_name: The name of a profile to use. If not given, then the default profile is used.
| 1 | Python | Python | add support for pandas dataframe | c173b764c768a686139636da24c54b30734f5c83 |
<ide><path>keras/engine/training.py
<ide> def _standardize_input_data(data, names, shapes=None,
<ide> if data is None:
<ide> return [None for _ in range(len(names))]
<ide> if isinstance(data, dict):
<add> for key, value in data.items():
<add> if value.__class__.__name__ == 'DataFrame':
<add> data[key] = value.values
<ide> arrays = []
<ide> for name in names:
<ide> if name not in data:
<ide> def _standardize_input_data(data, names, shapes=None,
<ide> str(names))
<ide> arrays.append(data[name])
<ide> elif isinstance(data, list):
<add> for key, value in enumerate(data):
<add> if value.__class__.__name__ == 'DataFrame':
<add> data[key] = value.values
<ide> if len(data) != len(names):
<ide> if data and hasattr(data[0], 'shape'):
<ide> raise ValueError('Error when checking model ' +
<ide> def _standardize_input_data(data, names, shapes=None,
<ide> 'The list you passed was: ' +
<ide> str(data)[:200])
<ide> arrays = data
<add> elif data.__class__.__name__ == 'DataFrame':
<add> # test if data is a DataFrame, without pandas installed
<add> data = data.values
<ide> else:
<ide> if not hasattr(data, 'shape'):
<ide> raise TypeError('Error when checking model ' +
<ide><path>setup.py
<ide> 'tests': ['pytest',
<ide> 'pytest-pep8',
<ide> 'pytest-xdist',
<del> 'pytest-cov'],
<add> 'pytest-cov',
<add> 'pandas'],
<ide> },
<ide> classifiers=[
<ide> 'Development Status :: 5 - Production/Stable',
<ide><path>tests/keras/engine/test_training.py
<ide> import pytest
<ide> import numpy as np
<add>import pandas as pd
<ide> from numpy.testing import assert_allclose
<ide> import sys
<ide> import scipy.sparse as sparse
<ide> def test_model_methods():
<ide>
<ide> input_a_np = np.random.random((10, 3))
<ide> input_b_np = np.random.random((10, 3))
<add> input_a_df = pd.DataFrame(input_a_np)
<add> input_b_df = pd.DataFrame(input_b_np)
<ide>
<ide> output_a_np = np.random.random((10, 4))
<ide> output_b_np = np.random.random((10, 3))
<add> output_a_df = pd.DataFrame(output_a_np)
<add> output_b_df = pd.DataFrame(output_b_np)
<ide>
<ide> # training/testing doesn't work before compiling.
<ide> with pytest.raises(RuntimeError):
<ide> def test_model_methods():
<ide> [output_a_np, output_b_np])
<ide> out = model.train_on_batch({'input_a': input_a_np, 'input_b': input_b_np},
<ide> {'dense_1': output_a_np, 'dropout': output_b_np})
<add> out = model.train_on_batch([input_a_df, input_b_df],
<add> [output_a_df, output_b_df])
<ide>
<ide> # test fit
<ide> out = model.fit([input_a_np, input_b_np],
<ide> def test_model_methods():
<ide> out = model.fit({'input_a': input_a_np, 'input_b': input_b_np},
<ide> {'dense_1': output_a_np, 'dropout': output_b_np},
<ide> epochs=1, batch_size=4)
<add> out = model.fit([input_a_df, input_b_df],
<add> [output_a_df, output_b_df], epochs=1, batch_size=4)
<ide>
<ide> # test validation_split
<ide> out = model.fit([input_a_np, input_b_np],
<ide> def test_model_methods():
<ide> [output_a_np, output_b_np])
<ide> out = model.test_on_batch({'input_a': input_a_np, 'input_b': input_b_np},
<ide> {'dense_1': output_a_np, 'dropout': output_b_np})
<add> out = model.test_on_batch([input_a_df, input_b_df],
<add> [output_a_df, output_b_df])
<ide>
<ide> # predict_on_batch
<ide> out = model.predict_on_batch([input_a_np, input_b_np])
<ide> out = model.predict_on_batch({'input_a': input_a_np, 'input_b': input_b_np})
<add> out = model.predict_on_batch([input_a_df, input_b_df])
<ide>
<ide> # predict, evaluate
<ide> input_a_np = np.random.random((10, 3))
<ide> def test_model_methods():
<ide> output_b_np = np.random.random((10, 3))
<ide>
<ide> out = model.evaluate([input_a_np, input_b_np], [output_a_np, output_b_np], batch_size=4)
<add> out = model.evaluate([input_a_df, input_b_df], [output_a_df, output_b_df], batch_size=4)
<ide> out = model.predict([input_a_np, input_b_np], batch_size=4)
<add> out = model.predict([input_a_df, input_b_df], batch_size=4)
<ide>
<ide> # with sample_weight
<ide> input_a_np = np.random.random((10, 3))
| 3 | Javascript | Javascript | fix circular dependencies in mockmodal | 507b05f4c02b46109f483a2b79c924a775fd7bd3 |
<ide><path>jest/mockModal.js
<ide> 'use strict';
<ide>
<ide> const React = require('react');
<del>const Modal = require('../Libraries/Modal/Modal');
<add>import typeof Modal from '../Libraries/Modal/Modal';
<ide>
<ide> function mockModal(BaseComponent: $FlowFixMe) {
<ide> class ModalMock extends BaseComponent {
<del> render(): React.Element<typeof Modal> {
<add> render(): React.Element<Modal> {
<ide> return (
<ide> <BaseComponent {...this.props}>
<ide> {this.props.visible !== true ? null : this.props.children}
| 1 | Ruby | Ruby | remove stray backtick | 69799d97b1e7314912b2ee234dec2c179c5fb969 |
<ide><path>Library/Homebrew/extend/os/mac/formula_cellar_checks.rb
<ide> def check_linkage
<ide> if tab.poured_from_bottle
<ide> output += <<-EOS.undent
<ide> Rebuild this from source with:
<del> brew reinstall --build-from-source #{formula}`
<add> brew reinstall --build-from-source #{formula}
<ide> If that's successful, file an issue#{formula.tap ? " here:\n #{formula.tap.issues_url}" : "."}
<ide> EOS
<ide> end
| 1 | Text | Text | improve explanation of package.json "type" field | 76d4a234680d96b4284a063c1c69da4b7c2b1ced |
<ide><path>doc/api/esm.md
<ide> or when referenced by `import` statements within ES module code:
<ide>
<ide> ### `package.json` `"type"` field
<ide>
<del>Files ending with `.js` or `.mjs`, or lacking any extension,
<del>will be loaded as ES modules when the nearest parent `package.json` file
<del>contains a top-level field `"type"` with a value of `"module"`.
<add>Files ending with `.js` or lacking any extension will be loaded as ES modules
<add>when the nearest parent `package.json` file contains a top-level field `"type"`
<add>with a value of `"module"`.
<ide>
<ide> The nearest parent `package.json` is defined as the first `package.json` found
<ide> when searching in the current folder, that folder’s parent, and so on up
<ide> future-proof the package in case the default type of Node.js ever changes, and
<ide> it will also make things easier for build tools and loaders to determine how the
<ide> files in the package should be interpreted.
<ide>
<add>Regardless of the value of the `"type"` field, `.mjs` files are always treated
<add>as ES modules and `.cjs` files are always treated as CommonJS.
<add>
<ide> ### Package Scope and File Extensions
<ide>
<ide> A folder containing a `package.json` file, and all subfolders below that
| 1 | Javascript | Javascript | remove duplicate words in comments | 47ee340a9eb0cd7c73e9f2db8896440881d9caf1 |
<ide><path>lib/internal/http2/core.js
<ide> function onFrameError(id, type, code) {
<ide>
<ide> // Receiving a GOAWAY frame from the connected peer is a signal that no
<ide> // new streams should be created. If the code === NGHTTP2_NO_ERROR, we
<del>// are going to send our our close, but allow existing frames to close
<add>// are going to send our close, but allow existing frames to close
<ide> // normally. If code !== NGHTTP2_NO_ERROR, we are going to send our own
<ide> // close using the same code then destroy the session with an error.
<ide> // The goaway event will be emitted on next tick.
<ide> function requestOnConnect(headers, options) {
<ide> if (session.destroyed)
<ide> return;
<ide>
<del> // If the session was closed while waiting for for the connect, destroy
<add> // If the session was closed while waiting for the connect, destroy
<ide> // the stream and do not continue with the request.
<ide> if (session.closed) {
<ide> const err = new errors.Error('ERR_HTTP2_GOAWAY_SESSION');
| 1 | Javascript | Javascript | use long property names in share scope | ab0ea4a4ce9d727cf1ce01843fb3867fb537eb7d |
<ide><path>lib/sharing/ConsumeSharedRuntimeModule.js
<ide> class ConsumeSharedRuntimeModule extends RuntimeModule {
<ide> "}"
<ide> ]),
<ide> "}",
<del> "function load(scopeName, key) {",
<del> Template.indent([
<add> `var get = ${runtimeTemplate.returningFunction(
<add> "(sharedModule.loaded = 1, sharedModule.get())",
<add> "sharedModule"
<add> )};`,
<add> `var load = ${runtimeTemplate.basicFunction("scopeName, key", [
<ide> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<ide> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<ide> `ensureExistence(scope, scopeName, key);`,
<del> "return scope[key].l = 1, scope[key].g();"
<del> ]),
<del> "}",
<del> "function loadFallback(scopeName, key, fallback) {",
<del> Template.indent([
<del> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<del> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<del> `return scope && ${RuntimeGlobals.hasOwnProperty}(scope, key) ? (scope[key].l = 1, scope[key].g()) : fallback();`
<del> ]),
<del> "}",
<del> "function loadVersionCheck(scopeName, key, version) {",
<del> Template.indent([
<del> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<del> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<del> "ensureExistence(scope, scopeName, key);",
<del> "checkVersion(scope, key, version);",
<del> "return scope[key].l = 1, scope[key].g();"
<del> ]),
<del> "}",
<del> "function loadStrictVersionCheck(scopeName, key, version) {",
<del> Template.indent([
<del> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<del> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<del> "ensureExistence(scope, scopeName, key);",
<del> "checkVersion(scope, key, version, 1);",
<del> "return scope[key].l = 1, scope[key].g();"
<del> ]),
<del> "}",
<del> "function loadStrictVersionCheckFallback(scopeName, key, version, fallback) {",
<del> Template.indent([
<del> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<del> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<del> `return scope && ${RuntimeGlobals.hasOwnProperty}(scope, key) && !checkVersion(scope, key, version) ? (scope[key].l = 1, scope[key].g()) : fallback();`
<del> ]),
<del> "}",
<del> "function loadVersionCheckFallback(scopeName, key, version, fallback) {",
<del> Template.indent([
<del> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<del> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<del> `if(!scope || !${RuntimeGlobals.hasOwnProperty}(scope, key)) return fallback();`,
<del> "checkVersion(scope, key, version);",
<del> "return scope[key].l = 1, scope[key].g();"
<del> ]),
<del> "}",
<add> "return get(scope[key]);"
<add> ])};`,
<add> `var loadFallback = ${runtimeTemplate.basicFunction(
<add> "scopeName, key, fallback",
<add> [
<add> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<add> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<add> `return scope && ${RuntimeGlobals.hasOwnProperty}(scope, key) ? get(scope[key]) : fallback();`
<add> ]
<add> )};`,
<add> `var loadVersionCheck = ${runtimeTemplate.basicFunction(
<add> "scopeName, key, version",
<add> [
<add> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<add> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<add> "ensureExistence(scope, scopeName, key);",
<add> "checkVersion(scope, key, version);",
<add> "return get(scope[key]);"
<add> ]
<add> )};`,
<add> `var loadStrictVersionCheck = ${runtimeTemplate.basicFunction(
<add> "scopeName, key, version",
<add> [
<add> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<add> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<add> "ensureExistence(scope, scopeName, key);",
<add> "checkVersion(scope, key, version, 1);",
<add> "return get(scope[key]);"
<add> ]
<add> )};`,
<add> `var loadStrictVersionCheckFallback = ${runtimeTemplate.basicFunction(
<add> "scopeName, key, version, fallback",
<add> [
<add> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<add> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<add> `return scope && ${RuntimeGlobals.hasOwnProperty}(scope, key) && !checkVersion(scope, key, version) ? get(scope[key]) : fallback();`
<add> ]
<add> )};`,
<add> `var loadVersionCheckFallback = ${runtimeTemplate.basicFunction(
<add> "scopeName, key, version, fallback",
<add> [
<add> `${RuntimeGlobals.initializeSharing}(scopeName);`,
<add> `var scope = ${RuntimeGlobals.shareScopeMap}[scopeName];`,
<add> `if(!scope || !${RuntimeGlobals.hasOwnProperty}(scope, key)) return fallback();`,
<add> "checkVersion(scope, key, version);",
<add> "return get(scope[key]);"
<add> ]
<add> )};`,
<ide> "var installedModules = {};",
<ide> "var moduleToHandlerMapping = {",
<ide> Template.indent(
<ide><path>lib/sharing/ShareRuntimeModule.js
<ide> class ShareRuntimeModule extends RuntimeModule {
<ide> )};`,
<ide> "if(scope[name]) {",
<ide> Template.indent([
<del> 'if(scope[name].l) return warn("Ignoring providing of already used shared module: " + name);',
<del> "var v = scope[name].v;",
<add> 'if(scope[name].loaded) return warn("Ignoring providing of already used shared module: " + name);',
<add> "var v = scope[name].version;",
<ide> "if(v && version) {",
<ide> Template.indent([
<ide> "for(var i = 0; i < version.length; i++) {",
<ide> class ShareRuntimeModule extends RuntimeModule {
<ide> "} else if(!version) return; else if(v && v !== version) return versionConflict()"
<ide> ]),
<ide> "}",
<del> "scope[name] = { g: factory, v: version };"
<add> "scope[name] = { get: factory, version: version };"
<ide> ]
<ide> )};`,
<ide> `var initExternal = ${runtimeTemplate.basicFunction("id", [
<ide><path>test/configCases/container/container-entry-overridables/index.js
<ide> it("should expose modules from the container", async () => {
<ide> expect(container.init).toBeTypeOf("function");
<ide> container.init({
<ide> value: {
<del> g: () =>
<add> get: () =>
<ide> new Promise(resolve => {
<ide> setTimeout(() => {
<ide> resolve(() => ({
<ide><path>test/configCases/container/container-reference-override/test.config.js
<ide> module.exports = {
<ide> let ss;
<ide> scope.ABC = {
<ide> async get(module) {
<del> const testFactory = await ss.test.g();
<add> const testFactory = await ss.test.get();
<ide> const test = testFactory();
<ide> return () => {
<ide> return test(module);
<ide><path>test/configCases/sharing/provide-module/index.js
<ide> it("should add provided modules to the share scope on init", async () => {
<ide> );
<ide>
<ide> {
<del> const factory = await __webpack_share_scopes__["test-scope"]["test1"].g();
<add> const factory = await __webpack_share_scopes__["test-scope"]["test1"].get();
<ide> expect(factory()).toBe("test1");
<ide> }
<ide>
<ide> {
<del> const factory = await __webpack_share_scopes__["other-scope"]["test2"].g();
<add> const factory = await __webpack_share_scopes__["other-scope"][
<add> "test2"
<add> ].get();
<ide> expect(factory()).toBe("test2");
<ide> }
<ide> });
| 5
|
Ruby
|
Ruby
|
remove redundant parenthesis
|
1e34ed7d82f8b6ee37be96fcbca259aafb671018
|
<ide><path>activerecord/test/cases/type/integer_test.rb
<ide> class IntegerTest < ActiveRecord::TestCase
<ide> type = Type::Integer.new
<ide> assert_nil type.cast([1,2])
<ide> assert_nil type.cast({1 => 2})
<del> assert_nil type.cast((1..2))
<add> assert_nil type.cast(1..2)
<ide> end
<ide>
<ide> test "casting ActiveRecord models" do
| 1
|
Python
|
Python
|
fix gpt/gpt-2 from pretrained
|
50e62a4cb4d503e3559b88838b8cf9f745fef516
|
<ide><path>pytorch_transformers/modeling_gpt2.py
<ide> def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs):
<ide> """
<ide> num_special_tokens = kwargs.pop('num_special_tokens', None)
<ide>
<del> model = super(PreTrainedModel, cls).from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
<add> model = super().from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
<ide>
<ide> # Add additional embeddings for special tokens if needed
<ide> # This step also make sure we are still sharing the output and input embeddings after loading weights
| 1
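The fix above replaces `super(PreTrainedModel, cls).from_pretrained(...)` with Python 3's zero-argument `super()`. A minimal sketch (hypothetical class names, not the real `pytorch_transformers` hierarchy) of why the explicit two-argument form can silently skip a class's override when called from a classmethod:

```python
class Base:
    @classmethod
    def from_pretrained(cls):
        return "Base"

class Mid(Base):
    @classmethod
    def from_pretrained(cls):
        return "Mid"

class Leaf(Mid):
    @classmethod
    def load(cls):
        # Zero-argument super() starts the MRO search just after Leaf,
        # so Mid.from_pretrained runs -- the intended hook.
        return super().from_pretrained()

    @classmethod
    def load_skipping(cls):
        # super(Mid, cls) starts the search *after* Mid instead,
        # silently skipping Mid.from_pretrained.
        return super(Mid, cls).from_pretrained()

print(Leaf.load())           # Mid
print(Leaf.load_skipping())  # Base
```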
|
Python
|
Python
|
fix install source path for openssl headers
|
ec7fbf2bb269f5ce5fa65ad7daf5a24241af97ef
|
<ide><path>tools/install.py
<ide> def files(action):
<ide> subdir_files('deps/uv/include', 'include/node/', action)
<ide>
<ide> if 'false' == variables.get('node_shared_openssl'):
<add> subdir_files('deps/openssl/openssl/include/openssl', 'include/node/openssl/', action)
<ide> action(['deps/openssl/config/opensslconf.h'], 'include/node/openssl/')
<del> subdir_files('deps/openssl/include/openssl', 'include/node/openssl/', action)
<ide>
<ide> if 'false' == variables.get('node_shared_v8'):
<ide> subdir_files('deps/v8/include', 'include/node/', action)
| 1
|
Python
|
Python
|
pass kwargs from vault hook to hvac client
|
1a3f7857f50170cbef767b91e2831fee8019e7b1
|
<ide><path>airflow/providers/hashicorp/hooks/vault.py
<ide> DEFAULT_KV_ENGINE_VERSION,
<ide> _VaultClient,
<ide> )
<add>from airflow.utils.helpers import merge_dicts
<ide>
<ide>
<ide> class VaultHook(BaseHook):
<ide> def __init__(
<ide> azure_resource: str | None = None,
<ide> radius_host: str | None = None,
<ide> radius_port: int | None = None,
<add> **kwargs,
<ide> ):
<ide> super().__init__()
<ide> self.connection = self.get_connection(vault_conn_id)
<ide> def __init__(
<ide> except ValueError:
<ide> raise VaultError(f"The version is not an int: {conn_version}. ")
<ide>
<add> client_kwargs = self.connection.extra_dejson.get("client_kwargs", {})
<add>
<add> if kwargs:
<add> client_kwargs = merge_dicts(client_kwargs, kwargs)
<add>
<ide> if auth_type == "approle":
<ide> if role_id:
<ide> warnings.warn(
<ide> def __init__(
<ide> else (None, None)
<ide> )
<ide>
<add> key_id = self.connection.extra_dejson.get("key_id")
<add> if not key_id:
<add> key_id = self.connection.login
<add>
<ide> if self.connection.conn_type == "vault":
<ide> conn_protocol = "http"
<ide> elif self.connection.conn_type == "vaults":
<ide> def __init__(
<ide> # Schema is really path in the Connection definition. This is pretty confusing because of URL schema
<ide> mount_point = self.connection.schema if self.connection.schema else "secret"
<ide>
<del> self.vault_client = _VaultClient(
<del> url=url,
<del> auth_type=auth_type,
<del> auth_mount_point=auth_mount_point,
<del> mount_point=mount_point,
<del> kv_engine_version=kv_engine_version,
<del> token=self.connection.password,
<del> token_path=token_path,
<del> username=self.connection.login,
<del> password=self.connection.password,
<del> key_id=self.connection.login,
<del> secret_id=self.connection.password,
<del> role_id=role_id,
<del> kubernetes_role=kubernetes_role,
<del> kubernetes_jwt_path=kubernetes_jwt_path,
<del> gcp_key_path=gcp_key_path,
<del> gcp_keyfile_dict=gcp_keyfile_dict,
<del> gcp_scopes=gcp_scopes,
<del> azure_tenant_id=azure_tenant_id,
<del> azure_resource=azure_resource,
<del> radius_host=radius_host,
<del> radius_secret=self.connection.password,
<del> radius_port=radius_port,
<add> client_kwargs.update(
<add> **dict(
<add> url=url,
<add> auth_type=auth_type,
<add> auth_mount_point=auth_mount_point,
<add> mount_point=mount_point,
<add> kv_engine_version=kv_engine_version,
<add> token=self.connection.password,
<add> token_path=token_path,
<add> username=self.connection.login,
<add> password=self.connection.password,
<add> key_id=self.connection.login,
<add> secret_id=self.connection.password,
<add> role_id=role_id,
<add> kubernetes_role=kubernetes_role,
<add> kubernetes_jwt_path=kubernetes_jwt_path,
<add> gcp_key_path=gcp_key_path,
<add> gcp_keyfile_dict=gcp_keyfile_dict,
<add> gcp_scopes=gcp_scopes,
<add> azure_tenant_id=azure_tenant_id,
<add> azure_resource=azure_resource,
<add> radius_host=radius_host,
<add> radius_secret=self.connection.password,
<add> radius_port=radius_port,
<add> )
<ide> )
<ide>
<add> self.vault_client = _VaultClient(**client_kwargs)
<add>
<ide> def _get_kubernetes_parameters_from_connection(
<ide> self, kubernetes_jwt_path: str | None, kubernetes_role: str | None
<ide> ) -> tuple[str, str | None]:
<ide><path>tests/providers/hashicorp/hooks/test_vault.py
<ide> def test_kubernetes_dejson(self, mock_kubernetes, mock_hvac, mock_get_connection
<ide> test_client.is_authenticated.assert_called_with()
<ide> assert 2 == test_hook.vault_client.kv_engine_version
<ide>
<add> @mock.patch("airflow.providers.hashicorp.hooks.vault.VaultHook.get_connection")
<add> @mock.patch("airflow.providers.hashicorp._internal_client.vault_client.hvac")
<add> def test_client_kwargs(self, mock_hvac, mock_get_connection):
<add> """This test checks that values in connection extras keyed with 'client_kwargs' will be
<add> consumed by the underlying Hashicorp Vault client init. The order of precedence should
<add> be kwargs (passed through the hook init) > client_kwargs (found in connection extras).
<add> """
<add> mock_client = mock.MagicMock()
<add> mock_hvac.Client.return_value = mock_client
<add> mock_connection = self.get_mock_connection()
<add> mock_get_connection.return_value = mock_connection
<add>
<add> connection_dict = {
<add> "client_kwargs": {"namespace": "name", "timeout": 50, "generic_arg": "generic_val1"}
<add> }
<add>
<add> mock_connection.extra_dejson.get.side_effect = connection_dict.get
<add> kwargs = {"vault_conn_id": "vault_conn_id", "generic_arg": "generic_val0"}
<add> test_hook = VaultHook(**kwargs)
<add> test_client = test_hook.get_conn()
<add> mock_get_connection.assert_called_with("vault_conn_id")
<add> mock_hvac.Client.assert_called_with(
<add> url="http://localhost:8180", namespace="name", timeout=50, generic_arg="generic_val0"
<add> )
<add> test_client.is_authenticated.assert_called_with()
<add> assert 2 == test_hook.vault_client.kv_engine_version
<add>
<ide> @mock.patch("airflow.providers.hashicorp.hooks.vault.VaultHook.get_connection")
<ide> @mock.patch("airflow.providers.hashicorp._internal_client.vault_client.hvac")
<ide> def test_ldap_init_params(self, mock_hvac, mock_get_connection):
| 2
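The hook change above merges connection-level `client_kwargs` with kwargs passed at hook init, with init kwargs winning (the test's `generic_arg` case). Airflow's `merge_dicts` is the real helper; a plain-dict shallow-merge analogue of that precedence rule:

```python
def merge_with_precedence(connection_extras, init_kwargs):
    # Shallow merge where the second argument wins, mirroring the
    # hook's rule: kwargs passed to the hook override the
    # connection's `client_kwargs` extras.
    merged = dict(connection_extras)
    merged.update(init_kwargs)
    return merged

extras = {"namespace": "name", "timeout": 50, "generic_arg": "generic_val1"}
kwargs = {"generic_arg": "generic_val0"}

client_kwargs = merge_with_precedence(extras, kwargs)
print(client_kwargs["generic_arg"])  # generic_val0
print(client_kwargs["timeout"])      # 50
```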
|
Go
|
Go
|
move testrunexit to integration-cli
|
c19e0fe7e2c626218c854aa97fd3f23d29f11615
|
<ide><path>integration-cli/docker_cli_run_test.go
<ide> package main
<ide>
<ide> import (
<add> "bufio"
<ide> "fmt"
<ide> "io/ioutil"
<ide> "os"
<ide> import (
<ide> "strings"
<ide> "sync"
<ide> "testing"
<add> "time"
<ide>
<ide> "github.com/docker/docker/pkg/mount"
<ide> "github.com/docker/docker/pkg/networkfs/resolvconf"
<ide> func TestRunWorkdirExistsAndIsFile(t *testing.T) {
<ide> }
<ide> logDone("run - error on existing file for workdir")
<ide> }
<add>
<add>func TestRunExitOnStdinClose(t *testing.T) {
<add> name := "testrunexitonstdinclose"
<add> defer deleteAllContainers()
<add> runCmd := exec.Command(dockerBinary, "run", "--name", name, "-i", "busybox", "/bin/cat")
<add>
<add> stdin, err := runCmd.StdinPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> stdout, err := runCmd.StdoutPipe()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> if err := runCmd.Start(); err != nil {
<add> t.Fatal(err)
<add> }
<add> if _, err := stdin.Write([]byte("hello\n")); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> r := bufio.NewReader(stdout)
<add> line, err := r.ReadString('\n')
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> line = strings.TrimSpace(line)
<add> if line != "hello" {
<add> t.Fatalf("Output should be 'hello', got '%q'", line)
<add> }
<add> if err := stdin.Close(); err != nil {
<add> t.Fatal(err)
<add> }
<add> finish := make(chan struct{})
<add> go func() {
<add> if err := runCmd.Wait(); err != nil {
<add> t.Fatal(err)
<add> }
<add> close(finish)
<add> }()
<add> select {
<add> case <-finish:
<add> case <-time.After(1 * time.Second):
<add> t.Fatal("docker run failed to exit on stdin close")
<add> }
<add> state, err := inspectField(name, "State.Running")
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add> if state != "false" {
<add> t.Fatal("Container must be stopped after stdin closing")
<add> }
<add> logDone("run - exit on stdin closing")
<add>}
<ide><path>integration/commands_test.go
<ide> func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
<ide> return nil
<ide> }
<ide>
<del>func TestRunExit(t *testing.T) {
<del> stdin, stdinPipe := io.Pipe()
<del> stdout, stdoutPipe := io.Pipe()
<del>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr, nil)
<del> defer cleanup(globalEngine, t)
<del>
<del> c1 := make(chan struct{})
<del> go func() {
<del> cli.CmdRun("-i", unitTestImageID, "/bin/cat")
<del> close(c1)
<del> }()
<del>
<del> setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<del> t.Fatal(err)
<del> }
<del> })
<del>
<del> container := globalDaemon.List()[0]
<del>
<del> // Closing /bin/cat stdin, expect it to exit
<del> if err := stdin.Close(); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> // as the process exited, CmdRun must finish and unblock. Wait for it
<del> setTimeout(t, "Waiting for CmdRun timed out", 10*time.Second, func() {
<del> <-c1
<del>
<del> go func() {
<del> cli.CmdWait(container.ID)
<del> }()
<del>
<del> if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
<del> t.Fatal(err)
<del> }
<del> })
<del>
<del> // Make sure that the client has been disconnected
<del> setTimeout(t, "The client should have been disconnected once the remote process exited.", 2*time.Second, func() {
<del> // Expecting pipe i/o error, just check that read does not block
<del> stdin.Read([]byte{})
<del> })
<del>
<del> // Cleanup pipes
<del> if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
<del> t.Fatal(err)
<del> }
<del>}
<del>
<ide> // Expected behaviour: the process dies when the client disconnects
<ide> func TestRunDisconnect(t *testing.T) {
<ide>
| 2
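The migrated Go test checks the same contract `cat` gives: echo a line, then exit once stdin is closed, with a timeout bounding the wait. A simplified Python analogue of that pattern (a stand-in child process, not Docker):

```python
import subprocess
import sys

# Child: echo one line, then block on stdin until EOF, then exit --
# the same exit-on-stdin-close behavior the Go test verifies.
child_code = (
    "import sys; "
    "sys.stdout.write(sys.stdin.readline()); "
    "sys.stdout.flush(); "
    "sys.stdin.read()"
)
proc = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
proc.stdin.write("hello\n")
proc.stdin.flush()
line = proc.stdout.readline().strip()
proc.stdin.close()
proc.wait(timeout=5)  # raises TimeoutExpired if the child never exits
print(line)  # hello
```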
|
Python
|
Python
|
fix a typo
|
cf1930cf8c78260fbb70d6bd239e6bb5a38bfdb6
|
<ide><path>libcloud/common/cloudsigma.py
<ide> 'host': 'lvs.cloudsigma.com'
<ide> },
<ide> 'wdc': {
<del> 'name': 'Las Vegas',
<add> 'name': 'Washington DC',
<ide> 'country': 'United States',
<ide> 'host': 'wdc.cloudsigma.com'
<ide> }
| 1
|
Python
|
Python
|
add example_container.py file
|
8f010817d6519dee00e41c79fc0579cb62ede540
|
<ide><path>example_container.py
<add># Licensed to the Apache Software Foundation (ASF) under one or more
<add># contributor license agreements. See the NOTICE file distributed with
<add># this work for additional information regarding copyright ownership.
<add># The ASF licenses this file to You under the Apache License, Version 2.0
<add># (the "License"); you may not use this file except in compliance with
<add># the License. You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing, software
<add># distributed under the License is distributed on an "AS IS" BASIS,
<add># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add># See the License for the specific language governing permissions and
<add># limitations under the License.
<add>
<add>from libcloud.container.types import Provider
<add>from libcloud.container.providers import get_driver
<add>
<add>cls = get_driver(Provider.KUBERNETES)
<add>
<add># You can retrieve cluster ip by running "minikube ip" command
<add>conn = cls(key='my_token',
<add> host='126.32.21.4',
<add> ex_token_bearer_auth=True)
<add>
<add>for cluster in conn.list_clusters():
<add> print(cluster.name)
<add>
<add>for container in conn.list_containers():
<add> print(container.name)
| 1
|
Javascript
|
Javascript
|
handle eexist error when starting the server
|
3cfac35fd8a3a4446c527fe7a9aceafd38d6ab0e
|
<ide><path>packager/react-packager/src/SocketInterface/SocketServer.js
<ide> class SocketServer {
<ide> process.send({ type: 'createdServer' });
<ide> },
<ide> error => {
<del> if (error.code === 'EADDRINUSE') {
<add> if (error.code === 'EADDRINUSE' || error.code === 'EEXIST') {
<ide> // Server already listening, this may happen if multiple
<ide> // clients were started in quick succession (buck).
<ide> process.send({ type: 'createdServer' });
| 1
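The one-line change above widens the set of error codes treated as "server already exists" so a lost race to create the socket still reports success. A sketch of that pattern in Python, using directory creation as a stand-in for binding the socket:

```python
import errno
import os

# Races to create a shared resource are tolerated: if another process
# already created it, that counts as success too -- the same idea as
# treating EADDRINUSE/EEXIST as "server already running" in the patch.
BENIGN = {errno.EADDRINUSE, errno.EEXIST}

def create_or_reuse(path):
    try:
        os.mkdir(path)  # stand-in for binding the server socket
        return "created"
    except OSError as exc:
        if exc.errno in BENIGN:
            return "created"  # another client won the race; fine
        raise
```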
|
Text
|
Text
|
add links to openjsf slack
|
9d9d2f0cf7163e270df8727bb5a83a53df4336eb
|
<ide><path>.github/SUPPORT.md
<ide> If you didn't find an answer in the resources above, try these unofficial
<ide> resources:
<ide>
<ide> * [Questions tagged 'node.js' on Stack Overflow](https://stackoverflow.com/questions/tagged/node.js)
<add>* [#nodejs](https://openjs-foundation.slack.com/archives/CK9Q4MB53) channel on the OpenJS Foundation Slack ([join here](https://slack-invite.openjsf.org/))
<ide> * [#node.js channel on chat.freenode.net](https://webchat.freenode.net?channels=node.js&uio=d4)
<ide> * [Node.js Slack Community](https://node-js.slack.com/)
<ide> * To register: [nodeslackers.com](https://www.nodeslackers.com/)
<ide><path>doc/guides/contributing/pull-requests.md
<ide> Node.js. We cannot accept such patches.
<ide> In case of doubt, open an issue in the
<ide> [issue tracker](https://github.com/nodejs/node/issues/) or contact one of the
<ide> [project Collaborators](https://github.com/nodejs/node/#current-project-team-members).
<del>Node.js has two IRC channels:
<add>
<add>Node.js has many channels on the
<add>[OpenJS Foundation Slack](https://slack-invite.openjsf.org/). Interesting
<add>channels are:
<add>[#nodejs](https://openjs-foundation.slack.com/archives/CK9Q4MB53) for general
<add>help, questions and discussions.
<add>[#nodejs-dev](https://openjs-foundation.slack.com/archives/C019Y2T6STH) for
<add>development of Node.js core specifically.
<add>
<add>Node.js also has two IRC channels:
<ide> [#Node.js](https://webchat.freenode.net/?channels=node.js) for general help and
<ide> questions, and
<ide> [#node-dev](https://webchat.freenode.net/?channels=node-dev) for development of
<ide><path>onboarding.md
<ide> onboarding session.
<ide> * Watching the main repo will flood your inbox (several hundred notifications
<ide> on typical weekdays), so be prepared
<ide>
<del>* `#node-dev` on [webchat.freenode.net](https://webchat.freenode.net/) is the
<del> best place to interact with the TSC / other Collaborators
<del> * If there are any questions after the session, a good place to ask is there!
<del> * Presence is not mandatory, but please drop a note there if force-pushing to
<del> `master`
<add>The project has two venues for real-time discussion:
<add>* [`#nodejs-dev`](https://openjs-foundation.slack.com/archives/C019Y2T6STH) on
<add> the [OpenJS Foundation](https://slack-invite.openjsf.org/)
<add>* `#node-dev` on [webchat.freenode.net](https://webchat.freenode.net/) is a
<add> great place to interact with the TSC and other Collaborators
<add> * If there are any questions after the session, a good place to ask is
<add> there!
<add> * Presence is not mandatory, but please drop a note there if force-pushing
<add> to `master`
<ide>
<ide> ## Project goals & values
<ide>
| 3
|
Javascript
|
Javascript
|
generalize the optimisation a little
|
b1c3e11c42c2547dfb27c7eba0fa1aa458863499
|
<ide><path>packages/ember-glimmer/lib/ember-template-compiler/plugins/transform-has-block-syntax.js
<ide> export default function TransformHasBlockSyntax() {
<ide> this.syntax = null;
<ide> }
<ide>
<del>const OLD_HAS_BLOCK = 'hasBlock';
<del>const NEW_HAS_BLOCK = 'has-block';
<del>const OLD_HAS_BLOCK_PARAMS = 'hasBlockParams';
<del>const NEW_HAS_BLOCK_PARAMS = 'has-block-params';
<add>const TRANSFORMATIONS = {
<add> hasBlock: 'has-block',
<add> hasBlockParams: 'has-block-params'
<add>};
<ide>
<ide> /**
<ide> @private
<ide> TransformHasBlockSyntax.prototype.transform = function TransformHasBlockSyntax_t
<ide>
<ide> traverse(ast, {
<ide> PathExpression(node) {
<del> if (node.original === OLD_HAS_BLOCK) {
<del> return b.sexpr(b.path(NEW_HAS_BLOCK));
<del> }
<del>
<del> if (node.original === OLD_HAS_BLOCK_PARAMS) {
<del> return b.sexpr(b.path(NEW_HAS_BLOCK_PARAMS));
<add> if (TRANSFORMATIONS[node.original]) {
<add> return b.sexpr(b.path(TRANSFORMATIONS[node.original]));
<ide> }
<ide> },
<ide> MustacheStatement(node) {
<del> if (node.path.original === OLD_HAS_BLOCK) {
<del> return b.mustache(b.path(NEW_HAS_BLOCK), node.params, node.hash, null, node.loc);
<del> }
<del> if (node.path.original === OLD_HAS_BLOCK_PARAMS) {
<del> return b.mustache(b.path(NEW_HAS_BLOCK_PARAMS), node.params, node.hash, null, node.loc);
<add> if (TRANSFORMATIONS[node.path.original]) {
<add> return b.mustache(b.path(TRANSFORMATIONS[node.path.original]), node.params, node.hash, null, node.loc);
<ide> }
<ide> },
<ide> SubExpression(node) {
<del> if (node.path.original === OLD_HAS_BLOCK) {
<del> return b.sexpr(b.path(NEW_HAS_BLOCK), node.params, node.hash);
<del> }
<del> if (node.path.original === OLD_HAS_BLOCK_PARAMS) {
<del> return b.sexpr(b.path(NEW_HAS_BLOCK_PARAMS), node.params, node.hash);
<add> if (TRANSFORMATIONS[node.path.original]) {
<add> return b.sexpr(b.path(TRANSFORMATIONS[node.path.original]), node.params, node.hash);
<ide> }
<ide> }
<ide> });
| 1
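The refactor above collapses three copies of the same if/else chain into one `TRANSFORMATIONS` lookup table. The same table-driven idea in Python (illustrative only; the real plugin returns rebuilt AST nodes, not strings):

```python
# One lookup table replaces repeated per-name conditionals,
# echoing the TRANSFORMATIONS map introduced in the patch.
TRANSFORMATIONS = {
    "hasBlock": "has-block",
    "hasBlockParams": "has-block-params",
}

def rewrite(name):
    # Names outside the table pass through unchanged.
    return TRANSFORMATIONS.get(name, name)

print(rewrite("hasBlock"))  # has-block
print(rewrite("fooBar"))    # fooBar
```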
|
Javascript
|
Javascript
|
show errored file always in the error overlay.
|
bc2d5f3bdb50aba7a68a35eb51655d96d43d1a08
|
<ide><path>lib/error-debug.js
<ide> export default ({ error, error: { name, message, module } }) => (
<ide> {module ? <div style={styles.heading}>Error in {module.rawRequest}</div> : null}
<ide> {
<ide> name === 'ModuleBuildError'
<del> ? <pre style={styles.message} dangerouslySetInnerHTML={{ __html: ansiHTML(encodeHtml(message)) }} />
<add> ? <pre style={styles.stack} dangerouslySetInnerHTML={{ __html: ansiHTML(encodeHtml(message)) }} />
<ide> : <StackTrace error={error} />
<ide> }
<ide> </div>
<ide> )
<ide>
<ide> const StackTrace = ({ error: { name, message, stack } }) => (
<ide> <div>
<del> <div style={styles.heading}>{name && message && `${name}: ${message}`}</div>
<del> <pre style={styles.message}>
<add> <div style={styles.heading}>{message || name}</div>
<add> <pre style={styles.stack}>
<ide> {stack}
<ide> </pre>
<ide> </div>
<ide> const styles = {
<ide> zIndex: 9999
<ide> },
<ide>
<del> message: {
<add> stack: {
<ide> fontFamily: '"SF Mono", "Roboto Mono", "Fira Mono", consolas, menlo-regular, monospace',
<ide> fontSize: '13px',
<ide> color: '#fbe7f1',
<ide> margin: 0,
<ide> whiteSpace: 'pre-wrap',
<del> wordWrap: 'break-word'
<add> wordWrap: 'break-word',
<add> marginTop: '20px'
<ide> },
<ide>
<ide> heading: {
<ide> fontFamily: '-apple-system, BlinkMacSystemFont, Roboto, "Segoe UI", "Fira Sans", Avenir, "Helvetica Neue", "Lucida Grande", sans-serif',
<ide> fontSize: '15px',
<ide> fontWeight: 'bold',
<del> color: '#ff84bf',
<del> marginBottom: '20px'
<add> color: '#febfdd',
<add> marginBottom: '5px'
<ide> }
<ide> }
<ide>
<ide><path>server/render.js
<ide> function errorToJSON (err) {
<ide> const { name, message, stack } = err
<ide> const json = { name, message, stack }
<ide>
<del> if (name === 'ModuleBuildError') {
<del> // webpack compilation error
<del> const { module: { rawRequest } } = err
<add> if (err.module) {
<add> // rawRequest contains the filename of the module which has the error.
<add> const { rawRequest } = err.module
<ide> json.module = { rawRequest }
<ide> }
<ide>
| 2
|
Python
|
Python
|
fix shoddy test case
|
9c58dfec4e6865a5296d0913be907a20f712f0d3
|
<ide><path>tests/test_authentication.py
<ide> def test_login_view_renders_on_get(self):
<ide> cf. [#1810](https://github.com/tomchristie/django-rest-framework/pull/1810)
<ide> """
<ide> response = self.csrf_client.get('/auth/login/')
<del> self.assertContains(response, '<Label class="span4">Username:</label>')
<add> self.assertContains(response, '<label class="span4">Username:</label>')
<ide>
<ide> def test_post_form_session_auth_failing_csrf(self):
<ide> """
| 1
|
Text
|
Text
|
clarify design goals
|
07cf1424cb5e7bb7284acb282c996690cfc6d8c5
|
<ide><path>README.md
<ide> redux
<ide> [](https://travis-ci.org/gaearon/redux)
<ide> [](https://www.npmjs.com/package/redux)
<ide>
<del>An experiment in fully hot-reloadable Flux.
<add>Atomic Flux with hot reloading.
<ide>
<del>**The API might change any day.**
<del>_**Don't use in production just yet.**_
<add>**The API is likely to change a few times before we reach 1.0.**
<add>**Its [surface area](http://www.youtube.com/watch?v=4anAwXYqLG8) is minimal so you can try it in production and report any issues.**
<ide>
<ide> ## Why another Flux framework?
<ide>
<ide> Read **[The Evolution of Flux Frameworks](https://medium.com/@dan_abramov/the-evolution-of-flux-frameworks-6c16ad26bb31)** for some context.
<ide>
<del>### Design Goals
<del>
<del>* Hot reloading of everything.
<del>* A hook for the future devtools to "commit" a state, and replay actions on top of it during hot reload.
<add>### Philosophy & Design Goals
<add>
<add>* You shouldn't need a book on functional programming to use Redux.
<add>* Everything (Stores, Action Creators, configuration) is hot reloadable.
<add>* Preserves the benefits of Flux, but adds other nice properties thanks to its functional nature.
<add>* Prevents some of the anti-patterns common in Flux code.
<add>* Works great in isomorphic apps because it doesn't use singletons and the data can be rehydrated.
<add>* Doesn't care how you store your data: you may use JS objects, arrays, ImmutableJS, etc.
<add>* Under the hood, it keeps all your data in a tree, but you don't need to think about it.
<add>* Lets you efficiently subscribe to finer-grained updates than individual Stores.
<add>* Provides hooks for powerful devtools (e.g. time travel, record/replay) to be implementable without user buy-in.
<add>* Provides extension points so it's easy to [support promises](https://github.com/gaearon/redux/issues/99#issuecomment-112212639) or [generate constants](https://gist.github.com/skevy/8a4ffc3cfdaf5fd68739) outside the core.
<ide> * No wrapper calls in your stores and actions. Your stuff is your stuff.
<del>* Super easy to test things in isolation without mocks.
<del>* I don't mind action constants. Seriously.
<del>* Keep Flux lingo. No cursors or observables in core.
<add>* It's super easy to test things in isolation without mocks.
<add>* You can use “flat” Stores, or [compose and reuse Stores](https://gist.github.com/gaearon/d77ca812015c0356654f) just like you compose Components.
<add>* The API surface area is minimal.
<ide> * Have I mentioned hot reloading yet?
<ide>
<ide> ## Demo
| 1
|
Javascript
|
Javascript
|
create ember.test with registerhelper method
|
79aefff9b875026e3925a8a4a2a7b9dfe07a4f0f
|
<ide><path>packages/ember-testing/lib/helpers.js
<del>/*globals EMBER_APP_BEING_TESTED */
<add>require('ember-testing/test');
<ide>
<ide> var Promise = Ember.RSVP.Promise,
<del> pendingAjaxRequests = 0,
<del> originalFind,
<del> slice = [].slice,
<del> get = Ember.get;
<add> get = Ember.get,
<add> helper = Ember.Test.registerHelper;
<ide>
<ide> function visit(app, url) {
<ide> Ember.run(app, app.handleURL, url);
<ide> function wait(app, value) {
<ide> var watcher = setInterval(function() {
<ide> var routerIsLoading = app.__container__.lookup('router:main').router.isLoading;
<ide> if (routerIsLoading) { return; }
<del> if (pendingAjaxRequests) { return; }
<add> if (Ember.Test.pendingAjaxRequests) { return; }
<ide> if (Ember.run.hasScheduledTimers() || Ember.run.currentRunLoop) { return; }
<ide> clearInterval(watcher);
<ide> start();
<ide> function wait(app, value) {
<ide> });
<ide> }
<ide>
<del>function curry(app, fn) {
<del> return function() {
<del> var args = slice.call(arguments);
<del> args.unshift(app);
<del> return fn.apply(app, args);
<del> };
<del>}
<del>
<del>Ember.Application.reopen({
<del> setupForTesting: function() {
<del> this.deferReadiness();
<del>
<del> this.Router.reopen({
<del> location: 'none'
<del> });
<del> },
<del>
<del> injectTestHelpers: function() {
<del> Ember.$(document).ajaxStart(function() {
<del> pendingAjaxRequests++;
<del> });
<del>
<del> Ember.$(document).ajaxStop(function() {
<del> pendingAjaxRequests--;
<del> });
<del>
<del> // todo do this safer.
<del> window.visit = curry(this, visit);
<del> window.click = curry(this, click);
<del> window.fillIn = curry(this, fillIn);
<del> originalFind = window.find;
<del> window.find = curry(this, find);
<del> window.wait = curry(this, wait);
<del> },
<del>
<del> removeTestHelpers: function() {
<del> window.visit = null;
<del> window.click = null;
<del> window.fillIn = null;
<del> window.wait = null;
<del> window.find = originalFind;
<del> }
<del>});
<add>// expose these methods as test helpers
<add>helper('visit', visit);
<add>helper('click', click);
<add>helper('fillIn', fillIn);
<add>helper('find', find);
<add>helper('wait', wait);
<ide>\ No newline at end of file
<ide><path>packages/ember-testing/lib/main.js
<ide> require('ember-application');
<add>require('ember-testing/test');
<ide> require('ember-testing/helpers');
<ide>\ No newline at end of file
<ide><path>packages/ember-testing/lib/test.js
<add>var slice = [].slice,
<add> helpers = {},
<add> originalMethods = {};
<add>
<add>/**
<add> @class Test
<add> @namespace Ember
<add>*/
<add>Ember.Test = {
<add>
<add> /**
<add> @public
<add>
<add> `registerHelper` is used to register a
<add> test helper that will be injected when
<add> `App.injectTestHelpers` is called.
<add>
<add> The helper method will always be called
<add> with the current Application as the first
<add> parameter.
<add>
<add> For example:
<add> ```javascript
<add> Ember.Test.registerHelper('boot', function(app) {
<add> Ember.run(app, app.deferReadiness);
<add> });
<add> ```
<add>
<add> This helper can later be called without arguments
<add> because it will be called with `app` as the
<add> first parameter.
<add>
<add> ```javascript
<add> App = Ember.Application.create();
<add> App.injectTestHelpers();
<add> boot();
<add> ```
<add>
<add> Whenever you register a helper that
<add> performs async operations,
<add> make sure you `return wait();` at the
<add> end of the helper.
<add>
<add> If an async helper also needs to return a value,
<add> pass it to the `wait` helper as a first argument:
<add> `return wait(val);`
<add>
<add> @method registerHelper
<add> @param name {String}
<add> @param helperMethod {Function}
<add> */
<add> registerHelper: function(name, helperMethod) {
<add> helpers[name] = helperMethod;
<add> },
<add> /**
<add> @method unregisterHelper
<add> @param name {String}
<add> */
<add> unregisterHelper: function(name) {
<add> delete helpers[name];
<add> if (originalMethods[name]) {
<add> window[name] = originalMethods[name];
<add> }
<add> delete originalMethods[name];
<add> },
<add>
<add> pendingAjaxRequests: 0
<add>};
<add>
<add>
<add>function curry(app, fn) {
<add> return function() {
<add> var args = slice.call(arguments);
<add> args.unshift(app);
<add> return fn.apply(app, args);
<add> };
<add>}
<add>
<add>
<add>Ember.Application.reopen({
<add> testHelpers: {},
<add>
<add> setupForTesting: function() {
<add> this.deferReadiness();
<add>
<add> this.Router.reopen({
<add> location: 'none'
<add> });
<add> },
<add>
<add> injectTestHelpers: function() {
<add> this.testHelpers = {};
<add>
<add> Ember.$(document).ajaxStart(function() {
<add> Ember.Test.pendingAjaxRequests++;
<add> });
<add>
<add> Ember.$(document).ajaxStop(function() {
<add> Ember.Test.pendingAjaxRequests--;
<add> });
<add>
<add> for (var name in helpers) {
<add> originalMethods[name] = window[name];
<add> this.testHelpers[name] = window[name] = curry(this, helpers[name]);
<add> }
<add> },
<add>
<add> removeTestHelpers: function() {
<add> for (var name in helpers) {
<add> window[name] = originalMethods[name];
<add> delete this.testHelpers[name];
<add> delete originalMethods[name];
<add> }
<add> }
<add>});
<ide><path>packages/ember-testing/tests/helpers_test.js
<ide> module("ember-testing", {
<ide> test("Ember.Application#injectTestHelpers/#removeTestHelpers", function() {
<ide> App = Ember.run(Ember.Application, Ember.Application.create);
<ide> ok(!window.visit);
<add> ok(!App.testHelpers.visit);
<ide> ok(!window.click);
<add> ok(!App.testHelpers.click);
<ide> ok(!window.fillIn);
<add> ok(!App.testHelpers.fillIn);
<ide> ok(!window.wait);
<add> ok(!App.testHelpers.wait);
<ide>
<ide> App.injectTestHelpers();
<ide>
<ide> ok(window.visit);
<add> ok(App.testHelpers.visit);
<ide> ok(window.click);
<add> ok(App.testHelpers.click);
<ide> ok(window.fillIn);
<del> ok(window.find);
<add> ok(App.testHelpers.fillIn);
<ide> ok(window.wait);
<add> ok(App.testHelpers.wait);
<ide>
<ide> App.removeTestHelpers();
<ide>
<ide> ok(!window.visit);
<add> ok(!App.testHelpers.visit);
<ide> ok(!window.click);
<add> ok(!App.testHelpers.click);
<ide> ok(!window.fillIn);
<add> ok(!App.testHelpers.fillIn);
<ide> ok(!window.wait);
<add> ok(!App.testHelpers.wait);
<ide> });
<ide>
<ide> test("Ember.Application#setupForTesting", function() {
<ide> test("Ember.Application#setupForTesting", function() {
<ide>
<ide> equal(App.__container__.lookup('router:main').location.implementation, 'none');
<ide> });
<add>
<add>test("Ember.Test.registerHelper/unregisterHelper", function() {
<add> expect(3);
<add> var appBooted = false;
<add>
<add> Ember.Test.registerHelper('boot', function(app) {
<add> Ember.run(app, app.advanceReadiness);
<add> appBooted = true;
<add> return window.wait();
<add> });
<add>
<add> Ember.run(function() {
<add> App = Ember.Application.create();
<add> App.setupForTesting();
<add> App.injectTestHelpers();
<add> });
<add>
<add> ok(window.boot);
<add>
<add> window.boot().then(function() {
<add> ok(appBooted);
<add> });
<add>
<add> App.removeTestHelpers();
<add> Ember.Test.unregisterHelper('boot');
<add>
<add> ok(!window.boot);
<add>});
| 4
|
Javascript
|
Javascript
|
add test for broken child process stdio
|
7cdfe8a130deb2300d4311da9f4372e971a37eac
|
<ide><path>test/parallel/test-child-process-bad-stdio.js
<add>'use strict';
<add>// Flags: --expose_internals
<add>const common = require('../common');
<add>const assert = require('assert');
<add>const cp = require('child_process');
<add>
<add>if (process.argv[2] === 'child') {
<add> setTimeout(() => {}, common.platformTimeout(100));
<add> return;
<add>}
<add>
<add>// Monkey patch spawn() to create a child process normally, but destroy the
<add>// stdout and stderr streams. This replicates the conditions where the streams
<add>// cannot be properly created.
<add>const ChildProcess = require('internal/child_process').ChildProcess;
<add>const original = ChildProcess.prototype.spawn;
<add>
<add>ChildProcess.prototype.spawn = function() {
<add> const err = original.apply(this, arguments);
<add>
<add> this.stdout.destroy();
<add> this.stderr.destroy();
<add> this.stdout = null;
<add> this.stderr = null;
<add>
<add> return err;
<add>};
<add>
<add>function createChild(options, callback) {
<add> const cmd = `${process.execPath} ${__filename} child`;
<add>
<add> return cp.exec(cmd, options, common.mustCall(callback));
<add>}
<add>
<add>// Verify that normal execution of a child process is handled.
<add>{
<add> createChild({}, (err, stdout, stderr) => {
<add> assert.strictEqual(err, null);
<add> assert.strictEqual(stdout, '');
<add> assert.strictEqual(stderr, '');
<add> });
<add>}
<add>
<add>// Verify that execution with an error event is handled.
<add>{
<add> const error = new Error('foo');
<add> const child = createChild({}, (err, stdout, stderr) => {
<add> assert.strictEqual(err, error);
<add> assert.strictEqual(stdout, '');
<add> assert.strictEqual(stderr, '');
<add> });
<add>
<add> child.emit('error', error);
<add>}
<add>
<add>// Verify that execution with a killed process is handled.
<add>{
<add> createChild({ timeout: 1 }, (err, stdout, stderr) => {
<add> assert.strictEqual(err.killed, true);
<add> assert.strictEqual(stdout, '');
<add> assert.strictEqual(stderr, '');
<add> });
<add>}
| 1
|
Text
|
Text
|
make semicolon at end of line optional
|
a32427c1200c7ff6ae9a668a81cb4a73dfa4e938
|
<ide><path>curriculum/challenges/english/02-javascript-algorithms-and-data-structures/basic-javascript/finding-a-remainder-in-javascript.md
<ide> tests:
<ide> - text: The value of <code>remainder</code> should be <code>2</code>
<ide> testString: assert(remainder === 2);
<ide> - text: You should use the <code>%</code> operator
<del> testString: assert(/\s+?remainder\s*?=\s*?.*%.*;/.test(code));
<add> testString: assert(/\s+?remainder\s*?=\s*?.*%.*;?/.test(code));
<ide>
<ide> ```
<ide>
| 1
|
Javascript
|
Javascript
|
use arrow syntax for anonymous callbacks
|
01c5c16aba178e8cd87cd5dc026e995a014440fb
|
<ide><path>test/parallel/test-net-persistent-ref-unref.js
<ide> const net = require('net');
<ide> const { internalBinding } = require('internal/test/binding');
<ide> const TCPWrap = internalBinding('tcp_wrap').TCP;
<ide>
<del>const echoServer = net.createServer(function(conn) {
<add>const echoServer = net.createServer((conn) => {
<ide> conn.end();
<ide> });
<ide>
<ide> echoServer.on('listening', function() {
<ide> sock.unref();
<ide> sock.ref();
<ide> sock.connect(this.address().port);
<del> sock.on('end', function() {
<add> sock.on('end', () => {
<ide> assert.strictEqual(refCount, 0);
<ide> echoServer.close();
<ide> });
| 1
|
Text
|
Text
|
add missing item to the changelog
|
9f80a48ad464c9240ea7f9d3f37251559cf7fe7e
|
<ide><path>CHANGELOG.md
<ide> * Fix some transition updates being ignored. ([@acdlite](https://github.com/acdlite) in [#24353](https://github.com/facebook/react/pull/24353))
<ide> * Fix `useDeferredValue` causing an infinite loop when passed an unmemoized value. ([@acdlite](https://github.com/acdlite) in [#24247](https://github.com/facebook/react/pull/24247))
<ide> * Fix throttling of revealing Suspense fallbacks. ([@sunderls](https://github.com/sunderls) in [#24253](https://github.com/facebook/react/pull/24253))
<add>* Fix an inconsistency in whether the props object is the same between renders. ([@acdlite](https://github.com/acdlite) in [#24421](https://github.com/facebook/react/pull/24421))
<ide> * Fix a missing warning about a `setState` loop in `useEffect`. ([@gaearon](https://github.com/gaearon) in [#24298](https://github.com/facebook/react/pull/24298))
<ide> * Fix a spurious hydration error. ([@gnoff](https://github.com/gnoff) in [#24404](https://github.com/facebook/react/pull/24404))
<ide> * Warn when calling `setState` in `useInsertionEffect`. ([@gaearon](https://github.com/gaearon) in [#24295](https://github.com/facebook/react/pull/24295))
| 1
|
Python
|
Python
|
add decimalfield to field_mapping
|
c329d2f08511dbc7660af9b8fc94e92d97c015cc
|
<ide><path>rest_framework/serializers.py
<ide> class ModelSerializer(Serializer):
<ide> models.DateTimeField: DateTimeField,
<ide> models.DateField: DateField,
<ide> models.TimeField: TimeField,
<add> models.DecimalField: DecimalField,
<ide> models.EmailField: EmailField,
<ide> models.CharField: CharField,
<ide> models.URLField: URLField,
| 1
|
PHP
|
PHP
|
set correct permission on generated folders
|
faf511ea28d25901aed964dfa37caac65c82d458
|
<ide><path>src/Shell/I18nShell.php
<ide> public function init($language = null)
<ide> $sourceFolder = rtrim($response, DS) . DS;
<ide> $targetFolder = $sourceFolder . $language . DS;
<ide> if (!is_dir($targetFolder)) {
<del> mkdir($targetFolder, 0770, true);
<add> mkdir($targetFolder, 0775, true);
<ide> }
<ide>
<ide> $count = 0;
| 1
|
Go
|
Go
|
fix docker cp
|
f950de575483f562bc56b7f42b67b00b3df1387b
|
<ide><path>daemon/archive.go
<ide> import (
<ide> "io"
<ide> "os"
<ide> "path/filepath"
<add> "strings"
<ide>
<ide> "github.com/docker/docker/api/types"
<ide> "github.com/docker/docker/pkg/archive"
<ide> func (container *Container) ExtractToDir(path string, noOverwriteDirNonDir bool,
<ide> // Use the resolved path relative to the container rootfs as the new
<ide> // absPath. This way we fully follow any symlinks in a volume that may
<ide> // lead back outside the volume.
<del> baseRel, err := filepath.Rel(container.basefs, resolvedPath)
<add> //
<add> // The Windows implementation of filepath.Rel in golang 1.4 does not
<add> // support volume style file path semantics. On Windows when using the
<add> // filter driver, we are guaranteed that the path will always be
<add> // a volume file path.
<add> var baseRel string
<add> if strings.HasPrefix(resolvedPath, `\\?\Volume{`) {
<add> if strings.HasPrefix(resolvedPath, container.basefs) {
<add> baseRel = resolvedPath[len(container.basefs):]
<add> if baseRel[:1] == `\` {
<add> baseRel = baseRel[1:]
<add> }
<add> }
<add> } else {
<add> baseRel, err = filepath.Rel(container.basefs, resolvedPath)
<add> }
<ide> if err != nil {
<ide> return err
<ide> }
| 1
|
Java
|
Java
|
improve webflux exception logging
|
196f3f8cc1aae9f3df06e9d961c62185e8730bfb
|
<ide><path>spring-web/src/main/java/org/springframework/web/server/adapter/HttpWebHandlerAdapter.java
<ide> /*
<del> * Copyright 2002-2017 the original author or authors.
<add> * Copyright 2002-2018 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> public ApplicationContext getApplicationContext() {
<ide> public Mono<Void> handle(ServerHttpRequest request, ServerHttpResponse response) {
<ide> ServerWebExchange exchange = createExchange(request, response);
<ide> return getDelegate().handle(exchange)
<del> .onErrorResume(ex -> handleFailure(response, ex))
<add> .onErrorResume(ex -> handleFailure(request, response, ex))
<ide> .then(Mono.defer(response::setComplete));
<ide> }
<ide>
<ide> protected ServerWebExchange createExchange(ServerHttpRequest request, ServerHttp
<ide> getCodecConfigurer(), getLocaleContextResolver(), this.applicationContext);
<ide> }
<ide>
<del> private Mono<Void> handleFailure(ServerHttpResponse response, Throwable ex) {
<add> private Mono<Void> handleFailure(ServerHttpRequest request, ServerHttpResponse response, Throwable ex) {
<ide> if (isDisconnectedClientError(ex)) {
<ide> if (disconnectedClientLogger.isTraceEnabled()) {
<ide> disconnectedClientLogger.trace("Looks like the client has gone away", ex);
<ide> else if (disconnectedClientLogger.isDebugEnabled()) {
<ide> return Mono.empty();
<ide> }
<ide> if (response.setStatusCode(HttpStatus.INTERNAL_SERVER_ERROR)) {
<del> logger.error("Failed to handle request", ex);
<add> logger.error("Failed to handle request [" + request.getMethod() + " "
<add> + request.getURI() + "]", ex);
<ide> return Mono.empty();
<ide> }
<ide> // After the response is committed, propagate errors to the server..
<ide><path>spring-web/src/main/java/org/springframework/web/server/handler/ResponseStatusExceptionHandler.java
<ide> /*
<del> * Copyright 2002-2017 the original author or authors.
<add> * Copyright 2002-2018 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> import reactor.core.publisher.Mono;
<ide>
<ide> import org.springframework.http.HttpStatus;
<add>import org.springframework.http.server.reactive.ServerHttpRequest;
<ide> import org.springframework.web.server.ResponseStatusException;
<ide> import org.springframework.web.server.ServerWebExchange;
<ide> import org.springframework.web.server.WebExceptionHandler;
<ide> public Mono<Void> handle(ServerWebExchange exchange, Throwable ex) {
<ide> if (ex instanceof ResponseStatusException) {
<ide> HttpStatus status = ((ResponseStatusException) ex).getStatus();
<ide> if (exchange.getResponse().setStatusCode(status)) {
<del> logger.trace(ex.getMessage());
<add> if (status.is5xxServerError()) {
<add> logger.error(buildMessage(exchange.getRequest(), ex));
<add> }
<add> else if (status == HttpStatus.BAD_REQUEST) {
<add> logger.warn(buildMessage(exchange.getRequest(), ex));
<add> }
<add> else {
<add> logger.trace(buildMessage(exchange.getRequest(), ex));
<add> }
<ide> return exchange.getResponse().setComplete();
<ide> }
<ide> }
<ide> return Mono.error(ex);
<ide> }
<ide>
<add> private String buildMessage(ServerHttpRequest request, Throwable ex) {
<add> return "Failed to handle request [" + request.getMethod() + " "
<add> + request.getURI() + "]: " + ex.getMessage();
<add> }
<add>
<ide> }
<ide>\ No newline at end of file
| 2
|
Text
|
Text
|
replace Linkedin with LinkedIn
|
d349399e47da4800f83f43b0706a5bc983f0a15e
|
<ide><path>guide/arabic/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<add>title: Add Free Code Camp in LinkedIn
<ide> localeTitle: أضف Free Code Camp في لينكدين
<ide> ---
<ide> ينضم LinkedIn إلى Free Code Camp كجامعة.
<ide><path>guide/chinese/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<add>title: Add Free Code Camp in LinkedIn
<ide> localeTitle: 在Linkedin中添加免费代码营
<ide> ---
<ide> LinkedIn将Free Code Camp视为一所大学。
<ide><path>guide/english/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<add>title: Add Free Code Camp in LinkedIn
<ide> ---
<ide> LinkedIn recognizes Free Code Camp as a university.
<ide>
<ide><path>guide/portuguese/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<del>localeTitle: Adicione o Code Camp grátis no Linkedin
<add>title: Add Free Code Camp in LinkedIn
<add>localeTitle: Adicione o Code Camp grátis no LinkedIn
<ide> ---
<ide> O LinkedIn reconhece o Free Code Camp como uma universidade.
<ide>
<ide><path>guide/russian/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<del>localeTitle: Добавить бесплатный кодовый лагерь в Linkedin
<add>title: Add Free Code Camp in LinkedIn
<add>localeTitle: Добавить бесплатный кодовый лагерь в LinkedIn
<ide> ---
<ide> LinkedIn признает Free Code Camp как университет.
<ide>
<ide><path>guide/spanish/miscellaneous/add-free-code-camp-in-linkedin/index.md
<ide> ---
<del>title: Add Free Code Camp in Linkedin
<del>localeTitle: Añadir Free Code Camp en Linkedin
<add>title: Add Free Code Camp in LinkedIn
<add>localeTitle: Añadir Free Code Camp en LinkedIn
<ide> ---
<ide> LinkedIn reconoce Free Code Camp como una universidad.
<ide>
| 6
|
Java
|
Java
|
fix registration of applicationstartupaware
|
77a658f51bbb257c018888b76b7149dae34c769c
|
<ide><path>spring-context/src/main/java/org/springframework/context/support/AbstractApplicationContext.java
<ide> import org.springframework.context.ApplicationEventPublisher;
<ide> import org.springframework.context.ApplicationEventPublisherAware;
<ide> import org.springframework.context.ApplicationListener;
<add>import org.springframework.context.ApplicationStartupAware;
<ide> import org.springframework.context.ConfigurableApplicationContext;
<ide> import org.springframework.context.EmbeddedValueResolverAware;
<ide> import org.springframework.context.EnvironmentAware;
<ide> protected void prepareBeanFactory(ConfigurableListableBeanFactory beanFactory) {
<ide> beanFactory.ignoreDependencyInterface(ApplicationEventPublisherAware.class);
<ide> beanFactory.ignoreDependencyInterface(MessageSourceAware.class);
<ide> beanFactory.ignoreDependencyInterface(ApplicationContextAware.class);
<del> beanFactory.ignoreDependencyInterface(ApplicationStartup.class);
<add> beanFactory.ignoreDependencyInterface(ApplicationStartupAware.class);
<ide>
<ide> // BeanFactory interface not registered as resolvable type in a plain factory.
<ide> // MessageSource registered (and found for autowiring) as a bean.
| 1
|
Go
|
Go
|
canonicalize stored paths
|
28842d3f093e69cf62a090a532aacba1e6ff6d1c
|
<ide><path>pkg/archive/archive.go
<ide> type tarAppender struct {
<ide> SeenFiles map[uint64]string
<ide> }
<ide>
<add>// canonicalTarName provides a platform-independent and consistent posix-style
<add>// path for files and directories to be archived regardless of the platform.
<add>func canonicalTarName(name string, isDir bool) (string, error) {
<add> name, err := canonicalTarNameForPath(name)
<add> if err != nil {
<add> return "", err
<add> }
<add>
<add> // suffix with '/' for directories
<add> if isDir && !strings.HasSuffix(name, "/") {
<add> name += "/"
<add> }
<add> return name, nil
<add>}
<add>
<ide> func (ta *tarAppender) addTarFile(path, name string) error {
<ide> fi, err := os.Lstat(path)
<ide> if err != nil {
<ide> func (ta *tarAppender) addTarFile(path, name string) error {
<ide> return err
<ide> }
<ide>
<del> if fi.IsDir() && !strings.HasSuffix(name, "/") {
<del> name = name + "/"
<add> name, err = canonicalTarName(name, fi.IsDir())
<add> if err != nil {
<add> return fmt.Errorf("tar: cannot canonicalize path: %v", err)
<ide> }
<del>
<ide> hdr.Name = name
<ide>
<ide> nlink, inode, err := setHeaderForSpecialDevice(hdr, ta, name, fi.Sys())
<ide><path>pkg/archive/archive_unix.go
<ide> import (
<ide> "github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
<ide> )
<ide>
<add>// canonicalTarNameForPath converts a platform-specific filepath
<add>// to a canonical posix-style path for tar archival. p is a relative
<add>// path.
<add>func canonicalTarNameForPath(p string) (string, error) {
<add> return p, nil // already unix-style
<add>}
<add>
<ide> func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (nlink uint32, inode uint64, err error) {
<ide> s, ok := stat.(*syscall.Stat_t)
<ide>
<ide><path>pkg/archive/archive_unix_test.go
<add>// +build !windows
<add>
<add>package archive
<add>
<add>import (
<add> "testing"
<add>)
<add>
<add>func TestCanonicalTarNameForPath(t *testing.T) {
<add> cases := []struct{ in, expected string }{
<add> {"foo", "foo"},
<add> {"foo/bar", "foo/bar"},
<add> {"foo/dir/", "foo/dir/"},
<add> }
<add> for _, v := range cases {
<add> if out, err := canonicalTarNameForPath(v.in); err != nil {
<add> t.Fatalf("cannot get canonical name for path: %s: %v", v.in, err)
<add> } else if out != v.expected {
<add> t.Fatalf("wrong canonical tar name. expected:%s got:%s", v.expected, out)
<add> }
<add> }
<add>}
<add>
<add>func TestCanonicalTarName(t *testing.T) {
<add> cases := []struct {
<add> in string
<add> isDir bool
<add> expected string
<add> }{
<add> {"foo", false, "foo"},
<add> {"foo", true, "foo/"},
<add> {"foo/bar", false, "foo/bar"},
<add> {"foo/bar", true, "foo/bar/"},
<add> }
<add> for _, v := range cases {
<add> if out, err := canonicalTarName(v.in, v.isDir); err != nil {
<add> t.Fatalf("cannot get canonical name for path: %s: %v", v.in, err)
<add> } else if out != v.expected {
<add> t.Fatalf("wrong canonical tar name. expected:%s got:%s", v.expected, out)
<add> }
<add> }
<add>}
<ide><path>pkg/archive/archive_windows.go
<ide> package archive
<ide>
<ide> import (
<add> "fmt"
<add> "strings"
<add>
<ide> "github.com/docker/docker/vendor/src/code.google.com/p/go/src/pkg/archive/tar"
<ide> )
<ide>
<add>// canonicalTarNameForPath converts a platform-specific filepath
<add>// to a canonical posix-style path for tar archival. p is a relative
<add>// path.
<add>func canonicalTarNameForPath(p string) (string, error) {
<add>	// windows: convert a windows-style relative path with backslashes
<add>	// into forward slashes. since windows does not allow '/' or '\'
<add>	// in file names, it is mostly safe to replace; however, we must
<add>	// check just in case
<add> if strings.Contains(p, "/") {
<add> return "", fmt.Errorf("windows path contains forward slash: %s", p)
<add> }
<add> return strings.Replace(p, "\\", "/", -1), nil
<add>}
<add>
<ide> func setHeaderForSpecialDevice(hdr *tar.Header, ta *tarAppender, name string, stat interface{}) (nlink uint32, inode uint64, err error) {
<ide> // do nothing. no notion of Rdev, Inode, Nlink in stat on Windows
<ide> return
<ide><path>pkg/archive/archive_windows_test.go
<add>// +build windows
<add>
<add>package archive
<add>
<add>import (
<add> "testing"
<add>)
<add>
<add>func TestCanonicalTarNameForPath(t *testing.T) {
<add> cases := []struct {
<add> in, expected string
<add> shouldFail bool
<add> }{
<add> {"foo", "foo", false},
<add> {"foo/bar", "___", true}, // unix-styled windows path must fail
<add> {`foo\bar`, "foo/bar", false},
<add> {`foo\bar`, "foo/bar/", false},
<add> }
<add> for _, v := range cases {
<add> if out, err := canonicalTarNameForPath(v.in); err != nil && !v.shouldFail {
<add> t.Fatalf("cannot get canonical name for path: %s: %v", v.in, err)
<add> } else if v.shouldFail && err == nil {
<add>			t.Fatalf("canonical path call should have failed with error. in=%s out=%s", v.in, out)
<add> } else if !v.shouldFail && out != v.expected {
<add> t.Fatalf("wrong canonical tar name. expected:%s got:%s", v.expected, out)
<add> }
<add> }
<add>}
<add>
<add>func TestCanonicalTarName(t *testing.T) {
<add> cases := []struct {
<add> in string
<add> isDir bool
<add> expected string
<add> }{
<add> {"foo", false, "foo"},
<add> {"foo", true, "foo/"},
<add> {`foo\bar`, false, "foo/bar"},
<add> {`foo\bar`, true, "foo/bar/"},
<add> }
<add> for _, v := range cases {
<add> if out, err := canonicalTarName(v.in, v.isDir); err != nil {
<add> t.Fatalf("cannot get canonical name for path: %s: %v", v.in, err)
<add> } else if out != v.expected {
<add> t.Fatalf("wrong canonical tar name. expected:%s got:%s", v.expected, out)
<add> }
<add> }
<add>}
| 5
|
PHP
|
PHP
|
use https for the www.cakephp.org url
|
45d22ff9e6c16972866b12ca59acd46a1ec30143
|
<ide><path>src/Http/Response.php
<ide> public function getCookies()
<ide> *
<ide> * ### Full URI
<ide> * ```
<del> * cors($request, 'http://www.cakephp.org');
<add> * cors($request, 'https://www.cakephp.org');
<ide> * ```
<ide> *
<ide> * ### URI with wildcard
<ide> * ```
<del> * cors($request, 'http://*.cakephp.org');
<add> * cors($request, 'https://*.cakephp.org');
<ide> * ```
<ide> *
<ide> * ### Ignoring the requested protocol
| 1
|
PHP
|
PHP
|
add language line for uploaded validation rule
|
1562407562859a880f5f494647d5c52f8af8d44e
|
<ide><path>resources/lang/en/validation.php
<ide> 'string' => 'The :attribute must be a string.',
<ide> 'timezone' => 'The :attribute must be a valid zone.',
<ide> 'unique' => 'The :attribute has already been taken.',
<add>    'uploaded' => 'The :attribute failed to upload.',
<ide> 'url' => 'The :attribute format is invalid.',
<ide>
<ide> /*
| 1
|
PHP
|
PHP
|
add info about new events to docblock
|
d2320d9f4a876fc5b3b266dd3163addc6c95602c
|
<ide><path>src/ORM/Table.php
<ide> public function exists($conditions)
<ide> * listeners will receive the entity and the options array as arguments. The type
<ide> * of operation performed (insert or update) can be determined by checking the
<ide> * entity's method `isNew`, true meaning an insert and false an update.
<add> * - Model.afterSaveCommit: Will be triggered after the transaction is committed
<add> *   for atomic save, listeners will receive the entity and the options array
<add> *   as arguments.
<ide> *
<ide> * This method will determine whether the passed entity needs to be
<ide> * inserted or updated in the database. It does that by checking the `isNew`
<ide> protected function _update($entity, $data)
<ide> *
<ide> * ### Events
<ide> *
<del> * - `beforeDelete` Fired before the delete occurs. If stopped the delete
<add> * - `Model.beforeDelete` Fired before the delete occurs. If stopped the delete
<ide> * will be aborted. Receives the event, entity, and options.
<del> * - `afterDelete` Fired after the delete has been successful. Receives
<add> * - `Model.afterDelete` Fired after the delete has been successful. Receives
<ide> * the event, entity, and options.
<add> * - `Model.afterDeleteCommit` Fired after the transaction is committed for
<add> * an atomic delete. Receives the event, entity, and options.
<ide> *
<ide> * The options argument will be converted into an \ArrayObject instance
<ide> * for the duration of the callbacks, this allows listeners to modify
| 1
|
Text
|
Text
|
remove 2.0 notice on main docs
|
280dfa4343fb7cf188f4573361bbec447b8ea7c2
|
<ide><path>docs/index.md
<ide>
<ide> **A toolkit for building well-connected, self-describing Web APIs.**
<ide>
<del>---
<del>
<del>**Note**: This documentation is for the 2.0 version of REST framework. If you are looking for earlier versions please see the [0.4.x branch][0.4] on GitHub.
<del>
<del>---
<del>
<ide> Django REST framework is a lightweight library that makes it easy to build Web APIs. It is designed as a modular and easy to customize architecture, based on Django's class based views.
<ide>
<ide> Web APIs built using REST framework are fully self-describing and web browseable - a huge useability win for your developers. It also supports a wide range of media types, authentication and permission policies out of the box.
| 1
|
Ruby
|
Ruby
|
add assertion helpers to pendingmigrationstest
|
7eff9d6f683239140303dd57af97e06d899309b5
|
<ide><path>activerecord/test/cases/migration/pending_migrations_test.rb
<ide> # frozen_string_literal: true
<ide>
<ide> require "cases/helper"
<add>require "active_support/core_ext/hash/deep_merge"
<ide>
<ide> module ActiveRecord
<ide> class Migration
<ide> class PendingMigrationsTest < ActiveRecord::TestCase
<ide> @original_configurations = ActiveRecord::Base.configurations
<ide> ActiveRecord::Base.configurations = base_config
<ide> ActiveRecord::Base.establish_connection(:primary)
<del>
<del> @app = Minitest::Mock.new
<ide> end
<ide>
<ide> teardown do
<ide> class #{name.classify} < ActiveRecord::Migration::Current
<ide>
<ide> def test_errors_if_pending
<ide> create_migration "01", "create_foo"
<del>
<del> assert_raises ActiveRecord::PendingMigrationError do
<del> CheckPending.new(@app).call({})
<del> end
<del>
<del> # Continues failing
<del> assert_raises ActiveRecord::PendingMigrationError do
<del> CheckPending.new(@app).call({})
<del> end
<add> assert_pending_migrations
<ide> end
<ide>
<ide> def test_checks_if_supported
<ide> run_migrations
<del>
<del> check_pending = CheckPending.new(@app)
<del>
<del> @app.expect :call, nil, [{}]
<del> check_pending.call({})
<del> @app.verify
<del>
<del> # With cached result
<del> @app.expect :call, nil, [{}]
<del> check_pending.call({})
<del> @app.verify
<add> assert_no_pending_migrations
<ide> end
<ide>
<ide> def test_okay_with_no_migrations
<del> check_pending = CheckPending.new(@app)
<del>
<del> @app.expect :call, nil, [{}]
<del> check_pending.call({})
<del> @app.verify
<del>
<del> # With cached result
<del> @app.expect :call, nil, [{}]
<del> check_pending.call({})
<del> @app.verify
<add> assert_no_pending_migrations
<ide> end
<ide>
<ide> # Regression test for https://github.com/rails/rails/pull/29759
<ide> def test_understands_migrations_created_out_of_order
<ide> # With a prior file before even initialization
<ide> create_migration "05", "create_bar"
<ide> quietly { run_migrations }
<del>
<del> check_pending = CheckPending.new(@app)
<del>
<del> @app.expect :call, nil, [{}]
<del> check_pending.call({})
<del> @app.verify
<add> assert_no_pending_migrations
<ide>
<ide> # It understands the new migration created at 01
<ide> create_migration "01", "create_foo"
<del> assert_raises ActiveRecord::PendingMigrationError do
<del> check_pending.call({})
<del> end
<add> assert_pending_migrations
<ide> end
<ide>
<ide> def test_with_multiple_database
<ide> create_migration "01", "create_bar", database: :secondary
<del>
<del> assert_raises ActiveRecord::PendingMigrationError do
<del> CheckPending.new(@app).call({})
<del> end
<add> assert_pending_migrations
<ide>
<ide> ActiveRecord::Base.establish_connection(:secondary)
<ide> quietly { run_migrations }
<ide>
<ide> ActiveRecord::Base.establish_connection(:primary)
<ide>
<del> @app.expect :call, nil, [{}]
<del> CheckPending.new(@app).call({})
<del> @app.verify
<add> assert_no_pending_migrations
<ide>
<ide> # Now check exclusion if database_tasks is set to false for the db_config
<ide> create_migration "02", "create_foo", database: :secondary
<del> assert_raises ActiveRecord::PendingMigrationError do
<del> CheckPending.new(@app).call({})
<del> end
<del>
<del> new_config = base_config
<del> new_config[ActiveRecord::ConnectionHandling::DEFAULT_ENV.call][:secondary][:database_tasks] = false
<del> ActiveRecord::Base.configurations = new_config
<add> assert_pending_migrations
<ide>
<del> @app.expect :call, nil, [{}]
<del> CheckPending.new(@app).call({})
<del> @app.verify
<add> ActiveRecord::Base.configurations = base_config(secondary: { database_tasks: false })
<add> assert_no_pending_migrations
<ide> end
<ide>
<ide> def test_with_stdlib_logger
<del> old, ActiveRecord::Base.logger = ActiveRecord::Base.logger, ::Logger.new($stdout)
<del> quietly do
<del> assert_nothing_raised { ActiveRecord::Migration::CheckPending.new(Proc.new { }).call({}) }
<del> end
<add> old, ActiveRecord::Base.logger = ActiveRecord::Base.logger, ::Logger.new(StringIO.new)
<add> assert_nothing_raised { CheckPending.new(proc { }).call({}) }
<ide> ensure
<ide> ActiveRecord::Base.logger = old
<ide> end
<ide>
<ide> private
<add> def assert_pending_migrations
<add> # Do twice to test that the error continues to be raised.
<add> 2.times do
<add> assert_raises ActiveRecord::PendingMigrationError do
<add> CheckPending.new(proc { flunk }).call({})
<add> end
<add> end
<add> end
<add>
<add> def assert_no_pending_migrations
<add> app = Minitest::Mock.new
<add> check_pending = CheckPending.new(app)
<add>
<add> # Do twice to also test the cached result.
<add> 2.times do
<add> app.expect :call, nil, [{}]
<add> check_pending.call({})
<add> app.verify
<add> end
<add> end
<add>
<ide> def database_path_for(database_name)
<ide> File.join(@tmp_dir, "#{database_name}.sqlite3")
<ide> end
<ide> def migrations_path_for(database_name)
<ide> File.join(@tmp_dir, "#{database_name}-migrations")
<ide> end
<ide>
<del> def base_config
<add> def base_config(additional_config = {})
<ide> {
<ide> ActiveRecord::ConnectionHandling::DEFAULT_ENV.call => {
<ide> primary: {
<ide> def base_config
<ide> adapter: "sqlite3",
<ide> database: database_path_for(:secondary),
<ide> migrations_paths: migrations_path_for(:secondary),
<del> }
<del> }
<add> },
<add> }.deep_merge(additional_config)
<ide> }
<ide> end
<ide> end
| 1
|
PHP
|
PHP
|
remove unneeded "use" statements
|
66586166f23a2352fa0a3499b9c356e2acd38817
|
<ide><path>src/Auth/DigestAuthenticate.php
<ide> namespace Cake\Auth;
<ide>
<ide> use Cake\Controller\ComponentRegistry;
<del>use Cake\Core\Configure;
<ide> use Cake\Http\ServerRequest;
<ide> use Cake\Utility\Security;
<ide>
<ide><path>src/Collection/Collection.php
<ide> use ArrayIterator;
<ide> use InvalidArgumentException;
<ide> use IteratorIterator;
<del>use LogicException;
<ide> use Serializable;
<ide> use Traversable;
<ide>
<ide><path>src/Command/HelpCommand.php
<ide> use Cake\Console\ConsoleIo;
<ide> use Cake\Console\ConsoleOptionParser;
<ide> use Cake\Console\ConsoleOutput;
<del>use Cake\Utility\Inflector;
<ide> use SimpleXMLElement;
<ide>
<ide> /**
<ide><path>src/Console/CommandScanner.php
<ide> use Cake\Core\Plugin;
<ide> use Cake\Filesystem\Folder;
<ide> use Cake\Utility\Inflector;
<del>use InvalidArgumentException;
<ide>
<ide> /**
<ide> * Used by CommandCollection and CommandTask to scan the filesystem
<ide><path>src/Core/BasePlugin.php
<ide> */
<ide> namespace Cake\Core;
<ide>
<del>use Cake\Event\EventManagerInterface;
<ide> use InvalidArgumentException;
<ide> use ReflectionClass;
<ide>
<ide><path>src/Core/PluginApplicationInterface.php
<ide> namespace Cake\Core;
<ide>
<ide> use Cake\Event\EventDispatcherInterface;
<del>use Cake\Event\EventManagerInterface;
<ide>
<ide> /**
<ide> * Interface for Applications that leverage plugins & events.
<ide><path>src/Core/PluginCollection.php
<ide> */
<ide> namespace Cake\Core;
<ide>
<del>use ArrayIterator;
<ide> use Cake\Core\Exception\MissingPluginException;
<ide> use Countable;
<ide> use InvalidArgumentException;
<ide> use Iterator;
<del>use RuntimeException;
<ide>
<ide> /**
<ide> * Plugin Collection
<ide><path>src/Core/PluginInterface.php
<ide> */
<ide> namespace Cake\Core;
<ide>
<del>use Cake\Event\EventManagerInterface;
<del>
<ide> /**
<ide> * Plugin Interface
<ide> */
<ide><path>src/Database/DriverInterface.php
<ide> namespace Cake\Database;
<ide>
<ide> use Cake\Database\Query;
<del>use Cake\Database\Statement\PDOStatement;
<del>use InvalidArgumentException;
<del>use PDO;
<del>use PDOException;
<ide>
<ide> /**
<ide> * Interface for database driver.
<ide><path>src/Database/SchemaCache.php
<ide>
<ide> use Cake\Cache\Cache;
<ide> use Cake\Database\Connection;
<del>use Cake\Datasource\ConnectionManager;
<del>use InvalidArgumentException;
<ide> use RuntimeException;
<ide>
<ide> /**
<ide><path>src/Database/Type/BinaryUuidType.php
<ide>
<ide> use Cake\Core\Exception\Exception;
<ide> use Cake\Database\Driver;
<del>use Cake\Database\Driver\Sqlserver;
<ide> use Cake\Database\Type;
<ide> use Cake\Database\TypeInterface;
<ide> use Cake\Utility\Text;
<ide><path>src/Database/Type/DateType.php
<ide> */
<ide> namespace Cake\Database\Type;
<ide>
<del>use Cake\Database\Driver;
<ide> use DateTime;
<ide>
<ide> /**
<ide><path>src/Routing/Middleware/RoutingMiddleware.php
<ide> namespace Cake\Routing\Middleware;
<ide>
<ide> use Cake\Cache\Cache;
<del>use Cake\Core\Configure;
<ide> use Cake\Core\PluginApplicationInterface;
<ide> use Cake\Http\BaseApplication;
<ide> use Cake\Http\MiddlewareQueue;
<ide><path>src/Routing/Route/Route.php
<ide> */
<ide> namespace Cake\Routing\Route;
<ide>
<del>use Cake\Http\ServerRequest;
<ide> use Cake\Http\ServerRequestFactory;
<ide> use Cake\Routing\Router;
<ide> use InvalidArgumentException;
<ide><path>src/Routing/Router.php
<ide> use Cake\Routing\Exception\MissingRouteException;
<ide> use Cake\Utility\Inflector;
<ide> use Psr\Http\Message\ServerRequestInterface;
<del>use RuntimeException;
<ide>
<ide> /**
<ide> * Parses the request URL into controller, action, and parameters. Uses the connected routes
<ide><path>src/Shell/Task/CommandTask.php
<ide> */
<ide> namespace Cake\Shell\Task;
<ide>
<del>use Cake\Console\Command;
<ide> use Cake\Console\Shell;
<ide> use Cake\Core\App;
<ide> use Cake\Core\Plugin;
<ide><path>src/TestSuite/ConsoleIntegrationTestCase.php
<ide> */
<ide> namespace Cake\TestSuite;
<ide>
<del>use Cake\Console\Command;
<ide> use Cake\Console\CommandRunner;
<ide> use Cake\Console\ConsoleInput;
<ide> use Cake\Console\ConsoleIo;
<ide><path>tests/TestCase/Console/CommandCollectionTest.php
<ide> */
<ide> namespace Cake\Test\Console;
<ide>
<del>use Cake\Console\Command;
<ide> use Cake\Console\CommandCollection;
<ide> use Cake\Core\Configure;
<ide> use Cake\Core\Plugin;
<ide> use Cake\Shell\I18nShell;
<ide> use Cake\Shell\RoutesShell;
<ide> use Cake\TestSuite\TestCase;
<del>use InvalidArgumentException;
<ide> use stdClass;
<ide> use TestApp\Command\DemoCommand;
<ide>
<ide><path>tests/TestCase/Console/CommandFactoryTest.php
<ide> namespace Cake\Test\TestCase\Console;
<ide>
<ide> use Cake\Console\CommandFactory;
<del>use Cake\Console\ConsoleIo;
<ide> use Cake\TestSuite\TestCase;
<ide> use InvalidArgumentException;
<ide> use TestApp\Command\DemoCommand;
<ide><path>tests/TestCase/Console/CommandRunnerTest.php
<ide> use Cake\Console\Shell;
<ide> use Cake\Core\Configure;
<ide> use Cake\Core\ConsoleApplicationInterface;
<del>use Cake\Event\EventList;
<ide> use Cake\Event\EventManager;
<ide> use Cake\Http\BaseApplication;
<ide> use Cake\Routing\Router;
<ide> use Cake\TestSuite\Stub\ConsoleOutput;
<ide> use Cake\TestSuite\TestCase;
<ide> use InvalidArgumentException;
<ide> use TestApp\Command\DemoCommand;
<del>use TestApp\Http\EventApplication;
<ide> use TestApp\Shell\SampleShell;
<ide>
<ide> /**
<ide><path>tests/TestCase/Core/BasePluginTest.php
<ide> use Cake\Core\Configure;
<ide> use Cake\Core\Plugin;
<ide> use Cake\Core\PluginApplicationInterface;
<del>use Cake\Event\EventManager;
<ide> use Cake\Http\MiddlewareQueue;
<ide> use Cake\TestSuite\TestCase;
<ide> use Company\TestPluginThree\Plugin as TestPluginThree;
<ide><path>tests/TestCase/Database/ConnectionTest.php
<ide> use Cake\Database\Exception\NestedTransactionRollbackException;
<ide> use Cake\Database\Log\LoggingStatement;
<ide> use Cake\Database\Log\QueryLogger;
<del>use Cake\Database\Retry\CommandRetry;
<del>use Cake\Database\Retry\ReconnectStrategy;
<ide> use Cake\Database\StatementInterface;
<ide> use Cake\Datasource\ConnectionManager;
<ide> use Cake\Log\Log;
<ide><path>tests/TestCase/Database/SchemaCacheTest.php
<ide> use Cake\Database\SchemaCache;
<ide> use Cake\Database\Schema\CachedCollection;
<ide> use Cake\Datasource\ConnectionManager;
<del>use Cake\ORM\Entity;
<ide> use Cake\TestSuite\TestCase;
<ide>
<ide> /**
<ide><path>tests/TestCase/Database/Type/BinaryUuidTypeTest.php
<ide> */
<ide> namespace Cake\Test\TestCase\Database\Type;
<ide>
<del>use Cake\Database\Type;
<ide> use Cake\Database\Type\BinaryUuidType;
<ide> use Cake\TestSuite\TestCase;
<ide> use Cake\Utility\Text;
<ide><path>tests/TestCase/Http/ActionDispatcherTest.php
<ide> use Cake\Http\ServerRequest;
<ide> use Cake\Http\Session;
<ide> use Cake\Routing\DispatcherFactory;
<del>use Cake\Routing\Filter\ControllerFactoryFilter;
<ide> use Cake\Routing\Router;
<ide> use Cake\TestSuite\TestCase;
<ide>
<ide><path>tests/TestCase/Http/ServerTest.php
<ide>
<ide> use Cake\Core\HttpApplicationInterface;
<ide> use Cake\Event\Event;
<del>use Cake\Event\EventList;
<ide> use Cake\Event\EventManager;
<ide> use Cake\Http\BaseApplication;
<ide> use Cake\Http\CallbackStream;
<ide> use Cake\Http\MiddlewareQueue;
<ide> use Cake\Http\Server;
<ide> use Cake\TestSuite\TestCase;
<ide> use InvalidArgumentException;
<del>use Psr\Http\Message\ResponseInterface;
<ide> use RuntimeException;
<ide> use TestApp\Http\BadResponseApplication;
<del>use TestApp\Http\EventApplication;
<ide> use TestApp\Http\InvalidMiddlewareApplication;
<ide> use TestApp\Http\MiddlewareApplication;
<ide> use Zend\Diactoros\Response;
<ide><path>tests/TestCase/Mailer/EmailTest.php
<ide> use Cake\Mailer\Email;
<ide> use Cake\Mailer\Transport\DebugTransport;
<ide> use Cake\TestSuite\TestCase;
<del>use Cake\View\Exception\MissingTemplateException;
<ide> use Exception;
<ide> use SimpleXmlElement;
<ide>
<ide><path>tests/TestCase/ORM/TableRegressionTest.php
<ide> */
<ide> namespace Cake\Test\TestCase\ORM;
<ide>
<del>use Cake\ORM\TableRegistry;
<ide> use Cake\TestSuite\TestCase;
<ide> use InvalidArgumentException;
<ide>
<ide><path>tests/TestCase/Routing/Middleware/RoutingMiddlewareTest.php
<ide> namespace Cake\Test\TestCase\Routing\Middleware;
<ide>
<ide> use Cake\Cache\Cache;
<del>use Cake\Core\Configure;
<ide> use Cake\Routing\Middleware\RoutingMiddleware;
<ide> use Cake\Routing\RouteBuilder;
<ide> use Cake\Routing\RouteCollection;
<ide><path>tests/TestCase/Shell/Task/LoadTaskTest.php
<ide> namespace Cake\Test\TestCase\Shell\Task;
<ide>
<ide> use Cake\Console\Shell;
<del>use Cake\Core\Plugin;
<ide> use Cake\Filesystem\File;
<ide> use Cake\TestSuite\ConsoleIntegrationTestCase;
<ide>
<ide><path>tests/TestCase/TestSuite/CookieEncryptedUsingControllerTest.php
<ide> */
<ide> namespace Cake\Test\TestCase\Controller;
<ide>
<del>use Cake\Routing\DispatcherFactory;
<ide> use Cake\Routing\Router;
<ide> use Cake\TestSuite\IntegrationTestCase;
<ide> use Cake\Utility\Security;
<ide><path>tests/TestCase/Utility/SecurityTest.php
<ide> use Cake\Utility\Crypto\Mcrypt;
<ide> use Cake\Utility\Crypto\OpenSsl;
<ide> use Cake\Utility\Security;
<del>use RuntimeException;
<ide>
<ide> /**
<ide> * SecurityTest class