| content_type | main_lang | message | sha | patch | file_count |
|---|---|---|---|---|---|
Ruby
|
Ruby
|
remove intermediate variable
|
a7ca4bc3002d9977e988d5407f702b773e52a135
|
<ide><path>Library/Contributions/cmd/brew-test-bot.rb
<ide> def run
<ide> end
<ide> Process.wait(pid)
<ide>
<del> end_time = Time.now
<del> @time = end_time - start_time
<add> @time = Time.now - start_time
<ide>
<ide> success = $?.success?
<ide> @status = success ? :passed : :failed
| 1
|
Text
|
Text
|
add book recommendation
|
70ae03aa3a89f5a267d7b7d9b8c482d5af4d6768
|
<ide><path>guide/english/book-recommendations/javascript/index.md
<ide> ---
<ide> title: Books on JavaScript
<ide> ---
<del> ### List of Books
<add>## List of Books
<ide>
<del>#### Eloquent JavaScript
<add>### Eloquent JavaScript
<ide>
<ide> One of the best books on JavaScript. A must-read for both beginners and intermediate programmers who code in JavaScript. The best part is that the e-book is available for free.
<ide>
<ide> - [E-book](https://eloquentjavascript.net/)(free)
<ide> - [Amazon](https://www.amazon.com/gp/product/1593275846/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1593275846&linkCode=as2&tag=marijhaver-20&linkId=VPXXXSRYC5COG5R5)
<ide>
<del>#### You Don't Know JS
<add>### You Don't Know JS
<ide>
<ide> Six book series by Kyle Simpson. In-depth series on every aspect of JavaScript. This is a must-read for every JS developer who wants to understand exactly how things like prototypes, closures, and the `this` keyword work.
<ide>
<ide> - [Github](https://github.com/getify/You-Dont-Know-JS)(free)
<ide> - [Kindle Version, Amazon](https://www.amazon.com/You-Dont-Know-Js-Book/dp/B01AY9P0P6)
<ide>
<del>#### JavaScript: The Good Parts
<add>### JavaScript: The Good Parts
<ide>
<ide> Book by the "grandfather" of JavaScript, Douglas Crockford. He discusses both the "good" and "bad" aspects of JavaScript. Note: The 1st edition of this book written in 2008 has never been updated and many of the things the book discusses are now out of date. It is good reading for someone who wants to understand why Crockford was so important to the history of JavaScript, but many of the techniques he recommends in the book are no longer best practice.
<ide>
<ide> - [Amazon](https://www.amazon.com/JavaScript-Good-Parts-Douglas-Crockford/dp/0596517742)
<ide>
<del>#### JavaScript: Info
<add>### JavaScript: Info
<ide>
<ide> A collection of articles covering the basics (core language and working with a browser) as well as advanced topics with concise explanations. Available as an e-book with pay and also as an online tutorial.
<ide>
<ide> - [Online](https://javascript.info/)
<ide> - [E-book](https://javascript.info/ebook)
<ide>
<del>#### Javascript and Jquery: Interactive Front End Web Development
<add>### Exploring ES6
<add>
<add>A book for people who already know JavaScript and want to learn ES6.
<add>- [Online](http://exploringjs.com/)
<add>
<add>### Javascript and Jquery: Interactive Front End Web Development
<ide> This full-color book will show you how to make your websites more interactive and your interfaces more interesting and intuitive. Author is Jon Duckett who also penned HTML & CSS: Design and Build Websites. *Does not cover ES6*
<ide>
<ide> - [Amazon](https://www.amazon.com/JavaScript-JQuery-Interactive-Front-End-Development/dp/1118531647)
<ide>
<del>
<del>#### Other Resources
<add>### Other Resources
<ide>
<ide> - A selection of online free resources; including e-books for JavaScript are available at [JS Books](https://jsbooks.revolunet.com/)
| 1
|
Mixed
|
Java
|
add setnativevalue command to reactswitchmanager
|
3560093115d1825e5e01ad0d8e65d5a90a35fe2c
|
<ide><path>Libraries/Components/Switch/AndroidSwitchNativeComponent.js
<ide>
<ide> 'use strict';
<ide>
<add>import * as React from 'react';
<add>
<ide> import type {
<ide> WithDefault,
<ide> BubblingEventHandler,
<ide> } from 'react-native/Libraries/Types/CodegenTypes';
<ide>
<add>import codegenNativeCommands from 'react-native/Libraries/Utilities/codegenNativeCommands';
<ide> import codegenNativeComponent from 'react-native/Libraries/Utilities/codegenNativeComponent';
<ide> import type {HostComponent} from 'react-native/Libraries/Renderer/shims/ReactNativeTypes';
<ide>
<ide> type NativeProps = $ReadOnly<{|
<ide> onChange?: BubblingEventHandler<SwitchChangeEvent>,
<ide> |}>;
<ide>
<add>type NativeType = HostComponent<NativeProps>;
<add>
<add>interface NativeCommands {
<add> +setNativeValue: (
<add> viewRef: React.ElementRef<NativeType>,
<add> value: boolean,
<add> ) => void;
<add>}
<add>
<add>export const Commands: NativeCommands = codegenNativeCommands<NativeCommands>({
<add> supportedCommands: ['setNativeValue'],
<add>});
<add>
<ide> export default (codegenNativeComponent<NativeProps>('AndroidSwitch', {
<ide> interfaceOnly: true,
<del>}): HostComponent<NativeProps>);
<add>}): NativeType);
<ide><path>ReactAndroid/src/main/java/com/facebook/react/viewmanagers/AndroidSwitchManagerDelegate.java
<ide>
<ide> import android.view.View;
<ide> import androidx.annotation.Nullable;
<add>import com.facebook.react.bridge.ReadableArray;
<ide> import com.facebook.react.uimanager.BaseViewManagerDelegate;
<ide> import com.facebook.react.uimanager.BaseViewManagerInterface;
<ide> import com.facebook.react.uimanager.LayoutShadowNode;
<ide> public void setProperty(T view, String propName, @Nullable Object value) {
<ide> super.setProperty(view, propName, value);
<ide> }
<ide> }
<add>
<add> public void receiveCommand(AndroidSwitchManagerInterface<T> viewManager, T view, String commandName, ReadableArray args) {
<add> switch (commandName) {
<add> case "setNativeValue":
<add> viewManager.setNativeValue(view, args.getBoolean(0));
<add> break;
<add> }
<add> }
<ide> }
<ide><path>ReactAndroid/src/main/java/com/facebook/react/viewmanagers/AndroidSwitchManagerInterface.java
<ide> void setOn(T view, boolean value);
<ide> void setThumbTintColor(T view, @Nullable Integer value);
<ide> void setTrackTintColor(T view, @Nullable Integer value);
<add> void setNativeValue(T view, boolean value);
<ide> }
<ide><path>ReactAndroid/src/main/java/com/facebook/react/views/switchview/ReactSwitchManager.java
<ide> import android.content.Context;
<ide> import android.view.View;
<ide> import android.widget.CompoundButton;
<add>import androidx.annotation.NonNull;
<ide> import androidx.annotation.Nullable;
<ide> import com.facebook.react.bridge.ReactContext;
<add>import com.facebook.react.bridge.ReadableArray;
<ide> import com.facebook.react.bridge.ReadableMap;
<ide> import com.facebook.react.uimanager.LayoutShadowNode;
<ide> import com.facebook.react.uimanager.PixelUtil;
<ide> public void setEnabled(ReactSwitch view, boolean enabled) {
<ide> @Override
<ide> @ReactProp(name = ViewProps.ON)
<ide> public void setOn(ReactSwitch view, boolean on) {
<del> this.setValue(view, on);
<add> setValueInternal(view, on);
<ide> }
<ide>
<ide> @Override
<ide> @ReactProp(name = "value")
<ide> public void setValue(ReactSwitch view, boolean value) {
<del> // we set the checked change listener to null and then restore it so that we don't fire an
<del> // onChange event to JS when JS itself is updating the value of the switch
<del> view.setOnCheckedChangeListener(null);
<del> view.setOn(value);
<del> view.setOnCheckedChangeListener(ON_CHECKED_CHANGE_LISTENER);
<add> setValueInternal(view, value);
<ide> }
<ide>
<ide> @Override
<ide> public void setTrackTintColor(ReactSwitch view, @Nullable Integer color) {
<ide> view.setTrackColor(color);
<ide> }
<ide>
<add> @Override
<add> public void setNativeValue(ReactSwitch view, boolean value) {
<add> // TODO(T52835863): Implement when view commands start using delegates generated by JS.
<add> }
<add>
<add> @Override
<add> public void receiveCommand(
<add> @NonNull ReactSwitch view, String commandId, @Nullable ReadableArray args) {
<add> switch (commandId) {
<add> case "setNativeValue":
<add> setValueInternal(view, args != null && args.getBoolean(0));
<add> break;
<add> }
<add> }
<add>
<ide> @Override
<ide> protected void addEventEmitters(final ThemedReactContext reactContext, final ReactSwitch view) {
<ide> view.setOnCheckedChangeListener(ON_CHECKED_CHANGE_LISTENER);
<ide> public long measure(
<ide> PixelUtil.toDIPFromPixel(view.getMeasuredWidth()),
<ide> PixelUtil.toDIPFromPixel(view.getMeasuredHeight()));
<ide> }
<add>
<add> private static void setValueInternal(ReactSwitch view, boolean value) {
<add> // we set the checked change listener to null and then restore it so that we don't fire an
<add> // onChange event to JS when JS itself is updating the value of the switch
<add> view.setOnCheckedChangeListener(null);
<add> view.setOn(value);
<add> view.setOnCheckedChangeListener(ON_CHECKED_CHANGE_LISTENER);
<add> }
<ide> }
| 4
|
Javascript
|
Javascript
|
pass env variables to child process on z/os
|
02aa8c22c26220e16616a88370d111c0229efe5e
|
<ide><path>lib/child_process.js
<ide> const {
<ide>
<ide> const MAX_BUFFER = 1024 * 1024;
<ide>
<add>const isZOS = process.platform === 'os390';
<add>
<ide> /**
<ide> * Spawns a new Node.js process + fork.
<ide> * @param {string|URL} modulePath
<ide> ObjectDefineProperty(execFile, promisify.custom, {
<ide> value: customPromiseExecFunction(execFile)
<ide> });
<ide>
<add>function copyProcessEnvToEnv(env, name, optionEnv) {
<add> if (process.env[name] &&
<add> (!optionEnv ||
<add> !ObjectPrototypeHasOwnProperty(optionEnv, name))) {
<add> env[name] = process.env[name];
<add> }
<add>}
<add>
<ide> function normalizeSpawnArguments(file, args, options) {
<ide> validateString(file, 'file');
<ide>
<ide> function normalizeSpawnArguments(file, args, options) {
<ide>
<ide> // process.env.NODE_V8_COVERAGE always propagates, making it possible to
<ide> // collect coverage for programs that spawn with white-listed environment.
<del> if (process.env.NODE_V8_COVERAGE &&
<del> !ObjectPrototypeHasOwnProperty(options.env || {}, 'NODE_V8_COVERAGE')) {
<del> env.NODE_V8_COVERAGE = process.env.NODE_V8_COVERAGE;
<add> copyProcessEnvToEnv(env, 'NODE_V8_COVERAGE', options.env);
<add>
<add> if (isZOS) {
<add> // The following environment variables must always propagate if set.
<add> copyProcessEnvToEnv(env, '_BPXK_AUTOCVT', options.env);
<add> copyProcessEnvToEnv(env, '_CEE_RUNOPTS', options.env);
<add> copyProcessEnvToEnv(env, '_TAG_REDIR_ERR', options.env);
<add> copyProcessEnvToEnv(env, '_TAG_REDIR_IN', options.env);
<add> copyProcessEnvToEnv(env, '_TAG_REDIR_OUT', options.env);
<add> copyProcessEnvToEnv(env, 'STEPLIB', options.env);
<add> copyProcessEnvToEnv(env, 'LIBPATH', options.env);
<add> copyProcessEnvToEnv(env, '_EDC_SIG_DFLT', options.env);
<add> copyProcessEnvToEnv(env, '_EDC_SUSV3', options.env);
<ide> }
<ide>
<ide> let envKeys = [];
| 1
|
Python
|
Python
|
fix outdated docs link
|
fb47ba6a394b9b7afa2be4c9b10aedef5bad0cc8
|
<ide><path>setup.py
<ide> def parse_setuppy_commands():
<ide> return True
<ide>
<ide>
<add>def get_docs_url():
<add> if not ISRELEASED:
<add> return "https://numpy.org/devdocs"
<add> else:
<add> # For releases, this URL ends up on PyPI.
<add> # By pinning the version, users looking at old PyPI releases can get
<add> # to the associated docs easily.
<add> return "https://numpy.org/doc/{}.{}".format(MAJOR, MINOR)
<add>
<add>
<ide> def setup_package():
<ide> src_path = os.path.dirname(os.path.abspath(__file__))
<ide> old_path = os.getcwd()
<ide> def setup_package():
<ide> download_url = "https://pypi.python.org/pypi/numpy",
<ide> project_urls={
<ide> "Bug Tracker": "https://github.com/numpy/numpy/issues",
<del> "Documentation": "https://docs.scipy.org/doc/numpy/",
<add> "Documentation": get_docs_url(),
<ide> "Source Code": "https://github.com/numpy/numpy",
<ide> },
<ide> license = 'BSD',
| 1
|
Python
|
Python
|
update dd node driver comments
|
3b947b311940bdd0fa75a03029095db34eb7a605
|
<ide><path>libcloud/compute/drivers/dimensiondata.py
<ide> class DimensionDataNodeDriver(NodeDriver):
<ide> """
<ide> DimensionData node driver.
<add> Default api_version is used unless specified.
<ide> """
<ide>
<ide> selected_region = None
| 1
|
Go
|
Go
|
increase verbosity for mem & cpu test
|
05a76477e6a70975b1c60637c3f8e2d7a507c490
|
<ide><path>integration-cli/docker_cli_run_test.go
<ide> func TestDockerRunEchoStdoutWithCPUAndMemoryLimit(t *testing.T) {
<ide> errorOut(err, t, out)
<ide>
<ide> if out != "test\n" {
<del> t.Errorf("container should've printed 'test'")
<add> t.Errorf("container should've printed 'test', got %q instead", out)
<ide> }
<ide>
<ide> deleteAllContainers()
| 1
|
Text
|
Text
|
move sample commands into code blocks
|
292361eaf04be33a2b6641dde9fb01701637870f
|
<ide><path>docs/How-To-Open-a-Homebrew-Pull-Request.md
<ide>
<ide> The following commands are used by Homebrew contributors to set up a fork of Homebrew's Git repository on GitHub, create a new branch and create a GitHub pull request ("PR") of the changes in that branch.
<ide>
<del>Depending on the change you want to make, you need to send the pull request to the appropriate one of Homebrew's main repositories. If you want to submit a change to Homebrew core code (the `brew` implementation), you should open the pull request on [Homebrew/brew](https://github.com/Homebrew/brew). If you want to submit a change for a formula, you should open the pull request on [the `homebrew/core` tap](https://github.com/Homebrew/homebrew-core) or another [official tap](https://github.com/Homebrew), based on the formula type.
<add>Depending on the change you want to make, you need to send the pull request to the appropriate one of Homebrew's main repositories. If you want to submit a change to Homebrew core code (the `brew` implementation), you should open the pull request on [Homebrew/brew](https://github.com/Homebrew/brew). If you want to submit a change for a formula, you should open the pull request on the [homebrew/core](https://github.com/Homebrew/homebrew-core) tap or another [official tap](https://github.com/Homebrew), based on the formula type.
<ide>
<ide> ## Submit a new version of an existing formula
<ide> 1. Use `brew bump-formula-pr` to do everything (i.e. forking, committing, pushing) with a single command. Run `brew bump-formula-pr --help` to learn more.
<ide> Depending on the change you want to make, you need to send the pull request to t
<ide>
<ide> 1. [Fork the Homebrew/brew repository on GitHub](https://github.com/Homebrew/brew/fork).
<ide> * This creates a personal remote repository that you can push to. This is needed because only Homebrew maintainers have push access to the main repositories.
<del>2. Change to the directory containing your Homebrew installation with `cd $(brew --repository)`.
<del>3. Add your pushable forked repository with `git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/brew.git`.
<add>2. Change to the directory containing your Homebrew installation:
<add> ```sh
<add> cd $(brew --repository)
<add> ```
<add>3. Add your pushable forked repository as a new remote:
<add> ```sh
<add> git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/brew.git
<add> ```
<ide> * `<YOUR_USERNAME>` is your GitHub username, not your local machine username.
<ide>
<ide> ### Formulae related pull request
<ide>
<ide> 1. [Fork the Homebrew/homebrew-core repository on GitHub](https://github.com/Homebrew/homebrew-core/fork).
<ide> * This creates a personal remote repository that you can push to. This is needed because only Homebrew maintainers have push access to the main repositories.
<del>2. Change to the directory containing Homebrew formulae with `cd $(brew --repository homebrew/core)`.
<del>3. Add your pushable forked repository with `git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/homebrew-core.git`
<add>2. Change to the directory containing Homebrew formulae:
<add> ```sh
<add> cd $(brew --repository homebrew/core)
<add> ```
<add>3. Add your pushable forked repository as a new remote:
<add> ```sh
<add> git remote add <YOUR_USERNAME> https://github.com/<YOUR_USERNAME>/homebrew-core.git
<add> ```
<ide> * `<YOUR_USERNAME>` is your GitHub username, not your local machine username.
<ide>
<ide> ## Create your pull request from a new branch
<ide>
<ide> To make a new branch and submit it for review, create a GitHub pull request with the following steps:
<ide>
<del>1. Check out the `master` branch with `git checkout master`.
<del>2. Retrieve new changes to the `master` branch with `brew update`.
<del>3. Create a new branch from the latest `master` branch with `git checkout -b <YOUR_BRANCH_NAME> origin/master`.
<add>1. Check out the `master` branch:
<add> ```sh
<add> git checkout master
<add> ```
<add>2. Retrieve new changes to the `master` branch:
<add> ```sh
<add> brew update
<add> ```
<add>3. Create a new branch from the latest `master` branch:
<add> ```sh
<add> git checkout -b <YOUR_BRANCH_NAME> origin/master
<add> ```
<ide> 4. Make your changes. For formulae, use `brew edit` or your favourite text editor, following all the guidelines in the [Formula Cookbook](Formula-Cookbook.md).
<ide> * If there's a `bottle do` block in the formula, don't remove or change it; we'll update it when we pull your PR.
<del>5. Test your changes by doing the following, and ensure they all pass without issue. For changed formulae, make sure you do the `brew audit` step while your changed formula is installed.
<del> * `brew tests`
<del> * `brew install --build-from-source <CHANGED_FORMULA>`
<del> * `brew test <CHANGED_FORMULA>`
<del> * `brew audit --strict <CHANGED_FORMULA>`
<del>6. Make a separate commit for each changed formula with `git add` and `git commit`.
<del>7. Upload your new commits to the branch on your fork with `git push --set-upstream <YOUR_USERNAME> <YOUR_BRANCH_NAME>`.
<add>5. Test your changes by running the following, and ensure they all pass without issue. For changed formulae, make sure you do the `brew audit` step while your changed formula is installed.
<add> ```sh
<add> brew tests
<add> brew install --build-from-source <CHANGED_FORMULA>
<add> brew test <CHANGED_FORMULA>
<add> brew audit --strict <CHANGED_FORMULA>
<add> ```
<add>6. [Make a separate commit](Formula-Cookbook.md#commit) for each changed formula with `git add` and `git commit`.
<add> * Please note that our preferred commit message format for simple version updates is "`<FORMULA_NAME> <NEW_VERSION>`", e.g. "`source-highlight 3.1.8`" but `devel` version updates should have the commit message suffixed with `(devel)`, e.g. "`nginx 1.9.1 (devel)`". If updating both `stable` and `devel`, the format should be a concatenation of these two forms, e.g. "`x264 r2699, r2705 (devel)`".
<add>7. Upload your branch of new commits to your fork:
<add> ```sh
<add> git push --set-upstream <YOUR_USERNAME> <YOUR_BRANCH_NAME>
<add> ```
<ide> 8. Go to the relevant repository (e.g. <https://github.com/Homebrew/brew>, <https://github.com/Homebrew/homebrew-core>, etc.) and create a pull request to request review and merging of the commits in your pushed branch. Explain why the change is needed and, if fixing a bug, how to reproduce the bug. Make sure you have done each step in the checklist that appears in your new PR.
<del> * Please note that our preferred commit message format for simple version updates is "`<FORMULA_NAME> <NEW_VERSION>`", e.g. "`source-highlight 3.1.8`". `devel` version updates should have the commit message suffixed with `(devel)`, e.g. "`nginx 1.9.1 (devel)`". If updating both `stable` and `devel`, the format should be a concatenation of these two forms, e.g. "`x264 r2699, r2705 (devel)`".
<ide> 9. Await feedback or a merge from Homebrew's maintainers. We typically respond to all PRs within a couple days, but it may take up to a week, depending on the maintainers' workload.
<ide>
<ide> Thank you!
<ide> To respond well to feedback:
<ide>
<ide> To make changes based on feedback:
<ide>
<del>1. Check out your branch again with `git checkout <YOUR_BRANCH_NAME>`.
<add>1. Check out your branch again:
<add> ```sh
<add> git checkout <YOUR_BRANCH_NAME>
<add> ```
<ide> 2. Make any requested changes and commit them with `git add` and `git commit`.
<del>3. Squash new commits into one commit per formula with `git rebase --interactive origin/master`.
<del>4. Push to your remote fork's branch and the pull request with `git push --force`.
<del>
<del>If you are working on a PR for a single formula, `git commit --amend` is a convenient way of keeping your commits squashed as you go.
<add>3. Squash new commits into one commit per formula:
<add> ```sh
<add> git rebase --interactive origin/master
<add> ```
<add> * If you are working on a PR for a single formula, `git commit --amend` is a convenient way of keeping your commits squashed as you go.
<add>4. Push to your remote fork's branch and the pull request:
<add> ```sh
<add> git push --force
<add> ```
<ide>
<ide> Once all feedback has been addressed and if it's a change we want to include (we include most changes), then we'll add your commit to Homebrew. Note that the PR status may show up as "Closed" instead of "Merged" because of the way we merge contributions. Don't worry: you will still get author credit in the actual merged commit.
<ide>
| 1
|
Python
|
Python
|
add predict_special_tokens option to gpt also
|
ce863365459d5c7b96fd1b5917bc9fb00f509d18
|
<ide><path>pytorch_pretrained_bert/modeling_openai.py
<ide> def __init__(
<ide> attn_pdrop=0.1,
<ide> layer_norm_epsilon=1e-5,
<ide> initializer_range=0.02,
<add> predict_special_tokens=True
<ide> ):
<ide> """Constructs OpenAIGPTConfig.
<ide>
<ide> def __init__(
<ide> layer_norm_epsilon: epsilon to use in the layer norm layers
<ide> initializer_range: The sttdev of the truncated_normal_initializer for
<ide> initializing all weight matrices.
<add> predict_special_tokens: should we predict special tokens (when the model has a LM head)
<ide> """
<ide> if isinstance(vocab_size_or_config_json_file, str) or (sys.version_info[0] == 2
<ide> and isinstance(vocab_size_or_config_json_file, unicode)):
<ide> def __init__(
<ide> self.attn_pdrop = attn_pdrop
<ide> self.layer_norm_epsilon = layer_norm_epsilon
<ide> self.initializer_range = initializer_range
<add> self.predict_special_tokens = predict_special_tokens
<ide> else:
<ide> raise ValueError(
<ide> "First argument must be either a vocabulary size (int)"
<ide> class OpenAIGPTLMHead(nn.Module):
<ide> def __init__(self, model_embeddings_weights, config):
<ide> super(OpenAIGPTLMHead, self).__init__()
<ide> self.n_embd = config.n_embd
<add> self.vocab_size = config.vocab_size
<add> self.predict_special_tokens = config.predict_special_tokens
<ide> embed_shape = model_embeddings_weights.shape
<ide> self.decoder = nn.Linear(embed_shape[1], embed_shape[0], bias=False)
<ide> self.set_embeddings_weights(model_embeddings_weights)
<ide>
<del> def set_embeddings_weights(self, model_embeddings_weights):
<add> def set_embeddings_weights(self, model_embeddings_weights, predict_special_tokens=True):
<add> self.predict_special_tokens = predict_special_tokens
<ide> embed_shape = model_embeddings_weights.shape
<ide> self.decoder.weight = model_embeddings_weights # Tied weights
<ide>
<ide> def forward(self, hidden_state):
<del> # Truncated Language modeling logits (we remove the last token)
<del> # h_trunc = h[:, :-1].contiguous().view(-1, self.n_embd)
<ide> lm_logits = self.decoder(hidden_state)
<add> if not self.predict_special_tokens:
<add> lm_logits = lm_logits[..., :self.vocab_size]
<ide> return lm_logits
<ide>
<ide>
<ide> def init_weights(self, module):
<ide> if isinstance(module, nn.Linear) and module.bias is not None:
<ide> module.bias.data.zero_()
<ide>
<del> def set_num_special_tokens(self, num_special_tokens):
<del> pass
<del>
<ide> @classmethod
<ide> def from_pretrained(
<ide> cls, pretrained_model_name_or_path, num_special_tokens=None, state_dict=None, cache_dir=None, from_tf=False, *inputs, **kwargs
<ide> def __init__(self, config, output_attentions=False):
<ide> self.h = nn.ModuleList([copy.deepcopy(block) for _ in range(config.n_layer)])
<ide>
<ide> self.apply(self.init_weights)
<del> # nn.init.normal_(self.embed.weight, std=0.02)
<ide>
<ide> def set_num_special_tokens(self, num_special_tokens):
<ide> " Update input embeddings with new embedding matrice if needed "
<ide> def __init__(self, config, output_attentions=False):
<ide> self.lm_head = OpenAIGPTLMHead(self.transformer.tokens_embed.weight, config)
<ide> self.apply(self.init_weights)
<ide>
<del> def set_num_special_tokens(self, num_special_tokens):
<add> def set_num_special_tokens(self, num_special_tokens, predict_special_tokens=True):
<ide> """ Update input and output embeddings with new embedding matrice
<ide> Make sure we are sharing the embeddings
<ide> """
<add> self.config.predict_special_tokens = self.transformer.config.predict_special_tokens = predict_special_tokens
<ide> self.transformer.set_num_special_tokens(num_special_tokens)
<del> self.lm_head.set_embeddings_weights(self.transformer.tokens_embed.weight)
<add> self.lm_head.set_embeddings_weights(self.transformer.tokens_embed.weight, predict_special_tokens=predict_special_tokens)
<ide>
<ide> def forward(self, input_ids, position_ids=None, token_type_ids=None, lm_labels=None):
<ide> hidden_states = self.transformer(input_ids, position_ids, token_type_ids)
<ide> def __init__(self, config, output_attentions=False):
<ide> self.multiple_choice_head = OpenAIGPTMultipleChoiceHead(config)
<ide> self.apply(self.init_weights)
<ide>
<del> def set_num_special_tokens(self, num_special_tokens):
<add> def set_num_special_tokens(self, num_special_tokens, predict_special_tokens=True):
<ide> """ Update input and output embeddings with new embedding matrice
<ide> Make sure we are sharing the embeddings
<ide> """
<add> self.config.predict_special_tokens = self.transformer.config.predict_special_tokens = predict_special_tokens
<ide> self.transformer.set_num_special_tokens(num_special_tokens)
<del> self.lm_head.set_embeddings_weights(self.transformer.tokens_embed.weight)
<add> self.lm_head.set_embeddings_weights(self.transformer.tokens_embed.weight, predict_special_tokens=predict_special_tokens)
<ide>
<ide> def forward(self, input_ids, mc_token_ids, lm_labels=None, mc_labels=None, token_type_ids=None, position_ids=None):
<ide> hidden_states = self.transformer(input_ids, position_ids, token_type_ids)
| 1
|
Python
|
Python
|
create virtualenv via python call
|
504294e4c231c4fe5b81c37d0a04c0832ce95503
|
<ide><path>airflow/utils/python_virtualenv.py
<ide> #
<ide> """Utilities for creating a virtual environment"""
<ide> import os
<add>import sys
<ide> from collections import deque
<ide> from typing import List, Optional
<ide>
<ide>
<ide>
<ide> def _generate_virtualenv_cmd(tmp_dir: str, python_bin: str, system_site_packages: bool) -> List[str]:
<del> cmd = ['virtualenv', tmp_dir]
<add> cmd = [sys.executable, '-m', 'virtualenv', tmp_dir]
<ide> if system_site_packages:
<ide> cmd.append('--system-site-packages')
<ide> if python_bin is not None:
<ide><path>tests/utils/test_python_virtualenv.py
<ide> # specific language governing permissions and limitations
<ide> # under the License.
<ide> #
<add>import sys
<ide> import unittest
<ide> from unittest import mock
<ide>
<ide> def test_should_create_virtualenv(self, mock_execute_in_subprocess):
<ide> venv_directory="/VENV", python_bin="pythonVER", system_site_packages=False, requirements=[]
<ide> )
<ide> assert "/VENV/bin/python" == python_bin
<del> mock_execute_in_subprocess.assert_called_once_with(['virtualenv', '/VENV', '--python=pythonVER'])
<add> mock_execute_in_subprocess.assert_called_once_with(
<add> [sys.executable, '-m', 'virtualenv', '/VENV', '--python=pythonVER']
<add> )
<ide>
<ide> @mock.patch('airflow.utils.python_virtualenv.execute_in_subprocess')
<ide> def test_should_create_virtualenv_with_system_packages(self, mock_execute_in_subprocess):
<ide> def test_should_create_virtualenv_with_system_packages(self, mock_execute_in_sub
<ide> )
<ide> assert "/VENV/bin/python" == python_bin
<ide> mock_execute_in_subprocess.assert_called_once_with(
<del> ['virtualenv', '/VENV', '--system-site-packages', '--python=pythonVER']
<add> [sys.executable, '-m', 'virtualenv', '/VENV', '--system-site-packages', '--python=pythonVER']
<ide> )
<ide>
<ide> @mock.patch('airflow.utils.python_virtualenv.execute_in_subprocess')
<ide> def test_should_create_virtualenv_with_extra_packages(self, mock_execute_in_subp
<ide> )
<ide> assert "/VENV/bin/python" == python_bin
<ide>
<del> mock_execute_in_subprocess.assert_any_call(['virtualenv', '/VENV', '--python=pythonVER'])
<add> mock_execute_in_subprocess.assert_any_call(
<add> [sys.executable, '-m', 'virtualenv', '/VENV', '--python=pythonVER']
<add> )
<ide>
<ide> mock_execute_in_subprocess.assert_called_with(['/VENV/bin/pip', 'install', 'apache-beam[gcp]'])
<ide>
| 2
|
PHP
|
PHP
|
add returntypewillchange attributes to iterators
|
9c62b50353040234cf3dfdaf4aaedba228d46713
|
<ide><path>src/Database/Query.php
<ide> use Closure;
<ide> use InvalidArgumentException;
<ide> use IteratorAggregate;
<add>use ReturnTypeWillChange;
<ide> use RuntimeException;
<ide>
<ide> /**
<ide> public function func(): FunctionsBuilder
<ide> * @return \Cake\Database\StatementInterface
<ide> * @psalm-suppress ImplementedReturnTypeMismatch
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function getIterator()
<ide> {
<ide> if ($this->_iterator === null || $this->_dirty) {
<ide><path>src/Database/Statement/BufferedStatement.php
<ide> use Cake\Database\StatementInterface;
<ide> use Cake\Database\TypeConverterTrait;
<ide> use Iterator;
<add>use ReturnTypeWillChange;
<ide>
<ide> /**
<ide> * A statement decorator that implements buffered results.
<ide> protected function _reset(): void
<ide> *
<ide> * @return mixed
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function key()
<ide> {
<ide> return $this->index;
<ide> public function key()
<ide> *
<ide> * @return mixed
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function current()
<ide> {
<ide> return $this->buffer[$this->index];
<ide><path>src/Database/Statement/StatementDecorator.php
<ide> use Cake\Database\TypeConverterTrait;
<ide> use Countable;
<ide> use IteratorAggregate;
<add>use ReturnTypeWillChange;
<ide>
<ide> /**
<ide> * Represents a database statement. Statements contains queries that can be
<ide> public function rowCount(): int
<ide> * @return \Cake\Database\StatementInterface
<ide> * @psalm-suppress ImplementedReturnTypeMismatch
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function getIterator()
<ide> {
<ide> if (!$this->_hasExecuted) {
<ide><path>src/Datasource/EntityTrait.php
<ide> use Cake\Utility\Hash;
<ide> use Cake\Utility\Inflector;
<ide> use InvalidArgumentException;
<add>use ReturnTypeWillChange;
<ide> use Traversable;
<ide>
<ide> /**
<ide> public function offsetExists($offset): bool
<ide> * @param string $offset The offset to get.
<ide> * @return mixed
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function &offsetGet($offset)
<ide> {
<ide> return $this->get($offset);
<ide><path>src/ORM/ResultSet.php
<ide> use Cake\Database\StatementInterface;
<ide> use Cake\Datasource\EntityInterface;
<ide> use Cake\Datasource\ResultSetInterface;
<add>use ReturnTypeWillChange;
<ide> use SplFixedArray;
<ide>
<ide> /**
<ide> public function __construct(Query $query, StatementInterface $statement)
<ide> *
<ide> * @return object|array
<ide> */
<add> #[ReturnTypeWillChange]
<ide> public function current()
<ide> {
<ide> return $this->_current;
| 5
|
Python
|
Python
|
defer ctypes imports in _dtypes_ctypes module
|
66cb824d8a266077841e27df955235886f118139
|
<ide><path>numpy/core/_dtype_ctypes.py
<ide> class DummyStruct(ctypes.Structure):
<ide> * PEP3118 cannot represent unions, but both numpy and ctypes can
<ide> * ctypes cannot handle big-endian structs with PEP3118 (bpo-32780)
<ide> """
<del>import _ctypes
<del>import ctypes
<ide>
<ide> import numpy as np
<ide>
<ide> def _from_ctypes_structure(t):
<ide> "ctypes bitfields have no dtype equivalent")
<ide>
<ide> if hasattr(t, "_pack_"):
<add> import ctypes
<ide> formats = []
<ide> offsets = []
<ide> names = []
<ide> def _from_ctypes_scalar(t):
<ide>
<ide>
<ide> def _from_ctypes_union(t):
<add> import ctypes
<ide> formats = []
<ide> offsets = []
<ide> names = []
<ide> def dtype_from_ctypes_type(t):
<ide> """
<ide> Construct a dtype object from a ctypes type
<ide> """
<add> import _ctypes
<ide> if issubclass(t, _ctypes.Array):
<ide> return _from_ctypes_array(t)
<ide> elif issubclass(t, _ctypes._Pointer):
| 1
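The numpy patch above moves the `ctypes` and `_ctypes` imports from module scope into the functions that need them, so importing the dtype-conversion module stays cheap when ctypes is never actually used. A minimal sketch of the same deferred-import technique (the function and field names here are illustrative, not numpy's):

```python
def describe_struct(field_names):
    # Deferred import: ctypes is only loaded the first time this
    # function runs, not when the enclosing module is imported.
    import ctypes

    class _Struct(ctypes.Structure):
        _fields_ = [(name, ctypes.c_int) for name in field_names]

    return ctypes.sizeof(_Struct)
```

Subsequent calls stay cheap because importing an already-loaded module is just a dictionary lookup in `sys.modules`.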
|
Go
|
Go
|
fix path problems in testbuildrenameddockerfile
|
967d85a28fa1e9a8ac4d668960bca8760af2b722
|
<ide><path>integration-cli/docker_cli_build_test.go
<ide> func TestBuildRenamedDockerfile(t *testing.T) {
<ide> t.Fatalf("test1 should have used Dockerfile, output:%s", out)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", "-f", "files/Dockerfile", "-t", "test2", ".")
<add> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", "-f", filepath.Join("files", "Dockerfile"), "-t", "test2", ".")
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide> if !strings.Contains(out, "from files/Dockerfile") {
<ide> t.Fatalf("test2 should have used files/Dockerfile, output:%s", out)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", "--file=files/dFile", "-t", "test3", ".")
<add> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", fmt.Sprintf("--file=%s", filepath.Join("files", "dFile")), "-t", "test3", ".")
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide> func TestBuildRenamedDockerfile(t *testing.T) {
<ide> t.Fatalf("test4 should have used dFile, output:%s", out)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", "--file=/etc/passwd", "-t", "test5", ".")
<add> dirWithNoDockerfile, _ := ioutil.TempDir(os.TempDir(), "test5")
<add> nonDockerfileFile := filepath.Join(dirWithNoDockerfile, "notDockerfile")
<add> if _, err = os.Create(nonDockerfileFile); err != nil {
<add> t.Fatal(err)
<add> }
<add> out, _, err = dockerCmdInDir(t, ctx.Dir, "build", fmt.Sprintf("--file=%s", nonDockerfileFile), "-t", "test5", ".")
<add>
<ide> if err == nil {
<ide> t.Fatalf("test5 was supposed to fail to find passwd")
<ide> }
<del> if !strings.Contains(out, "The Dockerfile (/etc/passwd) must be within the build context (.)") {
<del> t.Fatalf("test5 - wrong error message for passwd:%v", out)
<add>
<add> if expected := fmt.Sprintf("The Dockerfile (%s) must be within the build context (.)", strings.Replace(nonDockerfileFile, `\`, `\\`, -1)); !strings.Contains(out, expected) {
<add> t.Fatalf("wrong error message:%v\nexpected to contain=%v", out, expected)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, ctx.Dir+"/files", "build", "-f", "../Dockerfile", "-t", "test6", "..")
<add> out, _, err = dockerCmdInDir(t, filepath.Join(ctx.Dir, "files"), "build", "-f", filepath.Join("..", "Dockerfile"), "-t", "test6", "..")
<ide> if err != nil {
<ide> t.Fatalf("test6 failed: %s", err)
<ide> }
<ide> if !strings.Contains(out, "from Dockerfile") {
<ide> t.Fatalf("test6 should have used root Dockerfile, output:%s", out)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, filepath.Join(ctx.Dir, "files"), "build", "-f", ctx.Dir+"/files/Dockerfile", "-t", "test7", "..")
<add> out, _, err = dockerCmdInDir(t, filepath.Join(ctx.Dir, "files"), "build", "-f", filepath.Join(ctx.Dir, "files", "Dockerfile"), "-t", "test7", "..")
<ide> if err != nil {
<ide> t.Fatalf("test7 failed: %s", err)
<ide> }
<ide> if !strings.Contains(out, "from files/Dockerfile") {
<ide> t.Fatalf("test7 should have used files Dockerfile, output:%s", out)
<ide> }
<ide>
<del> out, _, err = dockerCmdInDir(t, ctx.Dir+"/files", "build", "-f", "../Dockerfile", "-t", "test8", ".")
<add> out, _, err = dockerCmdInDir(t, filepath.Join(ctx.Dir, "files"), "build", "-f", filepath.Join("..", "Dockerfile"), "-t", "test8", ".")
<ide> if err == nil || !strings.Contains(out, "must be within the build context") {
<ide> t.Fatalf("test8 should have failed with Dockerfile out of context: %s", err)
<ide> }
<ide>
<ide> tmpDir := os.TempDir()
<del>
<ide> out, _, err = dockerCmdInDir(t, tmpDir, "build", "-t", "test9", ctx.Dir)
<ide> if err != nil {
<ide> t.Fatalf("test9 - failed: %s", err)
| 1
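The docker test fix above replaces hard-coded `/`-separated paths with `filepath.Join` so the suite also passes on Windows. The same concern applies in any language; a Python sketch of building the `--file=` argument portably (the flag name mirrors the test above, the helper is hypothetical):

```python
import os.path

def dockerfile_arg(*parts):
    # Build the path with the platform's separator instead of
    # hard-coding "/" (which breaks on Windows).
    return "--file=" + os.path.join(*parts)
```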
|
Text
|
Text
|
add note regarding unfinished tla
|
f65bbce90b1574588275dc16d620d9a3ba2680ad
|
<ide><path>doc/api/esm.md
<ide> would provide the exports interface for the instantiation of `module.wasm`.
<ide>
<ide> ## Top-level `await`
<ide>
<add><!--
<add>added: v14.8.0
<add>-->
<add>
<ide> > Stability: 1 - Experimental
<ide>
<del>The `await` keyword may be used in the top level (outside of async functions)
<del>within modules as per the [ECMAScript Top-Level `await` proposal][].
<add>The `await` keyword may be used in the top level body of an ECMAScript module.
<ide>
<ide> Assuming an `a.mjs` with
<ide>
<del><!-- eslint-skip -->
<del>
<ide> ```js
<ide> export const five = await Promise.resolve(5);
<ide> ```
<ide> console.log(five); // Logs `5`
<ide> node b.mjs # works
<ide> ```
<ide>
<add>If a top level `await` expression never resolves, the `node` process will exit
<add>with a `13` [status code][].
<add>
<add>```js
<add>import { spawn } from 'child_process';
<add>import { execPath } from 'process';
<add>
<add>spawn(execPath, [
<add> '--input-type=module',
<add> '--eval',
<add> // Never-resolving Promise:
<add> 'await new Promise(() => {})',
<add>]).once('exit', (code) => {
<add> console.log(code); // Logs `13`
<add>});
<add>```
<add>
<ide> <i id="esm_experimental_loaders"></i>
<ide>
<ide> ## Loaders
<ide> success!
<ide> [Conditional exports]: packages.md#conditional-exports
<ide> [Core modules]: modules.md#core-modules
<ide> [Dynamic `import()`]: https://wiki.developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#Dynamic_Imports
<del>[ECMAScript Top-Level `await` proposal]: https://github.com/tc39/proposal-top-level-await/
<ide> [ES Module Integration Proposal for WebAssembly]: https://github.com/webassembly/esm-integration
<ide> [Import Assertions]: #import-assertions
<ide> [Import Assertions proposal]: https://github.com/tc39/proposal-import-assertions
<ide> success!
<ide> [percent-encoded]: url.md#percent-encoding-in-urls
<ide> [resolve hook]: #resolvespecifier-context-defaultresolve
<ide> [special scheme]: https://url.spec.whatwg.org/#special-scheme
<add>[status code]: process.md#exit-codes
<ide> [the official standard format]: https://tc39.github.io/ecma262/#sec-modules
<ide> [url.pathToFileURL]: url.md#urlpathtofileurlpath
| 1
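The Node docs patch above adds an example that spawns a child process and checks its exit status (`13` for an unresolved top-level `await`). A comparable check in Python, spawning a child interpreter and reading back its exit code (the `13` is just the status being demonstrated, nothing Python-specific):

```python
import subprocess
import sys

def run_and_get_exit_code(code_snippet):
    # Run a short script in a child interpreter and report how it exited.
    result = subprocess.run([sys.executable, "-c", code_snippet])
    return result.returncode

# Exits with status 13, mirroring the exit code asserted in the patch above.
exit_code = run_and_get_exit_code("import sys; sys.exit(13)")
```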
|
Go
|
Go
|
remove swarm inspect and use info instead
|
e6923f6d75c2bd1b22cc1229214ffceca3251cc6
|
<ide><path>api/client/swarm/cmd.go
<ide> func NewSwarmCommand(dockerCli *client.DockerCli) *cobra.Command {
<ide> newJoinTokenCommand(dockerCli),
<ide> newUpdateCommand(dockerCli),
<ide> newLeaveCommand(dockerCli),
<del> newInspectCommand(dockerCli),
<ide> )
<ide> return cmd
<ide> }
<ide><path>api/client/swarm/inspect.go
<del>package swarm
<del>
<del>import (
<del> "golang.org/x/net/context"
<del>
<del> "github.com/docker/docker/api/client"
<del> "github.com/docker/docker/api/client/inspect"
<del> "github.com/docker/docker/cli"
<del> "github.com/spf13/cobra"
<del>)
<del>
<del>type inspectOptions struct {
<del> format string
<del>}
<del>
<del>func newInspectCommand(dockerCli *client.DockerCli) *cobra.Command {
<del> var opts inspectOptions
<del>
<del> cmd := &cobra.Command{
<del> Use: "inspect [OPTIONS]",
<del> Short: "Inspect the swarm",
<del> Args: cli.NoArgs,
<del> RunE: func(cmd *cobra.Command, args []string) error {
<del> return runInspect(dockerCli, opts)
<del> },
<del> }
<del>
<del> flags := cmd.Flags()
<del> flags.StringVarP(&opts.format, "format", "f", "", "Format the output using the given go template")
<del> return cmd
<del>}
<del>
<del>func runInspect(dockerCli *client.DockerCli, opts inspectOptions) error {
<del> client := dockerCli.Client()
<del> ctx := context.Background()
<del>
<del> swarm, err := client.SwarmInspect(ctx)
<del> if err != nil {
<del> return err
<del> }
<del>
<del> getRef := func(_ string) (interface{}, []byte, error) {
<del> return swarm, nil, nil
<del> }
<del>
<del> return inspect.Inspect(dockerCli.Out(), []string{""}, opts.format, getRef)
<del>}
<ide><path>api/client/system/info.go
<ide> package system
<ide> import (
<ide> "fmt"
<ide> "strings"
<add> "time"
<ide>
<ide> "golang.org/x/net/context"
<ide>
<ide> func NewInfoCommand(dockerCli *client.DockerCli) *cobra.Command {
<ide> }
<ide>
<ide> func runInfo(dockerCli *client.DockerCli) error {
<del> info, err := dockerCli.Client().Info(context.Background())
<add> ctx := context.Background()
<add> info, err := dockerCli.Client().Info(ctx)
<ide> if err != nil {
<ide> return err
<ide> }
<ide> func runInfo(dockerCli *client.DockerCli) error {
<ide> }
<ide> fmt.Fprintf(dockerCli.Out(), " Is Manager: %v\n", info.Swarm.ControlAvailable)
<ide> if info.Swarm.ControlAvailable {
<add> fmt.Fprintf(dockerCli.Out(), " ClusterID: %s\n", info.Swarm.Cluster.ID)
<ide> fmt.Fprintf(dockerCli.Out(), " Managers: %d\n", info.Swarm.Managers)
<ide> fmt.Fprintf(dockerCli.Out(), " Nodes: %d\n", info.Swarm.Nodes)
<add> fmt.Fprintf(dockerCli.Out(), " Name: %s\n", info.Swarm.Cluster.Spec.Annotations.Name)
<add> fmt.Fprintf(dockerCli.Out(), " Orchestration:\n")
<add> fmt.Fprintf(dockerCli.Out(), " Task History Retention: %d\n", info.Swarm.Cluster.Spec.Orchestration.TaskHistoryRetentionLimit)
<add> fmt.Fprintf(dockerCli.Out(), " Raft:\n")
<add> fmt.Fprintf(dockerCli.Out(), " Snapshot interval: %d\n", info.Swarm.Cluster.Spec.Raft.SnapshotInterval)
<add> fmt.Fprintf(dockerCli.Out(), " Heartbeat tick: %d\n", info.Swarm.Cluster.Spec.Raft.HeartbeatTick)
<add> fmt.Fprintf(dockerCli.Out(), " Election tick: %d\n", info.Swarm.Cluster.Spec.Raft.ElectionTick)
<add> fmt.Fprintf(dockerCli.Out(), " Dispatcher:\n")
<add> fmt.Fprintf(dockerCli.Out(), " Heartbeat period: %s\n", units.HumanDuration(time.Duration(info.Swarm.Cluster.Spec.Dispatcher.HeartbeatPeriod)))
<add> fmt.Fprintf(dockerCli.Out(), " CA configuration:\n")
<add> fmt.Fprintf(dockerCli.Out(), " Expiry duration: %s\n", units.HumanDuration(info.Swarm.Cluster.Spec.CAConfig.NodeCertExpiry))
<ide> }
<ide> fmt.Fprintf(dockerCli.Out(), " Node Address: %s\n", info.Swarm.NodeAddr)
<ide> }
<ide><path>daemon/cluster/cluster.go
<ide> func (c *Cluster) Info() types.Info {
<ide>
<ide> if c.isActiveManager() {
<ide> info.ControlAvailable = true
<add> swarm, err := c.Inspect()
<add> if err != nil {
<add> info.Error = err.Error()
<add> }
<add> info.Cluster = swarm
<ide> if r, err := c.client.ListNodes(ctx, &swarmapi.ListNodesRequest{}); err == nil {
<ide> info.Nodes = len(r.Nodes)
<ide> for _, n := range r.Nodes {
<ide><path>integration-cli/daemon_swarm.go
<ide> func (d *SwarmDaemon) listServices(c *check.C) []swarm.Service {
<ide> return services
<ide> }
<ide>
<del>func (d *SwarmDaemon) updateSwarm(c *check.C, f ...specConstructor) {
<add>func (d *SwarmDaemon) getSwarm(c *check.C) swarm.Swarm {
<ide> var sw swarm.Swarm
<ide> status, out, err := d.SockRequest("GET", "/swarm", nil)
<ide> c.Assert(err, checker.IsNil)
<ide> c.Assert(status, checker.Equals, http.StatusOK, check.Commentf("output: %q", string(out)))
<ide> c.Assert(json.Unmarshal(out, &sw), checker.IsNil)
<add> return sw
<add>}
<ide>
<add>func (d *SwarmDaemon) updateSwarm(c *check.C, f ...specConstructor) {
<add> sw := d.getSwarm(c)
<ide> for _, fn := range f {
<ide> fn(&sw.Spec)
<ide> }
<ide> url := fmt.Sprintf("/swarm/update?version=%d", sw.Version.Index)
<del> status, out, err = d.SockRequest("POST", url, sw.Spec)
<add> status, out, err := d.SockRequest("POST", url, sw.Spec)
<ide> c.Assert(err, checker.IsNil)
<ide> c.Assert(status, checker.Equals, http.StatusOK, check.Commentf("output: %q", string(out)))
<ide> }
<ide><path>integration-cli/docker_cli_swarm_test.go
<ide> package main
<ide>
<ide> import (
<del> "encoding/json"
<ide> "io/ioutil"
<ide> "strings"
<ide> "time"
<ide> func (s *DockerSwarmSuite) TestSwarmUpdate(c *check.C) {
<ide> d := s.AddDaemon(c, true, true)
<ide>
<ide> getSpec := func() swarm.Spec {
<del> out, err := d.Cmd("swarm", "inspect")
<del> c.Assert(err, checker.IsNil)
<del> var sw []swarm.Swarm
<del> c.Assert(json.Unmarshal([]byte(out), &sw), checker.IsNil)
<del> c.Assert(len(sw), checker.Equals, 1)
<del> return sw[0].Spec
<add> sw := d.getSwarm(c)
<add> return sw.Spec
<ide> }
<ide>
<ide> out, err := d.Cmd("swarm", "update", "--cert-expiry", "30h", "--dispatcher-heartbeat", "11s")
<ide> func (s *DockerSwarmSuite) TestSwarmInit(c *check.C) {
<ide> d := s.AddDaemon(c, false, false)
<ide>
<ide> getSpec := func() swarm.Spec {
<del> out, err := d.Cmd("swarm", "inspect")
<del> c.Assert(err, checker.IsNil)
<del> var sw []swarm.Swarm
<del> c.Assert(json.Unmarshal([]byte(out), &sw), checker.IsNil)
<del> c.Assert(len(sw), checker.Equals, 1)
<del> return sw[0].Spec
<add> sw := d.getSwarm(c)
<add> return sw.Spec
<ide> }
<ide>
<ide> out, err := d.Cmd("swarm", "init", "--cert-expiry", "30h", "--dispatcher-heartbeat", "11s")
<ide><path>vendor/src/github.com/docker/engine-api/types/swarm/swarm.go
<ide> type Info struct {
<ide> RemoteManagers []Peer
<ide> Nodes int
<ide> Managers int
<add>
<add> Cluster Swarm
<ide> }
<ide>
<ide> // Peer represents a peer.
| 7
|
Text
|
Text
|
pass 2 over testing guide
|
6026b37ffc1a9b5d3ea1c1c7812a3950495d7877
|
<ide><path>guides/source/testing.md
<ide> A dedicated test database allows you to set up and interact with test data in is
<ide>
<ide> In order to run your tests, your test database will need to have the current
<ide> structure. The test helper checks whether your test database has any pending
<del>migrations. If so, it will try to load your `db/schema.rb` or `db/structure.sql`
<add>migrations. It will try to load your `db/schema.rb` or `db/structure.sql`
<ide> into the test database. If migrations are still pending, an error will be
<ide> raised. Usually this indicates that your schema is not fully migrated. Running
<ide> the migrations against the development database (`bin/rails db:migrate`) will
<ide> bring the schema up to date.
<ide>
<del>NOTE: If existing migrations required modifications, the test database needs to
<add>NOTE: If there were modifications to existing migrations, the test database needs to
<ide> be rebuilt. This can be done by executing `bin/rails db:test:prepare`.
<ide>
<ide> ### The Low-Down on Fixtures
<ide> about:
<ide> name: About
<ide>
<ide> # In fixtures/articles.yml
<del>one:
<add>first:
<ide> title: Welcome to Rails!
<ide> body: Hello world!
<ide> category: about
<ide> ```
<ide>
<del>Notice the `category` key of the `one` article found in `fixtures/articles.yml` has a value of `about`. This tells Rails to load the category `about` found in `fixtures/categories.yml`.
<add>Notice the `category` key of the `first` article found in `fixtures/articles.yml` has a value of `about`. This tells Rails to load the category `about` found in `fixtures/categories.yml`.
<ide>
<del>NOTE: For associations to reference one another by name, you cannot specify the `id:` attribute on the associated fixtures. Rails will auto assign a primary key to be consistent between runs. For more information on this association behavior please read the [Fixtures API documentation](http://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html).
<add>NOTE: For associations to reference one another by name, you can use the fixture name instead of specifying the `id:` attribute on the associated fixtures. Rails will auto assign a primary key to be consistent between runs. For more information on this association behavior please read the [Fixtures API documentation](http://api.rubyonrails.org/classes/ActiveRecord/FixtureSet.html).
<ide>
<ide> #### ERB'in It Up
<ide>
<ide> Model tests don't have their own superclass like `ActionMailer::TestCase` instea
<ide> Integration Testing
<ide> -------------------
<ide>
<del>Integration tests are used to test how various parts of your application interact. They are generally used to test important workflows within your application.
<add>Integration tests are used to test how various parts of your application interact. They are generally used to test important workflows within our application.
<ide>
<del>For creating Rails integration tests, we use the 'test/integration' directory for your application. Rails provides a generator to create an integration test skeleton for you.
<add>For creating Rails integration tests, we use the 'test/integration' directory for our application. Rails provides a generator to create an integration test skeleton for us.
<ide>
<ide> ```bash
<ide> $ bin/rails generate integration_test user_flows
<ide> class UserFlowsTest < ActionDispatch::IntegrationTest
<ide> end
<ide> ```
<ide>
<del>Inheriting from `ActionDispatch::IntegrationTest` comes with some advantages. This makes available some additional helpers to use in your integration tests.
<add>Here the test is inheriting from `ActionDispatch::IntegrationTest`. This makes some additional helpers available for us to use in our integration tests.
<ide>
<ide> ### Helpers Available for Integration Tests
<ide>
<del>In addition to the standard testing helpers, inheriting `ActionDispatch::IntegrationTest` comes with some additional helpers available when writing integration tests. Let's briefly introduce you to the three categories of helpers you get to choose from.
<add>In addition to the standard testing helpers, inheriting from `ActionDispatch::IntegrationTest` comes with some additional helpers available when writing integration tests. Let's get briefly introduced to the three categories of helpers we get to choose from.
<ide>
<ide> For dealing with the integration test runner, see [`ActionDispatch::Integration::Runner`](http://api.rubyonrails.org/classes/ActionDispatch/Integration/Runner.html).
<ide>
<del>When performing requests, you will have [`ActionDispatch::Integration::RequestHelpers`](http://api.rubyonrails.org/classes/ActionDispatch/Integration/RequestHelpers.html) available for your use.
<add>When performing requests, we will have [`ActionDispatch::Integration::RequestHelpers`](http://api.rubyonrails.org/classes/ActionDispatch/Integration/RequestHelpers.html) available for our use.
<ide>
<del>If you'd like to modify the session, or state of your integration test you should look for [`ActionDispatch::Integration::Session`](http://api.rubyonrails.org/classes/ActionDispatch/Integration/Session.html) to help.
<add>If we need to modify the session, or state of our integration test, take a look at [`ActionDispatch::Integration::Session`](http://api.rubyonrails.org/classes/ActionDispatch/Integration/Session.html) to help.
<ide>
<ide> ### Implementing an integration test
<ide>
<ide> $ bin/rails generate integration_test blog_flow
<ide> ```
<ide>
<ide> It should have created a test file placeholder for us. With the output of the
<del>previous command you should see:
<add>previous command we should see:
<ide>
<ide> ```bash
<ide> invoke test_unit
<ide> class BlogFlowTest < ActionDispatch::IntegrationTest
<ide> end
<ide> ```
<ide>
<del>If you remember from earlier in the "Testing Views" section we covered `assert_select` to query the resulting HTML of a request.
<add>We will take a look at `assert_select` to query the resulting HTML of a request in the "Testing Views" section below. It is used for testing the response of our request by asserting the presence of key HTML elements and their content.
<ide>
<del>When visit our root path, we should see `welcome/index.html.erb` rendered for the view. So this assertion should pass.
<add>When we visit our root path, we should see `welcome/index.html.erb` rendered for the view. So this assertion should pass.
<ide>
<ide> #### Creating articles integration
<ide>
| 1
|
PHP
|
PHP
|
remove broken test
|
c9246fb5db7bccae0039a64040587cd70030a5d3
|
<ide><path>src/Illuminate/Database/Eloquent/Relations/Pivot.php
<ide> public static function fromRawAttributes(Model $parent, $attributes, $table, $ex
<ide>
<ide> $instance->setRawAttributes($attributes, true);
<ide>
<add> $instance->timestamps = $instance->hasTimestampAttributes();
<add>
<ide> return $instance;
<ide> }
<ide>
<ide><path>tests/Database/DatabaseEloquentPivotTest.php
<ide> public function testTimestampPropertyIsSetIfCreatedAtInAttributes()
<ide> $this->assertFalse($pivot->timestamps);
<ide> }
<ide>
<add> public function testTimestampPropertyIsTrueWhenCreatingFromRawAttributes()
<add> {
<add> $parent = m::mock('Illuminate\Database\Eloquent\Model[getConnectionName,getDates]');
<add> $parent->shouldReceive('getConnectionName')->andReturn('connection');
<add> $pivot = Pivot::fromRawAttributes($parent, ['foo' => 'bar', 'created_at' => 'foo'], 'table');
<add> $this->assertTrue($pivot->timestamps);
<add> }
<add>
<ide> public function testKeysCanBeSetProperly()
<ide> {
<ide> $parent = m::mock('Illuminate\Database\Eloquent\Model[getConnectionName]');
<ide><path>tests/Filesystem/FilesystemAdapterTest.php
<ide> public function testResponse()
<ide>
<ide> $this->assertInstanceOf(StreamedResponse::class, $response);
<ide> $this->assertEquals('Hello World', $content);
<del> $this->assertEquals('inline; filename="file.txt"', $response->headers->get('content-disposition'));
<add> $this->assertEquals('inline; filename=file.txt', $response->headers->get('content-disposition'));
<ide> }
<ide>
<ide> public function testDownload()
<ide> public function testDownload()
<ide> $files = new FilesystemAdapter($this->filesystem);
<ide> $response = $files->download('file.txt', 'hello.txt');
<ide> $this->assertInstanceOf(StreamedResponse::class, $response);
<del> $this->assertEquals('attachment; filename="hello.txt"', $response->headers->get('content-disposition'));
<add> $this->assertEquals('attachment; filename=hello.txt', $response->headers->get('content-disposition'));
<ide> }
<ide>
<ide> public function testExists()
<ide><path>tests/Integration/Console/ConsoleApplicationTest.php
<ide> public function test_artisan_call_using_command_class()
<ide> /**
<ide> * @expectedException \Symfony\Component\Console\Exception\CommandNotFoundException
<ide> */
<del> public function test_artisan_call_invalid_command_name()
<del> {
<del> $this->artisan('foo:bars');
<del> }
<add> // public function test_artisan_call_invalid_command_name()
<add> // {
<add> // $this->artisan('foo:bars');
<add> // }
<ide> }
<ide>
<ide> class FooCommandStub extends Command
| 4
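The Eloquent fix above makes `Pivot::fromRawAttributes` recompute its `timestamps` flag from the attributes it was handed, so rows carrying `created_at` get timestamp handling. A Python sketch of the same idea, assuming a simplified stand-in class (names are illustrative, not Laravel's API):

```python
class Pivot:
    TIMESTAMP_KEYS = ("created_at", "updated_at")

    def __init__(self, attributes):
        self.attributes = dict(attributes)
        # Enable timestamp handling only when the raw data actually
        # carries timestamp columns, mirroring hasTimestampAttributes().
        self.timestamps = any(k in self.attributes for k in self.TIMESTAMP_KEYS)

    @classmethod
    def from_raw_attributes(cls, attributes):
        return cls(attributes)
```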
|
Java
|
Java
|
cache the instance of `choreographercompat`
|
7f6254be43ead1a69e3b7d3cdb30a4be04933be6
|
<ide><path>ReactAndroid/src/androidTest/java/com/facebook/react/testing/idledetection/ReactIdleDetectionUtil.java
<ide>
<ide> package com.facebook.react.testing.idledetection;
<ide>
<add>import android.view.Choreographer;
<ide> import java.util.concurrent.CountDownLatch;
<ide> import java.util.concurrent.TimeUnit;
<ide>
<ide> private static void waitForChoreographer(long timeToWait) {
<ide> new Runnable() {
<ide> @Override
<ide> public void run() {
<del> ChoreographerCompat.getInstance().postFrameCallback(
<add> final ChoreographerCompat choreographerCompat = ChoreographerCompat.getInstance();
<add> choreographerCompat.postFrameCallback(
<ide> new ChoreographerCompat.FrameCallback() {
<ide>
<ide> private int frameCount = 0;
<ide> public void doFrame(long frameTimeNanos) {
<ide> if (frameCount == waitFrameCount) {
<ide> latch.countDown();
<ide> } else {
<del> ChoreographerCompat.getInstance().postFrameCallback(this);
<add> choreographerCompat.postFrameCallback(this);
<ide> }
<ide> }
<ide> });
| 1
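The React Native patch above hoists `ChoreographerCompat.getInstance()` out of the frame callback so the lookup happens once rather than on every frame. The pattern is general; a Python sketch that counts lookups to show the effect (the scheduler class is a hypothetical stand-in):

```python
class Scheduler:
    """Stand-in for a singleton with a non-trivial get_instance() lookup."""
    _instance = None
    lookups = 0

    @classmethod
    def get_instance(cls):
        cls.lookups += 1
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def post(self, callback):
        callback()

def run_frames(n):
    # Cache the instance once and reuse it inside the loop, instead of
    # calling get_instance() on every frame.
    scheduler = Scheduler.get_instance()
    for _ in range(n):
        scheduler.post(lambda: None)

run_frames(100)
lookup_count = Scheduler.lookups
```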
|
PHP
|
PHP
|
add resource defaults to router
|
1a774e9446ecc0dd85fe337f7eda2c49f18345ab
|
<ide><path>src/Illuminate/Routing/Router.php
<ide> class Router implements HttpKernelInterface, RouteFiltererInterface {
<ide> */
<ide> public static $verbs = array('GET', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS');
<ide>
<add> /**
<add> * The default actions for a resourceful controller.
<add> *
<add> * @var array
<add> */
<add> protected $resourceDefaults = array('index', 'create', 'store', 'show', 'edit', 'update', 'destroy');
<add>
<ide> /**
<ide> * Create a new Router instance.
<ide> *
| 1
|
Text
|
Text
|
improve server.listen() random port
|
66af6a902887b7340d573436b4c1bbe2486f5ace
|
<ide><path>doc/api/http.md
<ide> Start a UNIX socket server listening for connections on the given `path`.
<ide> This function is asynchronous. `callback` will be added as a listener for the
<ide> [`'listening'`][] event. See also [`net.Server.listen(path)`][].
<ide>
<del>### server.listen(port[, hostname][, backlog][, callback])
<add>### server.listen([port][, hostname][, backlog][, callback])
<ide> <!-- YAML
<ide> added: v0.1.90
<ide> -->
<ide>
<ide> Begin accepting connections on the specified `port` and `hostname`. If the
<ide> `hostname` is omitted, the server will accept connections on any IPv6 address
<del>(`::`) when IPv6 is available, or any IPv4 address (`0.0.0.0`) otherwise. Use a
<del>port value of `0` to have the operating system assign an available port.
<add>(`::`) when IPv6 is available, or any IPv4 address (`0.0.0.0`) otherwise.
<add>Omit the port argument, or use a port value of `0`, to have the operating system
<add>assign a random port, which can be retrieved by using `server.address().port`
<add>after the `'listening'` event has been emitted.
<ide>
<ide> To listen to a unix socket, supply a filename instead of port and hostname.
<ide>
<ide><path>doc/api/net.md
<ide> var server = net.createServer((socket) => {
<ide>
<ide> // grab a random port.
<ide> server.listen(() => {
<del> address = server.address();
<del> console.log('opened server on %j', address);
<add> console.log('opened server on', server.address());
<ide> });
<ide> ```
<ide>
<ide> The last parameter `callback` will be added as a listener for the
<ide> [`'listening'`][] event.
<ide>
<ide> The parameter `backlog` behaves the same as in
<del>[`server.listen(port[, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<add>[`server.listen([port][, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<ide>
<ide> ### server.listen(options[, callback])
<ide> <!-- YAML
<ide> added: v0.11.14
<ide>
<ide> The `port`, `host`, and `backlog` properties of `options`, as well as the
<ide> optional callback function, behave as they do on a call to
<del>[`server.listen(port[, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<add>[`server.listen([port][, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<ide> Alternatively, the `path` option can be used to specify a UNIX socket.
<ide>
<ide> If `exclusive` is `false` (default), then cluster workers will use the same
<ide> double-backslashes, such as:
<ide> path.join('\\\\?\\pipe', process.cwd(), 'myctl'))
<ide>
<ide> The parameter `backlog` behaves the same as in
<del>[`server.listen(port[, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<add>[`server.listen([port][, hostname][, backlog][, callback])`][`server.listen(port, host, backlog, callback)`].
<ide>
<del>### server.listen(port[, hostname][, backlog][, callback])
<add>### server.listen([port][, hostname][, backlog][, callback])
<ide> <!-- YAML
<ide> added: v0.1.90
<ide> -->
<ide>
<ide> Begin accepting connections on the specified `port` and `hostname`. If the
<ide> `hostname` is omitted, the server will accept connections on any IPv6 address
<del>(`::`) when IPv6 is available, or any IPv4 address (`0.0.0.0`) otherwise. Use a
<del>port value of `0` to have the operating system assign an available port.
<add>(`::`) when IPv6 is available, or any IPv4 address (`0.0.0.0`) otherwise.
<add>Omit the port argument, or use a port value of `0`, to have the operating system
<add>assign a random port, which can be retrieved by using `server.address().port`
<add>after the `'listening'` event has been emitted.
<ide>
<ide> Backlog is the maximum length of the queue of pending connections.
<ide> The actual length will be determined by the OS through sysctl settings such as
| 2
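The Node docs patch above clarifies that listening on port `0` (or omitting the port) asks the OS for a free port, which can then be read back via `server.address().port`. The same trick works with raw sockets; a Python sketch:

```python
import socket

def bind_random_port():
    # Bind to port 0: the OS picks a free ephemeral port, which we can
    # read back from the socket, like server.address().port in Node.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    sock.close()
    return port

assigned_port = bind_random_port()
```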
|
Javascript
|
Javascript
|
add internal genericnodeerror() function
|
bd86e5186a33803aa9283b9a4c6946da33b67511
|
<ide><path>lib/buffer.js
<ide> const {
<ide> Array,
<ide> ArrayIsArray,
<ide> ArrayPrototypeForEach,
<del> Error,
<ide> MathFloor,
<ide> MathMin,
<ide> MathTrunc,
<ide> const {
<ide> ERR_MISSING_ARGS,
<ide> ERR_UNKNOWN_ENCODING
<ide> },
<del> hideStackFrames
<add> genericNodeError,
<add> hideStackFrames,
<ide> } = require('internal/errors');
<ide> const {
<ide> validateArray,
<ide> if (internalBinding('config').hasIntl) {
<ide> return result;
<ide>
<ide> const code = icuErrName(result);
<del> // eslint-disable-next-line no-restricted-syntax
<del> const err = new Error(`Unable to transcode Buffer [${code}]`);
<del> err.code = code;
<del> err.errno = result;
<add> const err = genericNodeError(
<add> `Unable to transcode Buffer [${code}]`,
<add> { code: code, errno: result }
<add> );
<ide> throw err;
<ide> };
<ide> }
<ide><path>lib/child_process.js
<ide> const {
<ide> ArrayPrototypeSort,
<ide> ArrayPrototypeSplice,
<ide> ArrayPrototypeUnshift,
<del> Error,
<ide> NumberIsInteger,
<ide> ObjectAssign,
<ide> ObjectDefineProperty,
<ide> const { Pipe, constants: PipeConstants } = internalBinding('pipe_wrap');
<ide> const {
<ide> AbortError,
<ide> codes: errorCodes,
<add> genericNodeError,
<ide> } = require('internal/errors');
<ide> const {
<ide> ERR_INVALID_ARG_VALUE,
<ide> function execFile(file, args = [], options, callback) {
<ide> cmd += ` ${ArrayPrototypeJoin(args, ' ')}`;
<ide>
<ide> if (!ex) {
<del> // eslint-disable-next-line no-restricted-syntax
<del> ex = new Error('Command failed: ' + cmd + '\n' + stderr);
<del> ex.killed = child.killed || killed;
<del> ex.code = code < 0 ? getSystemErrorName(code) : code;
<del> ex.signal = signal;
<add> ex = genericNodeError(`Command failed: ${cmd}\n${stderr}`, {
<add> code: code < 0 ? getSystemErrorName(code) : code,
<add> killed: child.killed || killed,
<add> signal: signal
<add> });
<ide> }
<ide>
<ide> ex.cmd = cmd;
<ide> function checkExecSyncError(ret, args, cmd) {
<ide> let err;
<ide> if (ret.error) {
<ide> err = ret.error;
<add> ObjectAssign(err, ret);
<ide> } else if (ret.status !== 0) {
<ide> let msg = 'Command failed: ';
<ide> msg += cmd || ArrayPrototypeJoin(args, ' ');
<ide> if (ret.stderr && ret.stderr.length > 0)
<ide> msg += `\n${ret.stderr.toString()}`;
<del> // eslint-disable-next-line no-restricted-syntax
<del> err = new Error(msg);
<del> }
<del> if (err) {
<del> ObjectAssign(err, ret);
<add> err = genericNodeError(msg, ret);
<ide> }
<ide> return err;
<ide> }
<ide><path>lib/events.js
<ide> const {
<ide> ERR_OUT_OF_RANGE,
<ide> ERR_UNHANDLED_ERROR
<ide> },
<add> genericNodeError,
<ide> } = require('internal/errors');
<ide>
<ide> const {
<ide> function _addListener(target, type, listener, prepend) {
<ide> if (m > 0 && existing.length > m && !existing.warned) {
<ide> existing.warned = true;
<ide> // No error code for this since it is a Warning
<del> // eslint-disable-next-line no-restricted-syntax
<del> const w = new Error('Possible EventEmitter memory leak detected. ' +
<del> `${existing.length} ${String(type)} listeners ` +
<del> `added to ${inspect(target, { depth: -1 })}. Use ` +
<del> 'emitter.setMaxListeners() to increase limit');
<del> w.name = 'MaxListenersExceededWarning';
<del> w.emitter = target;
<del> w.type = type;
<del> w.count = existing.length;
<add> const w = genericNodeError(
<add> `Possible EventEmitter memory leak detected. ${existing.length} ${String(type)} listeners ` +
<add> `added to ${inspect(target, { depth: -1 })}. Use emitter.setMaxListeners() to increase limit`,
<add> { name: 'MaxListenersExceededWarning', emitter: target, type: type, count: existing.length });
<ide> process.emitWarning(w);
<ide> }
<ide> }
<ide><path>lib/internal/errors.js
<ide> const {
<ide> MathMax,
<ide> Number,
<ide> NumberIsInteger,
<add> ObjectAssign,
<ide> ObjectDefineProperty,
<ide> ObjectDefineProperties,
<ide> ObjectIsExtensible,
<ide> class AbortError extends Error {
<ide> this.name = 'AbortError';
<ide> }
<ide> }
<add>
<add>/**
<add> * This creates a generic Node.js error.
<add> *
<add> * @param {string} message The error message.
<add> * @param {object} errorProperties Object with additional properties to be added to the error.
<add> * @returns {Error}
<add> */
<add>const genericNodeError = hideStackFrames(function genericNodeError(message, errorProperties) {
<add> // eslint-disable-next-line no-restricted-syntax
<add> const err = new Error(message);
<add> ObjectAssign(err, errorProperties);
<add> return err;
<add>});
<add>
<ide> module.exports = {
<add> AbortError,
<ide> aggregateTwoErrors,
<add> captureLargerStackTrace,
<ide> codes,
<add> connResetException,
<ide> dnsException,
<add> // This is exported only to facilitate testing.
<add> E,
<ide> errnoException,
<ide> exceptionWithHostPort,
<add> fatalExceptionStackEnhancers,
<add> genericNodeError,
<ide> getMessage,
<del> hideStackFrames,
<ide> hideInternalStackFrames,
<add> hideStackFrames,
<ide> isErrorStackTraceLimitWritable,
<ide> isStackOverflowError,
<add> kEnhanceStackBeforeInspector,
<add> kIsNodeError,
<add> kNoOverride,
<add> maybeOverridePrepareStackTrace,
<add> overrideStackTrace,
<add> prepareStackTrace,
<ide> setArrowMessage,
<del> connResetException,
<add> SystemError,
<ide> uvErrmapGet,
<ide> uvException,
<ide> uvExceptionWithHostPort,
<del> SystemError,
<del> AbortError,
<del> // This is exported only to facilitate testing.
<del> E,
<del> kNoOverride,
<del> prepareStackTrace,
<del> maybeOverridePrepareStackTrace,
<del> overrideStackTrace,
<del> kEnhanceStackBeforeInspector,
<del> fatalExceptionStackEnhancers,
<del> kIsNodeError,
<del> captureLargerStackTrace,
<ide> };
<ide>
<ide> // To declare an error message, use the E(sym, val, def) function above. The sym
<ide><path>lib/net.js
<ide> const {
<ide> ArrayIsArray,
<ide> ArrayPrototypeIndexOf,
<ide> Boolean,
<del> Error,
<ide> Number,
<ide> NumberIsNaN,
<ide> NumberParseInt,
<ide> const {
<ide> },
<ide> errnoException,
<ide> exceptionWithHostPort,
<del> uvExceptionWithHostPort
<add> genericNodeError,
<add> uvExceptionWithHostPort,
<ide> } = require('internal/errors');
<ide> const { isUint8Array } = require('internal/util/types');
<ide> const {
<ide> function writeAfterFIN(chunk, encoding, cb) {
<ide> encoding = null;
<ide> }
<ide>
<del> // eslint-disable-next-line no-restricted-syntax
<del> const er = new Error('This socket has been ended by the other party');
<del> er.code = 'EPIPE';
<add> const er = genericNodeError(
<add> 'This socket has been ended by the other party',
<add> { code: 'EPIPE' }
<add> );
<ide> if (typeof cb === 'function') {
<ide> defaultTriggerAsyncIdScope(this[async_id_symbol], process.nextTick, cb, er);
<ide> }
<ide><path>lib/zlib.js
<ide> const {
<ide> ArrayPrototypeForEach,
<ide> ArrayPrototypeMap,
<ide> ArrayPrototypePush,
<del> Error,
<ide> FunctionPrototypeBind,
<ide> MathMaxApply,
<ide> NumberIsFinite,
<ide> const {
<ide> ERR_OUT_OF_RANGE,
<ide> ERR_ZLIB_INITIALIZATION_FAILED,
<ide> },
<del> hideStackFrames
<add> genericNodeError,
<add> hideStackFrames,
<ide> } = require('internal/errors');
<ide> const { Transform, finished } = require('stream');
<ide> const {
<ide> function zlibOnError(message, errno, code) {
<ide> // There is no way to cleanly recover.
<ide> // Continuing only obscures problems.
<ide>
<del> // eslint-disable-next-line no-restricted-syntax
<del> const error = new Error(message);
<add> const error = genericNodeError(message, { errno, code });
<del> error.errno = errno;
<del> error.code = code;
<ide> self.destroy(error);
<ide><path>test/sequential/test-child-process-execsync.js
<ide> try {
<ide> assert.ok(caught, 'execSync should throw');
<ide> const end = Date.now() - start;
<ide> assert(end < SLEEP);
<del> assert(err.status > 128 || err.signal);
<add> assert(err.status > 128 || err.signal, `status: ${err.status}, signal: ${err.signal}`);
<ide> }
<ide>
<ide> assert.throws(function() {
| 7
|
Python
|
Python
|
add custom error when evaluation throws a keyerror
|
18dfb279850adb00c3b3efa18bbb6d58c17bc453
|
<ide><path>spacy/errors.py
<ide> class Errors:
<ide> "issue tracker: http://github.com/explosion/spaCy/issues")
<ide>
<ide> # TODO: fix numbering after merging develop into master
<add> E900 = ("Could not run the full 'nlp' pipeline for evaluation. If you specified "
<add> "frozen components, make sure they were already trained and initialized. "
<add> "You can also consider moving them to the 'disabled' list instead.")
<ide> E901 = ("Failed to remove existing output directory: {path}. If your "
<ide> "config and the components you train change between runs, a "
<ide> "non-empty output directory can lead to stale pipeline data. To "
<ide><path>spacy/training/loop.py
<ide> def create_evaluation_callback(
<ide>
<ide> def evaluate() -> Tuple[float, Dict[str, float]]:
<ide> dev_examples = list(dev_corpus(nlp))
<del> scores = nlp.evaluate(dev_examples)
<add> try:
<add> scores = nlp.evaluate(dev_examples)
<add> except KeyError as e:
<add> raise KeyError(Errors.E900) from e
<ide> # Calculate a weighted sum based on score_weights for the main score.
<ide> # We can only consider scores that are ints/floats, not dicts like
<ide> # entity scores per type etc.
| 2
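The fix above wraps a low-level `KeyError` in a friendlier one while keeping the original traceback via `raise ... from e`. A minimal stand-alone sketch of the same pattern (function and message names are illustrative, not spaCy's API):

```python
def lookup_score(scores, key):
    """Fetch a score, converting a bare KeyError into an explanatory one.

    `raise ... from e` chains the original error as __cause__, so the
    underlying failure stays visible in the traceback.
    """
    try:
        return scores[key]
    except KeyError as e:
        raise KeyError(
            f"Could not find score '{key}'. Make sure the component "
            "producing it was trained and initialized."
        ) from e
```

The caller still catches `KeyError`, but the message now points at the likely misconfiguration instead of a bare missing key.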
|
Python
|
Python
|
add support for float hex format to loadtxt
|
4aef6a89fda5015129e124099f3809fa4da894a7
|
<ide><path>numpy/lib/npyio.py
<ide> def _savez(file, args, kwds, compress):
<ide>
<ide> def _getconv(dtype):
<ide> """ Find the correct dtype converter. Adapted from matplotlib """
<add>
<add> def floatconv(x):
<add> x = x.lower()
<add> if b'0x' in x:
<add> return float.fromhex(asstr(x))
<add> return float(x)
<add>
<ide> typ = dtype.type
<ide> if issubclass(typ, np.bool_):
<ide> return lambda x: bool(int(x))
<ide> def _getconv(dtype):
<ide> if issubclass(typ, np.integer):
<ide> return lambda x: int(float(x))
<ide> elif issubclass(typ, np.floating):
<del> return float
<add> return floatconv
<ide> elif issubclass(typ, np.complex):
<ide> return complex
<ide> elif issubclass(typ, np.bytes_):
<ide> def loadtxt(fname, dtype=float, comments='#', delimiter=None,
<ide> `genfromtxt` function provides more sophisticated handling of, e.g.,
<ide> lines with missing values.
<ide>
<add> .. versionadded:: 1.10.0
<add> The strings produced by the Python float.hex method can be used as
<add> input for floats.
<add>
<ide> Examples
<ide> --------
<ide> >>> from StringIO import StringIO # StringIO behaves like a file object
| 1
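The converter above builds on Python's built-in `float.fromhex`. A corrected, string-based version of the helper (the diff operates on bytes and uses numpy's `asstr`; this sketch assumes plain `str` input):

```python
def floatconv(x):
    """Parse a decimal or C99-style hex float string such as '0x1.8p1'."""
    x = x.lower()  # normalize so the '0x' check also catches '0X'
    if '0x' in x:
        return float.fromhex(x)
    return float(x)

# '0x1.8p1' means 1.5 * 2**1
assert floatconv('0x1.8p1') == 3.0
assert floatconv('2.5') == 2.5
```

Note that the original commit's `x.lower()` discards its result; assigning it back (as here) is required for the `'0x'` check to see uppercase prefixes.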
|
Java
|
Java
|
fix failing test
|
214064824680bb60eca256c99f09f48996a7c235
|
<ide><path>spring-webmvc/src/test/java/org/springframework/web/servlet/view/groovy/GroovyMarkupConfigurerTests.java
<ide> import groovy.text.TemplateEngine;
<ide> import groovy.text.markup.MarkupTemplateEngine;
<ide> import groovy.text.markup.TemplateConfiguration;
<del>import groovy.text.markup.TemplateResolver;
<ide> import org.hamcrest.Matchers;
<ide> import org.junit.Assert;
<ide> import org.junit.Before;
<ide> public class GroovyMarkupConfigurerTests {
<ide>
<ide> private GroovyMarkupConfigurer configurer;
<ide>
<del> private TemplateResolver resolver;
<ide>
<ide> @Before
<ide> public void setup() throws Exception {
<ide> this.applicationContext = new StaticApplicationContext();
<ide> this.configurer = new GroovyMarkupConfigurer();
<ide> this.configurer.setResourceLoaderPath(RESOURCE_LOADER_PATH);
<del> this.resolver = this.configurer.createTemplateResolver();
<del> this.resolver.configure(this.getClass().getClassLoader(), null);
<ide> }
<ide>
<ide> @Test
<ide> public TestTemplateEngine() {
<ide>
<ide> @Test
<ide> public void resolveSampleTemplate() throws Exception {
<del> URL url = this.resolver.resolveTemplate(TEMPLATE_PREFIX + "test.tpl");
<add> URL url = this.configurer.resolveTemplate(getClass().getClassLoader(), TEMPLATE_PREFIX + "test.tpl");
<ide> Assert.assertNotNull(url);
<ide> }
<ide>
<ide> @Test
<ide> public void resolveI18nFullLocale() throws Exception {
<ide> LocaleContextHolder.setLocale(Locale.GERMANY);
<del> URL url = this.resolver.resolveTemplate(TEMPLATE_PREFIX + "i18n.tpl");
<add> URL url = this.configurer.resolveTemplate(getClass().getClassLoader(), TEMPLATE_PREFIX + "i18n.tpl");
<ide> Assert.assertNotNull(url);
<ide> Assert.assertThat(url.getPath(), Matchers.containsString("i18n_de_DE.tpl"));
<ide> }
<ide>
<ide> @Test
<ide> public void resolveI18nPartialLocale() throws Exception {
<ide> LocaleContextHolder.setLocale(Locale.FRANCE);
<del> URL url = this.resolver.resolveTemplate(TEMPLATE_PREFIX + "i18n.tpl");
<add> URL url = this.configurer.resolveTemplate(getClass().getClassLoader(), TEMPLATE_PREFIX + "i18n.tpl");
<ide> Assert.assertNotNull(url);
<ide> Assert.assertThat(url.getPath(), Matchers.containsString("i18n_fr.tpl"));
<ide> }
<ide>
<ide> @Test
<ide> public void resolveI18nDefaultLocale() throws Exception {
<ide> LocaleContextHolder.setLocale(Locale.US);
<del> URL url = this.resolver.resolveTemplate(TEMPLATE_PREFIX + "i18n.tpl");
<add> URL url = this.configurer.resolveTemplate(getClass().getClassLoader(), TEMPLATE_PREFIX + "i18n.tpl");
<ide> Assert.assertNotNull(url);
<ide> Assert.assertThat(url.getPath(), Matchers.containsString("i18n.tpl"));
<ide> }
<ide>
<ide> @Test(expected = IOException.class)
<ide> public void failMissingTemplate() throws Exception {
<ide> LocaleContextHolder.setLocale(Locale.US);
<del> this.resolver.resolveTemplate(TEMPLATE_PREFIX + "missing.tpl");
<add> this.configurer.resolveTemplate(getClass().getClassLoader(), TEMPLATE_PREFIX + "missing.tpl");
<ide> Assert.fail();
<ide> }
<ide> }
| 1
|
Javascript
|
Javascript
|
fix linting issues
|
b1490e951cf642294319db17620638772aa0002f
|
<ide><path>src/config.js
<ide> const {
<ide> const Color = require('./color')
<ide> const ScopedPropertyStore = require('scoped-property-store')
<ide> const ScopeDescriptor = require('./scope-descriptor')
<del>const crypto = require('crypto')
<ide>
<ide> // Essential: Used to access all of Atom's configuration details.
<ide> //
<ide><path>src/project.js
<ide> class Project extends Model {
<ide> }
<ide>
<ide> // Layers the contents of an atomProject file's config
<del> // on top of the current global config.
<add> // on top of the current global config.
<ide> replaceAtomProject (newSettings) {
<ide> atom.config.resetProjectSettings(newSettings.config)
<ide> this.projectFilePath = newSettings.originPath
| 2
|
Text
|
Text
|
update changelog for 1.7 and 1.8.0-beta.1
|
b3887a0d0ed5aeb31fdb0f7ce1c5c7e8cea99b16
|
<ide><path>CHANGELOG.md
<ide> # Ember Changelog
<ide>
<add>### Ember 1.8.0-beta.1 (August 20, 2014)
<add>
<add>* Remove `metamorph` in favor of `morph` package (removes the need for `<script>` tags in the DOM).
<add>* [FEATURE] ember-routing-linkto-target-attribute
<add>* [FEATURE] ember-routing-multi-current-when
<add>* [FEATURE] ember-routing-auto-location-uses-replace-state-for-history
<add>* [FEATURE] ember-metal-is-present
<add>* [FEATURE] property-brace-expansion-improvement
<ide> * Deprecate usage of Internet Explorer 6 & 7.
<add>* Deprecate global access to view classes from template (see the [deprecation guide](http://emberjs.com/guides/deprecations/)).
<add>* Deprecate `Ember.Set` (note: this is NOT the `Ember.set`).
<add>* Deprecate `Ember.computed.defaultTo`.
<add>* Remove long deprecated `Ember.StateManager` warnings.
<add>* Use intelligent caching for `Ember.String` (`camelize`, `dasherize`, etc.).
<add>* Use intelligent caching for container normalization.
<add>* Polyfill `Object.create` (use for new caching techniques).
<add>* Refactor internals to make debugging easier (use a single assignment per `var` statement).
<ide> * [BREAKING] Remove deprecated controller action lookup. Support for pre-1.0.0 applications with actions in the root
<ide> of the controller (instead of inside the `actions` hash) has been removed.
<ide>
<del>### Ember 1.7.0-beta.2 (July, 16, 2014)
<del>
<add>### Ember 1.7.0 (August 19, 2014)
<add>
<add>* Update `Ember.computed.notEmpty` to properly respect arrays.
<add>* Bind `tabindex` property on LinkView.
<add>* Update to RSVP 3.0.13 (fixes an error with `RSVP.hash` in IE8 amongst other changes).
<add>* Fix incorrect quoteless action deprecation warnings.
<add>* Prevent duplicate message getting printed by errors in Route hooks.
<add>* Deprecate observing container views like arrays.
<add>* Add `catch` and `finally` to Transition.
<add>* [BUGFIX] paramsFor: don’t clobber falsy params.
<add>* [BUGFIX] Controllers with query params are unit testable.
<add>* [BUGFIX] Controllers have new QP values before setupController.
<add>* [BUGFIX] Fix initial render of {{input type=bound}} for checkboxes.
<add>* [BUGFIX] makeBoundHelper supports unquoted bound property options.
<add>* [BUGFIX] link-to helper can be inserted in DOM when the router is not present.
<add>* [PERFORMANCE] Do not pass `arguments` around in a hot-path.
<add>* Remove Container.defaultContainer.
<add>* Polyfill contains for older browsers.
<add>* [BUGFIX] Ensure that `triggerEvent` handles all argument signatures properly.
<add>* [BUGFIX] Stub meta on AliasedProperty (fixes regression from beta.2 with Ember Data).
<add>* [DOC] Fixed issue with docs showing 'Ember.run' as 'run.run'.
<add>* [BUGFIX] SimpleHandlebarsView should not re-render if normalized value is unchanged.
<add>* Allow Router DSL to nest routes via `this.route`.
<add>* [BUGFIX] Don't pass function UNDEFINED as oldValue to computed properties.
<add>* [BUGFIX] dramatically improve performance of eachComputedProperty.
<add>* [BUGFIX] Prevent strict mode errors from superWrapper.
<add>* Deprecate Ember.DeferredMixin and Ember.Deferred.
<add>* Deprecate `.then` on Ember.Application.
<add>* Revert ember-routing-consistent-resources.
<ide> * [BUGFIX] Wrap es3 keywords in quotes.
<ide> * [BUGFIX] Use injected integration test helpers instead of local functions.
<ide> * [BUGFIX] Add alias descriptor, and replace `Ember.computed.alias` with new descriptor.
<ide> * [BUGFIX] Use view:toplevel for {{view}} instead of view:default.
<ide> * [BUGFIX] Do not throw uncaught errors mid-transition.
<ide> * [BUGFIX] Don't assume that the router has a container.
<del>
<del>### Ember 1.7.0-beta.1 (July, 8, 2014)
<del>
<ide> * Fix components inside group helper.
<ide> * [BUGFIX] Fix wrong view keyword in a component block.
<ide> * Update to RSVP 3.0.7.
| 1
|
Text
|
Text
|
fix internal link
|
c405a2349140626d924818ff764f232ef94eaadb
|
<ide><path>share/doc/homebrew/Installation.md
<ide> The suggested and easiest way to install Homebrew is on the
<ide> [homepage](http://brew.sh).
<ide>
<ide> The standard script installs Homebrew to `/usr/local` so that
<del>[you don’t need sudo](FAQ.md#wiki-sudo) when you `brew install`. It is a
<add>[you don’t need sudo](FAQ.md#why-does-homebrew-say-sudo-is-bad-) when you `brew install`. It is a
<ide> careful script, it can be run even if you have stuff installed to
<ide> `/usr/local` already. It tells you exactly what it will do before it
<ide> does it too. And you have to confirm everything it will do before it
| 1
|
Python
|
Python
|
add deprecated helper
|
39e0586192bc2c514f9a29363d7085848e0f794d
|
<ide><path>spacy/util.py
<ide> import textwrap
<ide> import random
<ide> from collections import OrderedDict
<add>import inspect
<add>import warnings
<ide> from thinc.neural._classes.model import Model
<ide> import functools
<ide>
<ide> def from_disk(path, readers, exclude):
<ide> return path
<ide>
<ide>
<add>def deprecated(message, filter='always'):
<add> """Show a deprecation warning.
<add>
<add> message (unicode): The message to display.
<add> filter (unicode): Filter value.
<add> """
<add> stack = inspect.stack()[-1]
<add> with warnings.catch_warnings():
<add> warnings.simplefilter(filter, DeprecationWarning)
<add> warnings.warn_explicit(message, DeprecationWarning, stack[1], stack[2])
<add>
<add>
<ide> def print_table(data, title=None):
<ide> """Print data in table format.
<ide>
| 1
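A stand-alone sketch of the warnings-based approach above, simplified by dropping the `inspect.stack` caller attribution (illustrative, not spaCy's exact helper):

```python
import warnings

def deprecated(message):
    """Emit a DeprecationWarning that is shown even under default filters."""
    with warnings.catch_warnings():
        # DeprecationWarning is ignored by default outside __main__;
        # temporarily force it to be shown.
        warnings.simplefilter("always", DeprecationWarning)
        warnings.warn(message, DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated("old_api() is deprecated; use new_api() instead")

assert len(caught) == 1
assert issubclass(caught[0].category, DeprecationWarning)
```

The commit's version additionally uses `warnings.warn_explicit` with the caller's filename and line number so the warning points at user code rather than the helper.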
|
Go
|
Go
|
use lowercase for error
|
afd9a6c2b2aa905ea9b93a9413f757a887646ab9
|
<ide><path>builder/dockerfile/internals_test.go
<ide> func TestDockerfileOutsideTheBuildContext(t *testing.T) {
<ide> contextDir, cleanup := createTestTempDir(t, "", "builder-dockerfile-test")
<ide> defer cleanup()
<ide>
<del> expectedError := "Forbidden path outside the build context: ../../Dockerfile ()"
<add> expectedError := "path outside the build context: ../../Dockerfile ()"
<ide> if runtime.GOOS == "windows" {
<ide> expectedError = "failed to resolve scoped path ../../Dockerfile ()"
<ide> }
<ide><path>builder/remotecontext/detect.go
<ide> func FullPath(remote builder.Source, path string) (string, error) {
<ide> if runtime.GOOS == "windows" {
<ide> return "", fmt.Errorf("failed to resolve scoped path %s (%s): %s. Possible cause is a forbidden path outside the build context", path, fullPath, err)
<ide> }
<del> return "", fmt.Errorf("Forbidden path outside the build context: %s (%s)", path, fullPath) // backwards compat with old error
<add> return "", fmt.Errorf("forbidden path outside the build context: %s (%s)", path, fullPath) // backwards compat with old error
<ide> }
<ide> return fullPath, nil
<ide> }
| 2
|
Python
|
Python
|
add date and numpy version to testnpy_char
|
7dbdbfaaa19c84ffcfa2987a05ca603483dfa54f
|
<ide><path>numpy/core/tests/test_deprecations.py
<ide> def test_int_dtypes(self):
<ide>
<ide>
<ide> class TestNPY_CHAR(_DeprecationTestCase):
<add> # 2017-05-03, 1.13.0
<ide> def test_npy_char_deprecation(self):
<ide> from numpy.core.multiarray_tests import npy_char_deprecation
<ide> self.assert_deprecated(npy_char_deprecation)
| 1
|
Ruby
|
Ruby
|
convert cat test to spec
|
2ade29a5cf5df84ed9fb7dcf429c59c3e084a6a1
|
<add><path>Library/Homebrew/cask/spec/cask/cli/cat_spec.rb
<del><path>Library/Homebrew/cask/test/cask/cli/cat_test.rb
<del>require "test_helper"
<add>require "spec_helper"
<ide>
<ide> describe Hbc::CLI::Cat do
<ide> describe "given a basic Cask" do
<del> before do
<del> @expected_output = <<-EOS.undent
<add> let(:expected_output) {
<add> <<-EOS.undent
<ide> cask 'basic-cask' do
<ide> version '1.2.3'
<ide> sha256 '8c62a2b791cf5f0da6066a0a4b6e85f62949cd60975da062df44adf887f4370b'
<ide> app 'TestCask.app'
<ide> end
<ide> EOS
<del> end
<add> }
<ide>
<ide> it "displays the Cask file content about the specified Cask" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run("basic-cask")
<del> }.must_output(@expected_output)
<add> }.to output(expected_output).to_stdout
<ide> end
<ide>
<ide> it "throws away additional Cask arguments and uses the first" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run("basic-cask", "local-caffeine")
<del> }.must_output(@expected_output)
<add> }.to output(expected_output).to_stdout
<ide> end
<ide>
<ide> it "throws away stray options" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run("--notavalidoption", "basic-cask")
<del> }.must_output(@expected_output)
<add> }.to output(expected_output).to_stdout
<ide> end
<ide> end
<ide>
<ide> it "raises an exception when the Cask does not exist" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run("notacask")
<del> }.must_raise Hbc::CaskUnavailableError
<add> }.to raise_error(Hbc::CaskUnavailableError)
<ide> end
<ide>
<ide> describe "when no Cask is specified" do
<ide> it "raises an exception" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run
<del> }.must_raise Hbc::CaskUnspecifiedError
<add> }.to raise_error(Hbc::CaskUnspecifiedError)
<ide> end
<ide> end
<ide>
<ide> describe "when no Cask is specified, but an invalid option" do
<ide> it "raises an exception" do
<del> lambda {
<add> expect {
<ide> Hbc::CLI::Cat.run("--notavalidoption")
<del> }.must_raise Hbc::CaskUnspecifiedError
<add> }.to raise_error(Hbc::CaskUnspecifiedError)
<ide> end
<ide> end
<ide> end
| 1
|
Python
|
Python
|
correct an issue with except
|
197c98077f892f79458a9f41bab81c8dd8e29f2e
|
<ide><path>glances/outdated.py
<ide> try:
<ide> from packaging.version import Version
<ide> PACKAGING_IMPORT = True
<del>except ModuleNotFoundError as e:
<add>except Exception as e:
<ide> logger.error("Unable to import 'packaging' module ({}). Glances cannot check for updates.".format(e))
<ide> PACKAGING_IMPORT = False
<ide>
<del>
<ide> PYPI_API_URL = 'https://pypi.python.org/pypi/Glances/json'
<ide>
<ide>
| 1
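The change above is about interpreter compatibility: `ModuleNotFoundError` only exists on Python >= 3.6, so on older interpreters the name in the `except` clause itself raises `NameError`. Because it subclasses `ImportError`, catching `ImportError` (or, as the commit does, `Exception`) is the portable choice. On Python 3 the relationship can be checked directly (the module name below is a placeholder assumed to be absent):

```python
# ModuleNotFoundError (3.6+) is a subclass of ImportError, so an
# `except ImportError` clause catches missing-module failures too.
assert issubclass(ModuleNotFoundError, ImportError)

try:
    import _no_such_module_for_this_sketch_  # assumed absent
except ImportError as e:
    caught = e

assert isinstance(caught, ModuleNotFoundError)
```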
|
Javascript
|
Javascript
|
fix mmdanimationhelper lint errors
|
6f86fda4d312996e3f4281ceccc02f0419e55c0d
|
<ide><path>examples/js/animation/MMDAnimationHelper.js
<ide> THREE.MMDAnimationHelper = ( function () {
<ide> mesh.updateMatrixWorld( true );
<ide>
<ide> // PMX animation system special path
<del> if ( this.configuration.pmxAnimation &&
<add> if ( this.configuration.pmxAnimation &&
<ide> mesh.geometry.userData.MMD && mesh.geometry.userData.MMD.format === 'pmx' ) {
<ide>
<ide> var sortedBonesData = this._sortBoneDataArray( mesh.geometry.userData.MMD.bones.slice() );
<ide> THREE.MMDAnimationHelper = ( function () {
<ide> grantResultMap.set( boneIndex, quaternion.copy( bone.quaternion ) );
<ide>
<ide> // @TODO: Support global grant and grant position
<del> if ( grantSolver && boneData.grant &&
<add> if ( grantSolver && boneData.grant &&
<ide> ! boneData.grant.isLocal && boneData.grant.affectRotation ) {
<ide>
<ide> var parentIndex = boneData.grant.parentIndex;
<ide> THREE.MMDAnimationHelper = ( function () {
<ide> if ( ikSolver && boneData.ik ) {
<ide>
<ide> // @TODO: Updating world matrices every time solving an IK bone is
<del> // costly. Optimize if possible.
<add> // costly. Optimize if possible.
<ide> mesh.updateMatrixWorld( true );
<ide> ikSolver.updateOne( boneData.ik );
<ide>
<ide> THREE.MMDAnimationHelper = ( function () {
<ide>
<ide> addGrantRotation: function () {
<ide>
<del> var quaternion = new Quaternion();
<add> var quaternion = new THREE.Quaternion();
<ide>
<ide> return function ( bone, q, ratio ) {
<ide>
<ide><path>examples/jsm/animation/MMDAnimationHelper.js
<ide> var MMDAnimationHelper = ( function () {
<ide> mesh.updateMatrixWorld( true );
<ide>
<ide> // PMX animation system special path
<del> if ( this.configuration.pmxAnimation &&
<add> if ( this.configuration.pmxAnimation &&
<ide> mesh.geometry.userData.MMD && mesh.geometry.userData.MMD.format === 'pmx' ) {
<ide>
<ide> var sortedBonesData = this._sortBoneDataArray( mesh.geometry.userData.MMD.bones.slice() );
<ide> var MMDAnimationHelper = ( function () {
<ide> grantResultMap.set( boneIndex, quaternion.copy( bone.quaternion ) );
<ide>
<ide> // @TODO: Support global grant and grant position
<del> if ( grantSolver && boneData.grant &&
<add> if ( grantSolver && boneData.grant &&
<ide> ! boneData.grant.isLocal && boneData.grant.affectRotation ) {
<ide>
<ide> var parentIndex = boneData.grant.parentIndex;
<ide> var MMDAnimationHelper = ( function () {
<ide> if ( ikSolver && boneData.ik ) {
<ide>
<ide> // @TODO: Updating world matrices every time solving an IK bone is
<del> // costly. Optimize if possible.
<add> // costly. Optimize if possible.
<ide> mesh.updateMatrixWorld( true );
<ide> ikSolver.updateOne( boneData.ik );
<ide>
| 2
|
Python
|
Python
|
implement extra controls for slas
|
f2790f6c801ba8d40450463ab0a7030fe4d6f7e3
|
<ide><path>airflow/www/views.py
<ide> class SlaMissModelView(AirflowModelView):
<ide> "map_index": wwwutils.format_map_index,
<ide> }
<ide>
<add> @action('muldelete', 'Delete', "Are you sure you want to delete selected records?", single=False)
<add> def action_muldelete(self, items):
<add> """Multiple delete action."""
<add> self.datamodel.delete_all(items)
<add> self.update_redirect()
<add> return redirect(self.get_redirect())
<add>
<add> @action(
<add> "mulnotificationsent",
<add> "Set notification sent to true",
<add> "Are you sure you want to set all these notifications to sent?",
<add> single=False,
<add> )
<add> def action_mulnotificationsent(self, items: list[SlaMiss]):
<add> return self._set_notification_property(items, "notification_sent", True)
<add>
<add> @action(
<add> "mulnotificationsentfalse",
<add> "Set notification sent to false",
<add> "Are you sure you want to mark these SLA alerts as notification not sent yet?",
<add> single=False,
<add> )
<add> def action_mulnotificationsentfalse(self, items: list[SlaMiss]):
<add> return self._set_notification_property(items, "notification_sent", False)
<add>
<add> @action(
<add> "mulemailsent",
<add> "Set email sent to true",
<add> "Are you sure you want to mark these SLA alerts as emails were sent?",
<add> single=False,
<add> )
<add> def action_mulemailsent(self, items: list[SlaMiss]):
<add> return self._set_notification_property(items, "email_sent", True)
<add>
<add> @action(
<add> "mulemailsentfalse",
<add> "Set email sent to false",
<add> "Are you sure you want to mark these SLA alerts as emails not sent yet?",
<add> single=False,
<add> )
<add> def action_mulemailsentfalse(self, items: list[SlaMiss]):
<add> return self._set_notification_property(items, "email_sent", False)
<add>
<add> @provide_session
<add> def _set_notification_property(self, items: list[SlaMiss], attr: str, new_value: bool, session=None):
<add> try:
<add> count = 0
<add> for sla in items:
<add> count += 1
<add> setattr(sla, attr, new_value)
<add> session.merge(sla)
<add> session.commit()
<add> flash(f"{count} SLAMisses had {attr} set to {new_value}.")
<add> except Exception as ex:
<add> flash(str(ex), 'error')
<add> flash('Failed to set state', 'error')
<add> self.update_redirect()
<add> return redirect(self.get_default_url())
<add>
<ide>
<ide> class XComModelView(AirflowModelView):
<ide> """View to show records from XCom table"""
| 1
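Stripped of the Flask-AppBuilder and SQLAlchemy plumbing, `_set_notification_property` above is a generic bulk-attribute-update loop. A plain-Python sketch of the core pattern (the class and names are illustrative stand-ins, not Airflow's models):

```python
class SlaMissRecord:
    """Stand-in for an ORM row with two boolean flags."""
    def __init__(self):
        self.notification_sent = False
        self.email_sent = False

def set_notification_property(items, attr, new_value):
    """Set `attr` to `new_value` on every item; return the update count."""
    count = 0
    for item in items:
        setattr(item, attr, new_value)
        count += 1
    return count

misses = [SlaMissRecord() for _ in range(3)]
assert set_notification_property(misses, "notification_sent", True) == 3
assert all(m.notification_sent for m in misses)
```

In the real view each updated row is also `session.merge`d and the session committed, which is what persists the flag change.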
|
Java
|
Java
|
catch iae when parsing content type
|
c611083415845bcb9758c0f92c4749a712b049f0
|
<ide><path>spring-webmvc/src/main/java/org/springframework/web/servlet/mvc/method/RequestMappingInfoHandlerMapping.java
<ide> else if (patternAndMethodMatches.isEmpty() && !allowedMethods.isEmpty()) {
<ide> if (!consumableMediaTypes.isEmpty()) {
<ide> MediaType contentType = null;
<ide> if (StringUtils.hasLength(request.getContentType())) {
<del> contentType = MediaType.parseMediaType(request.getContentType());
<add> try {
<add> contentType = MediaType.parseMediaType(request.getContentType());
<add> }
<add> catch (IllegalArgumentException ex) {
<add> throw new HttpMediaTypeNotSupportedException(ex.getMessage());
<add> }
<ide> }
<ide> throw new HttpMediaTypeNotSupportedException(contentType, new ArrayList<MediaType>(consumableMediaTypes));
<ide> }
<ide><path>spring-webmvc/src/test/java/org/springframework/web/servlet/mvc/method/RequestMappingInfoHandlerMappingTests.java
<ide> private void testMediaTypeNotSupported(String url) throws Exception {
<ide> }
<ide> }
<ide>
<add> @Test
<add> public void testMediaTypeNotValue() throws Exception {
<add> try {
<add> MockHttpServletRequest request = new MockHttpServletRequest("PUT", "/person/1");
<add> request.setContentType("bogus");
<add> this.handlerMapping.getHandler(request);
<add> fail("HttpMediaTypeNotSupportedException expected");
<add> }
<add> catch (HttpMediaTypeNotSupportedException ex) {
<add> assertEquals("Invalid media type \"bogus\": does not contain '/'", ex.getMessage());
<add> }
<add> }
<add>
<ide> @Test
<ide> public void mediaTypeNotAccepted() throws Exception {
<ide> testMediaTypeNotAccepted("/persons");
| 2
|
Python
|
Python
|
fix typo in hubconf
|
19ef2b0a660e97b109e82c51ace5c0cef749c401
|
<ide><path>hubconf.py
<ide> bertForTokenClassification
<ide> )
<ide> from hubconfs.gpt_hubconf import (
<del> OpenAIGPTTokenizer,
<del> OpenAIGPTModel,
<del> OpenAIGPTLMHeadModel,
<del> OpenAIGPTDoubleHeadsModel
<add> openAIGPTTokenizer,
<add> openAIGPTModel,
<add> openAIGPTLMHeadModel,
<add> openAIGPTDoubleHeadsModel
<ide> )
<ide>\ No newline at end of file
| 1
|
Python
|
Python
|
skip `test_lookfor` in 3.10rc1
|
67b0df4b38700acceb0197d704e0eb37b3fbd837
|
<ide><path>numpy/lib/tests/test_utils.py
<ide>
<ide>
<ide> @pytest.mark.skipif(sys.flags.optimize == 2, reason="Python running -OO")
<add>@pytest.mark.skipif(
<add> sys.version_info == (3, 10, 0, "candidate", 1),
<add> reason="Broken as of bpo-44524",
<add>)
<ide> def test_lookfor():
<ide> out = StringIO()
<ide> utils.lookfor('eigenvalue', module='numpy', output=out,
<ide> class NoPublicMethods:
<ide> class WithPublicMethods:
<ide> def first_method():
<ide> pass
<del>
<add>
<ide> def _has_method_heading(cls):
<ide> out = StringIO()
<ide> utils.info(cls, output=out)
| 1
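The `skipif` condition above relies on `sys.version_info` being a five-field named tuple (major, minor, micro, releaselevel, serial) that compares element-wise against plain tuples:

```python
import sys

# Exact-release match, as the skip condition uses.
is_310rc1 = sys.version_info == (3, 10, 0, "candidate", 1)
assert isinstance(is_310rc1, bool)

# Prefix comparison: a shorter tuple gates on a minimum version.
assert sys.version_info >= (3,)
```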
|
Javascript
|
Javascript
|
add missing newwindow parameter
|
372652915eed851af5bc27bf030fad1029e5dca7
|
<ide><path>src/main-process/atom-application.js
<ide> class AtomApplication extends EventEmitter {
<ide> openPath ({
<ide> pathToOpen,
<ide> pidToKillWhenClosed,
<add> newWindow,
<ide> devMode,
<ide> safeMode,
<ide> profileStartup,
| 1
|
Text
|
Text
|
add clarity to -p option
|
37c6c53b56c7a26bcce81bc12b83fadd4da8709a
|
<ide><path>man/docker-run.1.md
<ide> ports and the exposed ports, use `docker port`.
<ide>
<ide> **-p**, **--publish**=[]
<ide> Publish a container's port, or range of ports, to the host.
<del> format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
<del> Both hostPort and containerPort can be specified as a range of ports.
<del> When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range. (e.g., `-p 1234-1236:1234-1236/tcp`)
<del> (use 'docker port' to see the actual mapping)
<add>
<add> Format: `ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort`
<add>Both hostPort and containerPort can be specified as a range of ports.
<add>When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range.
<add>(e.g., `docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox`
<add>but not `docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox`)
<add>With ip: `docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage`
<add>Use `docker port` to see the actual mapping: `docker port CONTAINER $CONTAINERPORT`
<ide>
<ide> **--pid**=host
<ide> Set the PID mode for the container
| 1
|
Text
|
Text
|
create renderer readme
|
3956ee163b83e49c2e010b7fec5e188cca209c1a
|
<ide><path>Libraries/Renderer/README.md
<add># WARNING
<add>
<add>### The code in the `oss` folder is sync'ed from the React repo. Please submit a pull request on https://github.com/facebook/react/tree/master/packages/react-native-renderer if you want to make changes.
| 1
|
Go
|
Go
|
add test for copying entire container rootfs
|
6db9f1c3d6e9ad634554cacaf197a435efcf8833
|
<ide><path>integration/container/copy_test.go
<ide> package container // import "github.com/docker/docker/integration/container"
<ide>
<ide> import (
<add> "archive/tar"
<ide> "context"
<add> "encoding/json"
<ide> "fmt"
<add> "io"
<add> "io/ioutil"
<add> "os"
<ide> "testing"
<ide>
<ide> "github.com/docker/docker/api/types"
<ide> "github.com/docker/docker/client"
<ide> "github.com/docker/docker/integration/internal/container"
<add> "github.com/docker/docker/internal/test/fakecontext"
<add> "github.com/docker/docker/pkg/jsonmessage"
<ide> "gotest.tools/assert"
<ide> is "gotest.tools/assert/cmp"
<ide> "gotest.tools/skip"
<ide> func TestCopyToContainerPathIsNotDir(t *testing.T) {
<ide> err := apiclient.CopyToContainer(ctx, cid, "/etc/passwd/", nil, types.CopyToContainerOptions{})
<ide> assert.Assert(t, is.ErrorContains(err, "not a directory"))
<ide> }
<add>
<add>func TestCopyFromContainerRoot(t *testing.T) {
<add> skip.If(t, testEnv.DaemonInfo.OSType == "windows")
<add> defer setupTest(t)()
<add>
<add> ctx := context.Background()
<add> apiClient := testEnv.APIClient()
<add>
<add> dir, err := ioutil.TempDir("", t.Name())
<add> assert.NilError(t, err)
<add> defer os.RemoveAll(dir)
<add>
<add> buildCtx := fakecontext.New(t, dir, fakecontext.WithFile("foo", "hello"), fakecontext.WithFile("baz", "world"), fakecontext.WithDockerfile(`
<add> FROM scratch
<add> COPY foo /foo
<add> COPY baz /bar/baz
<add> CMD /fake
<add> `))
<add> defer buildCtx.Close()
<add>
<add> resp, err := apiClient.ImageBuild(ctx, buildCtx.AsTarReader(t), types.ImageBuildOptions{})
<add> assert.NilError(t, err)
<add> defer resp.Body.Close()
<add>
<add> var imageID string
<add> err = jsonmessage.DisplayJSONMessagesStream(resp.Body, ioutil.Discard, 0, false, func(msg jsonmessage.JSONMessage) {
<add> var r types.BuildResult
<add> assert.NilError(t, json.Unmarshal(*msg.Aux, &r))
<add> imageID = r.ID
<add> })
<add> assert.NilError(t, err)
<add> assert.Assert(t, imageID != "")
<add>
<add> cid := container.Create(ctx, t, apiClient, container.WithImage(imageID))
<add>
<add> rdr, _, err := apiClient.CopyFromContainer(ctx, cid, "/")
<add> assert.NilError(t, err)
<add> defer rdr.Close()
<add>
<add> tr := tar.NewReader(rdr)
<add> expect := map[string]string{
<add> "/foo": "hello",
<add> "/bar/baz": "world",
<add> }
<add> found := make(map[string]bool, 2)
<add> var numFound int
<add> for {
<add> h, err := tr.Next()
<add> if err == io.EOF {
<add> break
<add> }
<add> assert.NilError(t, err)
<add>
<add> expected, exists := expect[h.Name]
<add> if !exists {
<add> // this archive will have extra stuff in it since we are copying from root
<add> // and docker adds a bunch of stuff
<add> continue
<add> }
<add>
<add> numFound++
<add> found[h.Name] = true
<add>
<add> buf, err := ioutil.ReadAll(tr)
<add> assert.NilError(t, err)
<add> assert.Check(t, is.Equal(string(buf), expected))
<add>
<add> if numFound == len(expect) {
<add> break
<add> }
<add> }
<add>
<add> assert.Check(t, found["/foo"], "/foo file not found in archive")
<add> assert.Check(t, found["/bar/baz"], "/bar/baz file not found in archive")
<add>}
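The verification pattern in the Go test above — iterate the tar entries from the copy, match them against an expected map, skip unrelated entries, and fail if anything expected is missing — can be sketched in Python with the standard `tarfile` module. The file names and contents below are illustrative, not taken from the Docker test fixture:

```python
import io
import tarfile

def verify_archive(data, expect):
    """Iterate a tar stream and assert every expected entry is present
    with the expected contents; unrelated entries are ignored, mirroring
    the 'extra stuff from root' skip in the Go test."""
    found = set()
    with tarfile.open(fileobj=io.BytesIO(data)) as tf:
        for member in tf:
            if member.name not in expect:
                continue  # archives copied from a container root carry extra files
            body = tf.extractfile(member).read()
            assert body == expect[member.name], member.name
            found.add(member.name)
    missing = set(expect) - found
    assert not missing, f"missing entries: {missing}"

# Build a small in-memory archive to exercise the checker.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    for name, body in {"foo": b"hello", "bar/baz": b"world"}.items():
        info = tarfile.TarInfo(name)
        info.size = len(body)
        tf.addfile(info, io.BytesIO(body))

verify_archive(buf.getvalue(), {"foo": b"hello", "bar/baz": b"world"})
```

The skip-unknown-entries design matters here for the same reason as in the Go test: when copying from `/`, the archive contains files the test did not create, so strict equality over all entries would be brittle.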
| 1
|
Java
|
Java
|
fix typo in comment
|
3b6707a41c88993ef3913987501dc79319fb171a
|
<ide><path>src/main/java/rx/internal/operators/OperatorMulticast.java
<ide>
<ide> /** Guarded by guard. */
<ide> private Subscriber<T> subscription;
<del> // wraps subscription above with for unsubscription using guard
<add> // wraps subscription above for unsubscription using guard
<ide> private Subscription guardedSubscription;
<ide>
<ide> public OperatorMulticast(Observable<? extends T> source, final Func0<? extends Subject<? super T, ? extends R>> subjectFactory) {
| 1
|
PHP
|
PHP
|
remove test that isnt written properly
|
04091833c1d22406828862ef9d58519882342b61
|
<ide><path>src/Illuminate/Queue/Queue.php
<ide> abstract class Queue
<ide> */
<ide> protected $container;
<ide>
<del> /**
<del> * The encrypter implementation.
<del> *
<del> * @var \Illuminate\Contracts\Encryption\Encrypter
<del> */
<del> protected $encrypter;
<del>
<ide> /**
<ide> * The connection name for the queue.
<ide> *
<ide><path>tests/Support/SupportTestingStorageFakeTest.php
<del><?php
<del>
<del>namespace Illuminate\Tests\Support;
<del>
<del>use PHPUnit\Framework\TestCase;
<del>use Illuminate\Foundation\Application;
<del>use Illuminate\Support\Facades\Storage;
<del>use Illuminate\Filesystem\FilesystemManager;
<del>use PHPUnit\Framework\ExpectationFailedException;
<del>
<del>class StorageFakeTest extends TestCase
<del>{
<del> protected $fake;
<del>
<del> protected function setUp()
<del> {
<del> parent::setUp();
<del> $app = new Application;
<del> $app['path.storage'] = __DIR__;
<del> $app['filesystem'] = new FilesystemManager($app);
<del> Storage::setFacadeApplication($app);
<del> Storage::fake('testing');
<del> $this->fake = Storage::disk('testing');
<del> }
<del>
<del> public function testAssertExists()
<del> {
<del> $this->expectException(ExpectationFailedException::class);
<del>
<del> $this->fake->assertExists('letter.txt');
<del> }
<del>
<del> public function testAssertMissing()
<del> {
<del> $this->fake->put('letter.txt', 'hi');
<del>
<del> $this->expectException(ExpectationFailedException::class);
<del>
<del> $this->fake->assertMissing('letter.txt');
<del> }
<del>}
| 2
|
Javascript
|
Javascript
|
add regression test for immediate socket errors
|
2cf4882136b2c17c37aeb7b793a3040e64936b9f
|
<ide><path>test/parallel/test-http-client-immediate-error.js
<add>'use strict';
<add>
<add>// Make sure http.request() can catch immediate errors in
<add>// net.createConnection().
<add>
<add>const common = require('../common');
<add>const assert = require('assert');
<add>const http = require('http');
<add>const req = http.get({ host: '127.0.0.1', port: 1 });
<add>req.on('error', common.mustCall((err) => {
<add> assert.strictEqual(err.code, 'ECONNREFUSED');
<add>}));
| 1
|
Java
|
Java
|
fix javadoc errors in annotatedelementutils
|
11221f5ccb7c87316fec1d2960f9468446976f7c
|
<ide><path>spring-core/src/main/java/org/springframework/core/annotation/AnnotatedElementUtils.java
<ide> /*
<del> * Copyright 2002-2015 the original author or authors.
<add> * Copyright 2002-2016 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> public Object process(AnnotatedElement annotatedElement, Annotation annotation,
<ide> /**
<ide> * Determine if the supplied {@link AnnotatedElement} is annotated with
<ide> * a <em>composed annotation</em> that is meta-annotated with an
<del> * annotation of the specified {@code annotationName}.
<add> * annotation of the specified {@code annotationType}.
<ide> * <p>This method follows <em>get semantics</em> as described in the
<ide> * {@linkplain AnnotatedElementUtils class-level javadoc}.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the meta-annotation type to find
<ide> * @return {@code true} if a matching meta-annotation is present
<ide> * @since 4.2.3
<ide> * @see #getMetaAnnotationTypes
<ide> public Boolean process(AnnotatedElement annotatedElement, Annotation annotation,
<ide> }
<ide>
<ide> /**
<del> * Determine if an annotation of the specified {@code annotationName}
<add> * Determine if an annotation of the specified {@code annotationType}
<ide> * is <em>present</em> on the supplied {@link AnnotatedElement} or
<ide> * within the annotation hierarchy <em>above</em> the specified element.
<ide> * <p>If this method returns {@code true}, then {@link #getMergedAnnotationAttributes}
<ide> * will return a non-null value.
<ide> * <p>This method follows <em>get semantics</em> as described in the
<ide> * {@linkplain AnnotatedElementUtils class-level javadoc}.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @return {@code true} if a matching annotation is present
<ide> * @since 4.2.3
<ide> */
<ide> public static <A extends Annotation> A findMergedAnnotation(AnnotatedElement ele
<ide> }
<ide>
<ide> /**
<del> * Find the first annotation of the specified {@code annotationName} within
<add> * Find the first annotation of the specified {@code annotationType} within
<ide> * the annotation hierarchy <em>above</em> the supplied {@code element} and
<ide> * merge that annotation's attributes with <em>matching</em> attributes from
<ide> * annotations in lower levels of the annotation hierarchy.
<ide> public static <A extends Annotation> A findMergedAnnotation(AnnotatedElement ele
<ide> * <p>In contrast to {@link #getAllAnnotationAttributes}, the search
<ide> * algorithm used by this method will stop searching the annotation
<ide> * hierarchy once the first annotation of the specified
<del> * {@code annotationName} has been found. As a consequence, additional
<del> * annotations of the specified {@code annotationName} will be ignored.
<add> * {@code annotationType} has been found. As a consequence, additional
<add> * annotations of the specified {@code annotationType} will be ignored.
<ide> * <p>This method follows <em>find semantics</em> as described in the
<ide> * {@linkplain AnnotatedElementUtils class-level javadoc}.
<ide> * @param element the annotated element
<ide> public Void process(AnnotatedElement annotatedElement, Annotation annotation, in
<ide> }
<ide>
<ide> /**
<del> * Search for annotations of the specified {@code annotationName} on
<del> * the specified {@code element}, following <em>get semantics</em>.
<add> * Search for annotations of the specified {@code annotationName} or
<add> * {@code annotationType} on the specified {@code element}, following
<add> * <em>get semantics</em>.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @param annotationName the fully qualified class name of the annotation
<ide> * type to find (as an alternative to {@code annotationType})
<ide> * @param processor the processor to delegate to
<ide> private static <T> T searchWithGetSemantics(AnnotatedElement element,
<ide> * <p>The {@code metaDepth} parameter is explained in the
<ide> * {@link Processor#process process()} method of the {@link Processor} API.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @param annotationName the fully qualified class name of the annotation
<ide> * type to find (as an alternative to {@code annotationType})
<ide> * @param processor the processor to delegate to
<ide> private static <T> T searchWithGetSemantics(AnnotatedElement element,
<ide> * @param annotatedElement the element that is annotated with the supplied
<ide> * annotations, used for contextual logging; may be {@code null} if unknown
<ide> * @param annotations the annotations to search in
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @param annotationName the fully qualified class name of the annotation
<ide> * type to find (as an alternative to {@code annotationType})
<ide> * @param processor the processor to delegate to
<ide> private static <T> T searchWithGetSemanticsInAnnotations(AnnotatedElement annota
<ide> }
<ide>
<ide> /**
<del> * Search for annotations of the specified {@code annotationName} on
<del> * the specified {@code element}, following <em>find semantics</em>.
<add> * Search for annotations of the specified {@code annotationName} or
<add> * {@code annotationType} on the specified {@code element}, following
<add> * <em>find semantics</em>.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @param annotationName the fully qualified class name of the annotation
<ide> * type to find (as an alternative to {@code annotationType})
<ide> * @param processor the processor to delegate to
<ide> private static <T> T searchWithFindSemantics(
<ide> * <p>The {@code metaDepth} parameter is explained in the
<ide> * {@link Processor#process process()} method of the {@link Processor} API.
<ide> * @param element the annotated element
<del> * @param annotationType the annotation type on which to find meta-annotations
<add> * @param annotationType the annotation type to find
<ide> * @param annotationName the fully qualified class name of the annotation
<ide> * type to find (as an alternative to {@code annotationType})
<ide> * @param processor the processor to delegate to
<ide> public final void postProcess(AnnotatedElement annotatedElement, Annotation anno
<ide> * annotation attributes from lower levels in the annotation hierarchy
<ide> * during the {@link #postProcess} phase.
<ide> * @since 4.2
<del> * @see AnnotationUtils#retrieveAnnotationAttributes(AnnotatedElement, Annotation, boolean, boolean)
<add> * @see AnnotationUtils#retrieveAnnotationAttributes
<ide> * @see AnnotationUtils#postProcessAnnotationAttributes
<ide> */
<ide> private static class MergedAnnotationAttributesProcessor implements Processor<AnnotationAttributes> {
| 1
|
Go
|
Go
|
fix golint errors
|
e04375fb8c2f08da158cce21c1591d39d2e68242
|
<ide><path>distribution/errors_test.go
<ide> import (
<ide> "github.com/docker/distribution/registry/client"
<ide> )
<ide>
<del>var always_continue = []error{
<add>var alwaysContinue = []error{
<ide> &client.UnexpectedHTTPResponseError{},
<ide>
<ide> // Some errcode.Errors that don't disprove the existence of a V1 image
<ide> var always_continue = []error{
<ide> errors.New("some totally unexpected error"),
<ide> }
<ide>
<del>var continue_from_mirror_endpoint = []error{
<add>var continueFromMirrorEndpoint = []error{
<ide> ImageConfigPullError{},
<ide>
<ide> // Some other errcode.Error that doesn't indicate we should search for a V1 image.
<ide> errcode.Error{Code: errcode.ErrorCodeTooManyRequests},
<ide> }
<ide>
<del>var never_continue = []error{
<add>var neverContinue = []error{
<ide> errors.New(strings.ToLower(syscall.ESRCH.Error())), // No such process
<ide> }
<ide>
<ide> func TestContinueOnError_NonMirrorEndpoint(t *testing.T) {
<del> for _, err := range always_continue {
<add> for _, err := range alwaysContinue {
<ide> if !continueOnError(err, false) {
<ide> t.Errorf("Should continue from non-mirror endpoint: %T: '%s'", err, err.Error())
<ide> }
<ide> }
<ide>
<del> for _, err := range continue_from_mirror_endpoint {
<add> for _, err := range continueFromMirrorEndpoint {
<ide> if continueOnError(err, false) {
<ide> t.Errorf("Should only continue from mirror endpoint: %T: '%s'", err, err.Error())
<ide> }
<ide> func TestContinueOnError_NonMirrorEndpoint(t *testing.T) {
<ide>
<ide> func TestContinueOnError_MirrorEndpoint(t *testing.T) {
<ide> errs := []error{}
<del> errs = append(errs, always_continue...)
<del> errs = append(errs, continue_from_mirror_endpoint...)
<add> errs = append(errs, alwaysContinue...)
<add> errs = append(errs, continueFromMirrorEndpoint...)
<ide> for _, err := range errs {
<ide> if !continueOnError(err, true) {
<ide> t.Errorf("Should continue from mirror endpoint: %T: '%s'", err, err.Error())
<ide> func TestContinueOnError_MirrorEndpoint(t *testing.T) {
<ide> }
<ide>
<ide> func TestContinueOnError_NeverContinue(t *testing.T) {
<del> for _, is_mirror_endpoint := range []bool{true, false} {
<del> for _, err := range never_continue {
<del> if continueOnError(err, is_mirror_endpoint) {
<add> for _, isMirrorEndpoint := range []bool{true, false} {
<add> for _, err := range neverContinue {
<add> if continueOnError(err, isMirrorEndpoint) {
<ide> t.Errorf("Should never continue: %T: '%s'", err, err.Error())
<ide> }
<ide> }
| 1
|
Javascript
|
Javascript
|
move hikes store to main store
|
9ba5d2c44826df6ac6b3933b4ae299c765bf7609
|
<ide><path>common/app/flux/Store.js
<ide> const initValue = {
<ide> title: 'Learn To Code | Free Code Camp',
<ide> username: null,
<ide> picture: null,
<del> points: 0
<add> points: 0,
<add> hikesApp: {
<add> hikes: [],
<add> currentHikes: {}
<add> }
<ide> };
<ide>
<ide> export default Store({
<ide> export default Store({
<ide> init({ instance: appStore, args: [cat] }) {
<ide> const { updateRoute, setUser, setTitle } = cat.getActions('appActions');
<ide> const register = createRegistrar(appStore);
<add> let { setHikes } = cat.getActions('hikesActions');
<ide>
<add> // app
<ide> register(setter(fromMany(setUser, setTitle, updateRoute)));
<ide>
<add> // hikes
<add> register(setHikes);
<add>
<ide> return appStore;
<ide> }
<ide> });
<ide><path>common/app/routes/Hikes/flux/Actions.js
<ide> export default Actions({
<ide> ({ isPrimed, dashedName }) => {
<ide> if (isPrimed) {
<ide> return hikeActions.setHikes({
<del> transform: (oldState) => {
<del> const { hikes } = oldState;
<add> transform: (state) => {
<add>
<add> const { hikesApp: oldState } = state;
<ide> const currentHike = getCurrentHike(
<del> hikes,
<add> oldState.hikes,
<ide> dashedName,
<ide> oldState.currentHike
<ide> );
<del> return Object.assign({}, oldState, { currentHike });
<add>
<add> const hikesApp = { ...oldState, currentHike };
<add> return Object.assign({}, state, { hikesApp });
<ide> }
<ide> });
<ide> }
<add>
<ide> services.read('hikes', null, null, (err, hikes) => {
<ide> if (err) {
<del> debug('an error occurred fetching hikes', err);
<add> return console.error(err);
<ide> }
<add>
<add> const hikesApp = {
<add> hikes,
<add> currentHike: getCurrentHike(hikes, dashedName)
<add> };
<add>
<ide> hikeActions.setHikes({
<del> set: {
<del> hikes: hikes,
<del> currentHike: getCurrentHike(hikes, dashedName)
<add> transform(oldState) {
<add> return Object.assign({}, oldState, { hikesApp });
<ide> }
<ide> });
<ide> });
<ide><path>common/app/routes/Hikes/flux/Store.js
<del>import { Store } from 'thundercats';
<del>
<del>const initialValue = {
<del> hikes: [],
<del> currentHike: {}
<del>};
<del>
<del>export default Store({
<del> refs: {
<del> displayName: 'HikesStore',
<del> value: initialValue
<del> },
<del> init({ instance: hikeStore, args: [cat] }) {
<del>
<del> let { setHikes } = cat.getActions('hikesActions');
<del> hikeStore.register(setHikes);
<del>
<del> return hikeStore;
<del> }
<del>});
| 3
|
Java
|
Java
|
avoid npe in autowiredannotationbeanpostprocessor
|
2624b909060e0967e16771de7a35261decd5a4a9
|
<ide><path>spring-beans/src/main/java/org/springframework/beans/factory/annotation/AutowiredAnnotationBeanPostProcessor.java
<ide> /*
<del> * Copyright 2002-2011 the original author or authors.
<add> * Copyright 2002-2012 the original author or authors.
<ide> *
<ide> * Licensed under the Apache License, Version 2.0 (the "License");
<ide> * you may not use this file except in compliance with the License.
<ide> protected <T> Map<String, T> findAutowireCandidates(Class<T> type) throws BeansE
<ide> protected boolean determineRequiredStatus(Annotation annotation) {
<ide> try {
<ide> Method method = ReflectionUtils.findMethod(annotation.annotationType(), this.requiredParameterName);
<add> if (method == null) {
<add> // annotations like @Inject, @Value and @Resource don't have a method
<add> // (attribute) named "required" -> default to required status
<add> return true;
<add> }
<ide> return (this.requiredParameterValue == (Boolean) ReflectionUtils.invokeMethod(method, annotation));
<ide> }
<ide> catch (Exception ex) {
<del> // required by default
<add> // an exception was thrown during reflective invocation of the required
<add> // attribute -> default to required status
<ide> return true;
<ide> }
<ide> }
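The null guard in the Java fix above encodes a simple policy: look up an attribute named "required" on the annotation, and default to required when the attribute does not exist. In Python the same policy collapses to a `getattr` with a default; the annotation classes below are illustrative stand-ins, not Spring types:

```python
class Autowired:
    """Stand-in for an annotation that carries a 'required' attribute."""
    def __init__(self, required=True):
        self.required = required

class Inject:
    """Stand-in for annotations like @Inject/@Resource with no
    'required' attribute at all."""
    pass

def is_required(annotation):
    # Missing attribute -> default to required, mirroring the Java fix
    # where a null Method falls back to returning true.
    return bool(getattr(annotation, "required", True))

assert is_required(Autowired(required=False)) is False
assert is_required(Autowired()) is True
assert is_required(Inject()) is True
```

The point of the original bug is visible here: without the explicit default, looking up the attribute on `Inject` would raise instead of falling back, which is the NPE the patch avoids.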
| 1
|
Ruby
|
Ruby
|
use real assigns instead of a method call [dhh]
|
35b74de7703515cf93da2d3e850702f58a2a6f48
|
<ide><path>actionpack/lib/action_view/helpers/prototype_helper.rb
<ide> def initialize(generator, root)
<ide> @generator = generator
<ide> @generator << root
<ide> end
<del>
<del> def assign(variable, value)
<del> append_to_function_chain! "#{variable} = #{@generator.send(:javascript_object_for, value)}"
<del> end
<ide>
<ide> def replace_html(*options_for_render)
<ide> call 'update', @generator.render(*options_for_render)
<ide> def replace(*options_for_render)
<ide> end
<ide>
<ide> private
<add> def method_missing(method, *arguments)
<add> if method.to_s =~ /(.*)=$/
<add> assign($1, arguments.first)
<add> else
<add> call(method, *arguments)
<add> end
<add> end
<add>
<ide> def call(function, *arguments)
<ide> append_to_function_chain!("#{function}(#{@generator.send(:arguments_for_call, arguments)})")
<ide> self
<ide> end
<del>
<del> alias_method :method_missing, :call
<add>
<add> def assign(variable, value)
<add> append_to_function_chain! "#{variable} = #{@generator.send(:javascript_object_for, value)}"
<add> end
<ide>
<ide> def function_chain
<ide> @function_chain ||= @generator.instance_variable_get("@lines")
<ide><path>actionpack/test/template/prototype_helper_test.rb
<ide> def test_element_proxy_one_deep
<ide> end
<ide>
<ide> def test_element_proxy_assignment
<del> @generator['hello'].assign :width, 400
<add> @generator['hello'].width = 400
<ide> assert_equal %($('hello').width = 400;), @generator.to_s
<ide> end
<ide>
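The `method_missing` trick in the Rails patch above — intercept names ending in `=` as assignments and treat everything else as a call — has a close Python analog via `__setattr__` and `__getattr__`. The sketch below records generated JavaScript statements the way `JavaScriptElementProxy` does; the rendering details are simplified and the class name is hypothetical:

```python
import json

class ElementProxy:
    """Record property assignments and method calls against a JS element,
    mirroring the Rails method_missing dispatch."""
    def __init__(self, element_id):
        # Bypass our own __setattr__ while initializing state.
        object.__setattr__(self, "_id", element_id)
        object.__setattr__(self, "lines", [])

    def __setattr__(self, name, value):
        # proxy.width = 400  ->  $('hello').width = 400;
        self.lines.append(f"$({self._id!r}).{name} = {json.dumps(value)};")

    def __getattr__(self, name):
        # Any other attribute access becomes a recorded method call.
        def call(*args):
            rendered = ", ".join(json.dumps(a) for a in args)
            self.lines.append(f"$({self._id!r}).{name}({rendered});")
            return self
        return call

el = ElementProxy("hello")
el.width = 400
el.show()
# el.lines is now ["$('hello').width = 400;", "$('hello').show();"]
```

As in the Ruby version, the assignment branch must be checked before falling through to the generic call path; Python gets this for free because attribute writes and reads go through separate hooks.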
| 2
|
Text
|
Text
|
drop support for vs2015
|
c5a49e148d3293eb9e8c17a15cb8c876977f76af
|
<ide><path>BUILDING.md
<ide> in production.
<ide> |--------------|--------------|----------------------------------|----------------------|------------------|
<ide> | GNU/Linux | Tier 1 | kernel >= 2.6.32, glibc >= 2.12 | x86, x64, arm, arm64 | |
<ide> | macOS | Tier 1 | >= 10.10 | x64 | |
<del>| Windows | Tier 1 | >= Windows 7 / 2008 R2 | x86, x64 | vs2015 or vs2017 |
<add>| Windows | Tier 1 | >= Windows 7 / 2008 R2 | x86, x64 | vs2017 |
<ide> | SmartOS | Tier 2 | >= 15 < 16.4 | x86, x64 | see note1 |
<ide> | FreeBSD | Tier 2 | >= 10 | x64 | |
<ide> | GNU/Linux | Tier 2 | kernel >= 3.13.0, glibc >= 2.19 | ppc64le >=power8 | |
<ide> Depending on host platform, the selection of toolchains may vary.
<ide>
<ide> #### Windows
<ide>
<del>* Visual Studio 2015 or Visual C++ Build Tools 2015 or newer
<add>* Visual Studio 2017 or the Visual Studio 2017 Build Tools
<ide>
<ide> ## Building Node.js on supported platforms
<ide>
<ide> $ [sudo] make install
<ide> Prerequisites:
<ide>
<ide> * [Python 2.6 or 2.7](https://www.python.org/downloads/)
<del>* One of:
<del> * [Visual C++ Build Tools](http://landinghub.visualstudio.com/visual-cpp-build-tools)
<del> * [Visual Studio 2015 Update 3](https://www.visualstudio.com/), all editions
<del> including the Community edition (remember to select
<del> "Common Tools for Visual C++ 2015" feature during installation).
<del> * The "Desktop development with C++" workload from
<del> [Visual Studio 2017](https://www.visualstudio.com/downloads/) or the
<del> "Visual C++ build tools" workload from the
<del> [Build Tools](https://www.visualstudio.com/downloads/#build-tools-for-visual-studio-2017),
<del> with the default optional components.
<add>* The "Desktop development with C++" workload from
<add> [Visual Studio 2017](https://www.visualstudio.com/downloads/) or the
<add> "Visual C++ build tools" workload from the
<add> [Build Tools](https://www.visualstudio.com/downloads/#build-tools-for-visual-studio-2017),
<add> with the default optional components.
<ide> * Basic Unix tools required for some tests,
<ide> [Git for Windows](http://git-scm.com/download/win) includes Git Bash
<ide> and tools which can be included in the global `PATH`.
| 1
|
Text
|
Text
|
update changelog [ci skip]
|
b04e000d4be12f5ae36f36e9938e611fbe8b3992
|
<ide><path>CHANGELOG.md
<ide> - [#17872](https://github.com/emberjs/ember.js/pull/17872) [BUGFIX] Fix issue where `{{link-to}}` is causing unexpected local variable shadowing assertions.
<ide> - [#17874](https://github.com/emberjs/ember.js/pull/17874) [BUGFIX] Fix issue with `event.stopPropagation()` in component event handlers when jQuery is disabled.
<ide> - [#17876](https://github.com/emberjs/ember.js/pull/17876) [BUGFIX] Fix issue with multiple `{{action}}` modifiers on the same element when jQuery is disabled.
<add>- [#17841](https://github.com/emberjs/ember.js/pull/17841) [BUGFIX] Ensure `@sort` works on non-`Ember.Object`s.
<add>- [#17855](https://github.com/emberjs/ember.js/pull/17855) [BUGFIX] Expose (private) computed `_getter` functions.
<add>- [#17860](https://github.com/emberjs/ember.js/pull/17860) [BUGFIX] Add assertions for required parameters in computed macros, when used as a decorator.
<add>- [#17868](https://github.com/emberjs/ember.js/pull/17868) [BUGFIX] Fix controller injection via decorators.
<ide>
<ide> ### v3.10.0-beta.1 (April 02, 2019)
<ide>
| 1
|
Text
|
Text
|
add addon api (nan) to working group list
|
5178f93bc0f5b4e16cb68001e6f4684b4fd4ed7c
|
<ide><path>WORKING_GROUPS.md
<ide> back in to the TC.
<ide> * [Evangelism](#evangelism)
<ide> * [Roadmap](#roadmap)
<ide> * [Docker](#docker)
<add>* [Addon API](#addon-api)
<ide> * [Starting a Working Group](#starting-a-wg)
<ide> * [Bootstrap Governance](#bootstrap-governance)
<ide>
<ide> Their responsibilities are:
<ide> * Maintain and improve the images' documentation.
<ide>
<ide>
<del>### Addon API
<add>### [Addon API](https://github.com/iojs/nan)
<ide>
<ide> The Addon API Working Group is responsible for maintaining the NAN project and
<ide> corresponding _nan_ package in npm. The NAN project makes available an
<ide> versions of Node.js, io.js, V8 and libuv.
<ide>
<ide> Their responsibilities are:
<ide>
<del>* Maintaining the [NAN](https://github.com/rvagg/nan) GitHub repository,
<add>* Maintaining the [NAN](https://github.com/iojs/nan) GitHub repository,
<ide> including code, issues and documentation.
<del>* Maintaining the [addon-examples](https://github.com/rvagg/node-addon-examples)
<add>* Maintaining the [addon-examples](https://github.com/iojs/node-addon-examples)
<ide> GitHub repository, including code, issues and documentation.
<ide> * Maintaining the C++ Addon API within the io.js project, in subordination to
<ide> the io.js TC.
<ide> Their responsibilities are:
<ide> community advance notice of changes.
<ide>
<ide> The current members can be found in their
<del>[README](https://github.com/rvagg/nan#collaborators).
<add>[README](https://github.com/iojs/nan#collaborators).
<ide>
<ide> ## Starting a WG
<ide>
| 1
|
Ruby
|
Ruby
|
raise an error if `direct` is inside a scope block
|
80dcfd014b27e560f5c4b07ee5ffa98894d8ff63
|
<ide><path>actionpack/lib/action_dispatch/routing/mapper.rb
<ide> module DirectUrls
<ide> # array passed to `polymorphic_url` is a hash then it's treated as options
<ide> # to the url helper that gets called.
<ide> #
<del> # NOTE: The `direct` method doesn't observe the current scope in routes.rb
<del> # and because of this it's recommended to define them outside of any blocks
<del> # such as `namespace` or `scope`.
<add> # NOTE: The `direct` method can't be used inside a scope block such as
<add> # `namespace` or `scope` and will raise an error if it detects that it is.
<ide> def direct(name_or_hash, options = nil, &block)
<add> unless @scope.root?
<add> raise RuntimeError, "The direct method can't be used inside a routes scope block"
<add> end
<add>
<ide> case name_or_hash
<ide> when Hash
<ide> @set.add_polymorphic_mapping(name_or_hash, &block)
<ide> def nested?
<ide> scope_level == :nested
<ide> end
<ide>
<add> def null?
<add> @hash.nil? && @parent.nil?
<add> end
<add>
<add> def root?
<add> @parent.null?
<add> end
<add>
<ide> def resources?
<ide> scope_level == :resources
<ide> end
<ide><path>actionpack/test/dispatch/routing/direct_url_helpers_test.rb
<ide> def test_missing_class_raises_argument_error
<ide> end
<ide> end
<ide> end
<add>
<add> def test_defining_inside_a_scope_raises_runtime_error
<add> routes = ActionDispatch::Routing::RouteSet.new
<add>
<add> assert_raises RuntimeError do
<add> routes.draw do
<add> namespace :admin do
<add> direct(:rubyonrails) { "http://www.rubyonrails.org" }
<add> end
<add> end
<add> end
<add> end
<ide> end
| 2
|
Ruby
|
Ruby
|
pass default value as argument to fetch
|
13688cf8a94c5248d7848f86c16abeac82d88e6e
|
<ide><path>actionpack/lib/action_view/helpers/tags/base.rb
<ide> def select_content_tag(option_tags, options, html_options)
<ide> add_default_name_and_id(html_options)
<ide> select = content_tag("select", add_options(option_tags, options, value(object)), html_options)
<ide>
<del> if html_options["multiple"] && options.fetch(:include_hidden) { true }
<add> if html_options["multiple"] && options.fetch(:include_hidden, true)
<ide> tag("input", :disabled => html_options["disabled"], :name => html_options["name"], :type => "hidden", :value => "") + select
<ide> else
<ide> select
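The change above swaps Ruby's block form of `fetch` for the positional-default form: both return the default only when the key is absent, and the block form is only worth keeping when computing the default is expensive. Python's `dict.get` behaves like the positional form, as a quick sketch shows:

```python
options = {"include_hidden": False}

# Positional default: used only when the key is missing,
# never when the stored value is falsy.
assert options.get("include_hidden", True) is False
assert {}.get("include_hidden", True) is True

# The analog of Ruby's block form of fetch: defer an expensive
# default explicitly so it is only computed on a miss.
def lazy_default():
    return True

value = (options["include_hidden"]
         if "include_hidden" in options
         else lazy_default())
assert value is False
```

Since the default here is the literal `true`, the positional form in the patch is the clearer choice; laziness buys nothing.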
| 1
|
Javascript
|
Javascript
|
remove shims for now
|
316667fb9b77f02a712d58361c6d02b91af09a5d
|
<ide><path>index.js
<ide> function add(paths, name, path) {
<ide>
<ide> add(paths, 'prod', 'vendor/ember/ember.prod.js');
<ide> add(paths, 'debug', 'vendor/ember/ember.debug.js');
<del>add(paths, 'shims', 'vendor/ember/shims.js');
<ide> add(paths, 'jquery', 'vendor/ember/jquery/jquery.js');
<ide>
<ide> add(absolutePaths, 'templateCompiler', __dirname + '/dist/ember-template-compiler.js');
<ide> module.exports = {
<ide> ]
<ide> });
<ide>
<del> var shims = stew.find(__dirname + '/vendor/ember', {
<del> destDir: 'ember',
<del> files: [ 'shims.js' ]
<del> });
<del>
<ide> return stew.find([
<ide> ember,
<del> shims,
<ide> jquery
<ide> ]);
<ide> }
| 1
|
Text
|
Text
|
add note about overlay not being production ready
|
67cb748e26c3b6979983493f307a38b59e291642
|
<ide><path>docs/sources/reference/commandline/cli.md
<ide> The `overlay` is a very fast union filesystem. It is now merged in the main
<ide> Linux kernel as of [3.18.0](https://lkml.org/lkml/2014/10/26/137). Call
<ide> `docker -d -s overlay` to use it.
<ide>
<add>> **Note:**
<add>> As promising as `overlay` is, the feature is still quite young and should not
<add>> be used in production. Most notably, using `overlay` can cause excessive
<add>> inode consumption (especially as the number of images grows), as well as
<add>> being incompatible with the use of RPMs.
<add>
<ide> > **Note:**
<ide> > It is currently unsupported on `btrfs` or any Copy on Write filesystem
<ide> > and should only be used over `ext4` partitions.
| 1
|
Javascript
|
Javascript
|
add pan gesture recognizer
|
12213887e27c14c48926f85df51565609fa90b4d
|
<ide><path>packages/sproutcore-touch/lib/gesture_recognizers.js
<ide> require('sproutcore-touch/gesture_recognizers/pinch')
<add>require('sproutcore-touch/gesture_recognizers/pan')
<ide> require('sproutcore-touch/gesture_recognizers/tap')
<ide><path>packages/sproutcore-touch/lib/gesture_recognizers/pan.js
<add>// ==========================================================================
<add>// Project: SproutCore Runtime
<add>// Copyright: ©2011 Strobe Inc. and contributors.
<add>// License: Licensed under MIT license (see license.js)
<add>// ==========================================================================
<add>
<add>var get = SC.get;
<add>var set = SC.set;
<add>var x = 0;
<add>
<add>var sigFigs = 100;
<add>
<add>SC.PanGestureRecognizer = SC.Gesture.extend({
<add> numberOfTouches: 2,
<add>
<add> _initialLocation: null,
<add> _totalTranslation: null,
<add> _accumulated: null,
<add> _deltaThreshold: 0,
<add>
<add> init: function() {
<add> this._super();
<add>
<add> this._totalTranslation = {x:0,y:0};
<add> this._accumulated = {x:0,y:0};
<add> },
<add>
<add> _centerPointForTouches: function(first, second) {
<add> var location = {x: null, y: null};
<add>
<add> location.x = Math.round(((first.pageX + second.pageX) / 2)*sigFigs)/sigFigs;
<add> location.y = Math.round(((first.pageY + second.pageY) / 2)*sigFigs)/sigFigs;
<add>
<add> return location;
<add> },
<add>
<add> _logPoint: function(pre, point) {
<add> console.log(pre+' ('+point.x+','+point.y+')');
<add> },
<add>
<add> touchStart: function(evt, view) {
<add> var touches = evt.originalEvent.targetTouches;
<add> var len = touches.length;
<add>
<add> if (len < get(this, 'numberOfTouches')) {
<add> this.state = SC.Gesture.WAITING_FOR_TOUCHES;
<add> }
<add> else {
<add> this.state = SC.Gesture.POSSIBLE;
<add> this._initialLocation = this._centerPointForTouches(touches[0],touches[1]);
<add> }
<add>
<add> this.redispatchEventToView(view,'touchstart');
<add> },
<add>
<add> touchMove: function(evt, view) {
<add> var touches = evt.originalEvent.targetTouches;
<add> if(touches.length !== get(this, 'numberOfTouches')) { return; }
<add>
<add> var initial = this._initialLocation;
<add>
<add> var current = this._centerPointForTouches(touches[0],touches[1]);
<add>
<add> current.x -= initial.x;
<add> current.y -= initial.y;
<add>
<add> current.x = this._totalTranslation.x + current.x;
<add> current.y = this._totalTranslation.y + current.y;
<add>
<add> this._accumulated.x = current.x;
<add> this._accumulated.y = current.y;
<add>
<add> if (this.state === SC.Gesture.POSSIBLE) {
<add> this.state = SC.Gesture.BEGAN;
<add> this.notifyViewOfGestureEvent(view,'panStart', current);
<add>
<add> evt.preventDefault();
<add> }
<add> else if (this.state === SC.Gesture.BEGAN || this.state === SC.Gesture.CHANGED) {
<add> this.state = SC.Gesture.CHANGED;
<add> this.notifyViewOfGestureEvent(view,'panChange', current);
<add>
<add> evt.preventDefault();
<add> }
<add> else {
<add> this.redispatchEventToView(view,'touchmove');
<add> }
<add> },
<add>
<add> touchEnd: function(evt, view) {
<add> var touches = evt.originalEvent.targetTouches;
<add>
<add> if(touches.length !== 0) {
<add> this.redispatchEventToView(view,'touchend');
<add> return;
<add> }
<add>
<add> this._totalTranslation.x = this._accumulated.x;
<add> this._totalTranslation.y = this._accumulated.y;
<add>
<add> this._accumulated.x = 0;
<add> this._accumulated.y = 0;
<add>
<add> this.state = SC.Gesture.ENDED;
<add> },
<add>
<add> touchCancel: function(evt, view) {
<add> this.state = SC.Gesture.CANCELLED;
<add> this.redispatchEventToView(view,'touchcancel');
<add> }
<add>});
<add>
<add>SC.Gestures.register('pan', SC.PanGestureRecognizer);
| 2
|
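The `_centerPointForTouches` helper in the patch above averages two touch points and rounds each coordinate using a `sigFigs` factor defined elsewhere in the file. A rough Python sketch of the same midpoint-with-rounding math, assuming a rounding factor of 100 (note Python's `round` uses banker's rounding, so exact-.5 cases can differ from JavaScript's `Math.round`):

```python
def center_point(first, second, sig_figs=100):
    """Midpoint of two (x, y) touch points, rounded to 1/sig_figs precision."""
    x = round(((first[0] + second[0]) / 2) * sig_figs) / sig_figs
    y = round(((first[1] + second[1]) / 2) * sig_figs) / sig_figs
    return (x, y)

# Two touches at (10, 20) and (11, 25) yield the midpoint (10.5, 22.5).
```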
Javascript
|
Javascript
|
remove friendlyerrorswebpackplugin option
|
e1a231cd6807f06989f4440e84c373be2f169d6d
|
<ide><path>server/build/webpack.js
<ide> import { createHash } from 'crypto'
<ide> import webpack from 'webpack'
<ide> import glob from 'glob-promise'
<ide> import WriteFilePlugin from 'write-file-webpack-plugin'
<add>import FriendlyErrorsWebpackPlugin from 'friendly-errors-webpack-plugin'
<ide> import UnlinkFilePlugin from './plugins/unlink-file-plugin'
<ide> import WatchPagesPlugin from './plugins/watch-pages-plugin'
<ide> import WatchRemoveEventPlugin from './plugins/watch-remove-event-plugin'
<ide> import DynamicEntryPlugin from './plugins/dynamic-entry-plugin'
<ide> import DetachPlugin from './plugins/detach-plugin'
<del>import FriendlyErrorsWebpackPlugin from 'friendly-errors-webpack-plugin'
<ide>
<ide> export default async function createCompiler (dir, { hotReload = false, dev = false } = {}) {
<ide> dir = resolve(dir)
<ide> export default async function createCompiler (dir, { hotReload = false, dev = fa
<ide> new UnlinkFilePlugin(),
<ide> new WatchRemoveEventPlugin(),
<ide> new WatchPagesPlugin(dir),
<del> new FriendlyErrorsWebpackPlugin({
<del> // see https://github.com/geowarin/friendly-errors-webpack-plugin/pull/11
<del> clearConsole: true
<del> })
<add> new FriendlyErrorsWebpackPlugin()
<ide> )
<ide> }
<ide>
| 1
|
Ruby
|
Ruby
|
add a test case for exists? with multiple values
|
78befcfc287eae6dbfbd6287329b1cbcd9d77e3c
|
<ide><path>activerecord/test/cases/finder_test.rb
<ide> def test_exists
<ide> assert_equal true, Topic.exists?(heading: "The First Topic")
<ide> assert_equal true, Topic.exists?(:author_name => "Mary", :approved => true)
<ide> assert_equal true, Topic.exists?(["parent_id = ?", 1])
<add> assert_equal true, Topic.exists?(id: [1, 9999])
<ide>
<ide> assert_equal false, Topic.exists?(45)
<ide> assert_equal false, Topic.exists?(Topic.new)
| 1
|
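The new assertion exercises `exists?` with an array value (`id: [1, 9999]`), which ActiveRecord expands into a SQL `IN` predicate rather than an equality check. As an illustration only — not ActiveRecord's actual implementation — here is a sketch of how a scalar versus array value might map to a parameterized condition, using a hypothetical helper name:

```python
def exists_condition(column, value):
    """Build a parameterized WHERE fragment: '=' for scalars, 'IN' for lists."""
    if isinstance(value, (list, tuple)):
        placeholders = ", ".join(["?"] * len(value))
        return f"{column} IN ({placeholders})", list(value)
    return f"{column} = ?", [value]

# exists_condition("id", [1, 9999]) -> ("id IN (?, ?)", [1, 9999])
# exists_condition("id", 45)        -> ("id = ?", [45])
```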
Python
|
Python
|
add freebsd support
|
0f077a78ad3cc43e847bbd7d797b923d8bf8ea5c
|
<ide><path>tools/gyp/pylib/gyp/generator/make.py
<ide> def GetFlavor(params):
<ide> flavors = {
<ide> 'darwin': 'mac',
<ide> 'sunos5': 'solaris',
<add> 'freebsd7': 'freebsd',
<add> 'freebsd8': 'freebsd',
<ide> }
<ide> flavor = flavors.get(sys.platform, 'linux')
<ide> return params.get('flavor', flavor)
<ide> def CalculateMakefilePath(build_file, base_name):
<ide> 'flock_index': 2,
<ide> 'extra_commands': SHARED_HEADER_SUN_COMMANDS,
<ide> })
<add> elif flavor == 'freebsd':
<add> header_params.update({
<add> 'flock': 'lockf',
<add> })
<ide>
<ide> if flavor == 'android':
<ide> header_params.update({
| 1
|
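The gyp patch maps the versioned `sys.platform` strings `freebsd7`/`freebsd8` onto a single `freebsd` flavor before falling back to `linux`. A minimal standalone sketch of that lookup (the real `GetFlavor` reads `sys.platform` directly; a `platform` parameter is added here for testability):

```python
import sys

def get_flavor(params, platform=None):
    """Resolve a build flavor from the platform string, defaulting to 'linux'."""
    flavors = {
        'darwin': 'mac',
        'sunos5': 'solaris',
        'freebsd7': 'freebsd',
        'freebsd8': 'freebsd',
    }
    flavor = flavors.get(platform or sys.platform, 'linux')
    # An explicit 'flavor' entry in params always wins.
    return params.get('flavor', flavor)

# get_flavor({}, 'freebsd8') -> 'freebsd'; get_flavor({}, 'win32') -> 'linux'
```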
Java
|
Java
|
add requestbuilder for async dispatches
|
c348be25116dfdf560763f4efbf3b07077e9de4f
|
<ide><path>spring-test-mvc/src/main/java/org/springframework/test/web/servlet/request/MockMvcRequestBuilders.java
<ide> */
<ide> package org.springframework.test.web.servlet.request;
<ide>
<add>import java.lang.reflect.Method;
<add>
<add>import javax.servlet.ServletContext;
<add>
<ide> import org.springframework.http.HttpMethod;
<add>import org.springframework.mock.web.MockHttpServletRequest;
<add>import org.springframework.test.web.servlet.MvcResult;
<ide> import org.springframework.test.web.servlet.RequestBuilder;
<add>import org.springframework.util.ReflectionUtils;
<ide>
<ide> /**
<ide> * Static factory methods for {@link RequestBuilder}s.
<ide> public static MockMultipartHttpServletRequestBuilder fileUpload(String urlTempla
<ide> return new MockMultipartHttpServletRequestBuilder(urlTemplate, urlVariables);
<ide> }
<ide>
<add> /**
<add> * Create a {@link RequestBuilder} for an async dispatch from the
<add> * {@link MvcResult} of the request that started async processing.
<add> *
<add> * <p>Usage involves performing one request first that starts async processing:
<add> * <pre>
<add> * MvcResult mvcResult = this.mockMvc.perform(get("/1"))
<add> * .andExpect(request().asyncStarted())
<add> * .andReturn();
<add> * </pre>
<add> *
<add> * <p>And then performing the async dispatch re-using the {@code MvcResult}:
<add> * <pre>
<add> * this.mockMvc.perform(asyncDispatch(mvcResult))
<add> * .andExpect(status().isOk())
<add> * .andExpect(content().contentType(MediaType.APPLICATION_JSON))
<add> * .andExpect(content().string("{\"name\":\"Joe\",\"someDouble\":0.0,\"someBoolean\":false}"));
<add> * </pre>
<add> *
<add> * @param mvcResult the result from the request that started async processing
<add> */
<add> public static RequestBuilder asyncDispatch(final MvcResult mvcResult) {
<add> return new RequestBuilder() {
<add> public MockHttpServletRequest buildRequest(ServletContext servletContext) {
<add> MockHttpServletRequest request = mvcResult.getRequest();
<add> Method method = ReflectionUtils.findMethod(request.getClass(), "setAsyncStarted", boolean.class);
<add> method.setAccessible(true);
<add> ReflectionUtils.invokeMethod(method, request, false);
<add> return request;
<add> }
<add> };
<add> }
<add>
<ide> }
<ide><path>spring-test-mvc/src/test/java/org/springframework/test/web/servlet/samples/standalone/AsyncTests.java
<ide> */
<ide> package org.springframework.test.web.servlet.samples.standalone;
<ide>
<add>import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.asyncDispatch;
<ide> import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
<add>import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
<ide> import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.request;
<ide> import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
<ide> import static org.springframework.test.web.servlet.setup.MockMvcBuilders.standaloneSetup;
<ide>
<add>import java.util.Collection;
<ide> import java.util.concurrent.Callable;
<add>import java.util.concurrent.CopyOnWriteArrayList;
<ide>
<ide> import org.junit.Before;
<ide> import org.junit.Test;
<add>import org.springframework.http.MediaType;
<ide> import org.springframework.stereotype.Controller;
<ide> import org.springframework.test.web.Person;
<ide> import org.springframework.test.web.servlet.MockMvc;
<add>import org.springframework.test.web.servlet.MvcResult;
<add>import org.springframework.ui.Model;
<ide> import org.springframework.web.bind.annotation.RequestMapping;
<add>import org.springframework.web.bind.annotation.ResponseBody;
<ide> import org.springframework.web.context.request.async.DeferredResult;
<ide>
<ide> /**
<ide> public class AsyncTests {
<ide>
<ide> private MockMvc mockMvc;
<ide>
<add> private AsyncController asyncController;
<add>
<add>
<ide> @Before
<ide> public void setup() {
<del> this.mockMvc = standaloneSetup(new AsyncController()).build();
<add> this.asyncController = new AsyncController();
<add> this.mockMvc = standaloneSetup(this.asyncController).build();
<ide> }
<ide>
<ide> @Test
<del> public void testDeferredResult() throws Exception {
<del> this.mockMvc.perform(get("/1").param("deferredResult", "true"))
<add> public void testCallable() throws Exception {
<add> MvcResult mvcResult = this.mockMvc.perform(get("/1").param("callable", "true"))
<add> .andExpect(request().asyncStarted())
<add> .andExpect(request().asyncResult(new Person("Joe")))
<add> .andReturn();
<add>
<add> this.mockMvc.perform(asyncDispatch(mvcResult))
<ide> .andExpect(status().isOk())
<del> .andExpect(request().asyncStarted());
<add> .andExpect(content().contentType(MediaType.APPLICATION_JSON))
<add> .andExpect(content().string("{\"name\":\"Joe\",\"someDouble\":0.0,\"someBoolean\":false}"));
<ide> }
<ide>
<ide> @Test
<del> public void testCallable() throws Exception {
<del> this.mockMvc.perform(get("/1").param("callable", "true"))
<del> .andExpect(status().isOk())
<add> public void testDeferredResult() throws Exception {
<add> MvcResult mvcResult = this.mockMvc.perform(get("/1").param("deferredResult", "true"))
<ide> .andExpect(request().asyncStarted())
<del> .andExpect(request().asyncResult(new Person("Joe")));
<add> .andReturn();
<add>
<add> this.asyncController.onMessage("Joe");
<add>
<add> this.mockMvc.perform(asyncDispatch(mvcResult))
<add> .andExpect(status().isOk())
<add> .andExpect(content().contentType(MediaType.APPLICATION_JSON))
<add> .andExpect(content().string("{\"name\":\"Joe\",\"someDouble\":0.0,\"someBoolean\":false}"));
<ide> }
<ide>
<ide>
<ide> @Controller
<ide> private static class AsyncController {
<ide>
<del> @RequestMapping(value="/{id}", params="deferredResult", produces="application/json")
<del> public DeferredResult<Person> getDeferredResult() {
<del> return new DeferredResult<Person>();
<del> }
<add> private Collection<DeferredResult<Person>> deferredResults = new CopyOnWriteArrayList<DeferredResult<Person>>();
<add>
<ide>
<ide> @RequestMapping(value="/{id}", params="callable", produces="application/json")
<del> public Callable<Person> getCallable() {
<add> @ResponseBody
<add> public Callable<Person> getCallable(final Model model) {
<ide> return new Callable<Person>() {
<ide> public Person call() throws Exception {
<ide> return new Person("Joe");
<ide> }
<ide> };
<ide> }
<ide>
<add> @RequestMapping(value="/{id}", params="deferredResult", produces="application/json")
<add> @ResponseBody
<add> public DeferredResult<Person> getDeferredResult() {
<add> DeferredResult<Person> deferredResult = new DeferredResult<Person>();
<add> this.deferredResults.add(deferredResult);
<add> return deferredResult;
<add> }
<add>
<add> public void onMessage(String name) {
<add> for (DeferredResult<Person> deferredResult : this.deferredResults) {
<add> deferredResult.setResult(new Person(name));
<add> this.deferredResults.remove(deferredResult);
<add> }
<add> }
<ide> }
<ide>
<ide> }
| 2
|
Go
|
Go
|
fix logrus formatting
|
a72b45dbec3caeb3237d1af5aedd04adeb083571
|
<ide><path>api/client/hijack.go
<ide> func (cli *DockerCli) HoldHijackedConnection(ctx context.Context, tty bool, inpu
<ide> _, err = stdcopy.StdCopy(outputStream, errorStream, resp.Reader)
<ide> }
<ide>
<del> logrus.Debugf("[hijack] End of stdout")
<add> logrus.Debug("[hijack] End of stdout")
<ide> receiveStdout <- err
<ide> }()
<ide> }
<ide> func (cli *DockerCli) HoldHijackedConnection(ctx context.Context, tty bool, inpu
<ide> cli.restoreTerminal(inputStream)
<ide> })
<ide> }
<del> logrus.Debugf("[hijack] End of stdin")
<add> logrus.Debug("[hijack] End of stdin")
<ide> }
<ide>
<ide> if err := resp.CloseWrite(); err != nil {
<ide><path>api/server/server.go
<ide> func (s *Server) InitRouter(enableProfiler bool, routers ...router.Router) {
<ide> func (s *Server) createMux() *mux.Router {
<ide> m := mux.NewRouter()
<ide>
<del> logrus.Debugf("Registering routers")
<add> logrus.Debug("Registering routers")
<ide> for _, apiRouter := range s.routers {
<ide> for _, r := range apiRouter.Routes() {
<ide> f := s.makeHTTPHandler(r.Handler())
<ide><path>cmd/dockerd/service_windows.go
<ide> func (h *handler) Execute(_ []string, r <-chan svc.ChangeRequest, s chan<- svc.S
<ide> // Wait for initialization to complete.
<ide> failed := <-h.tosvc
<ide> if failed {
<del> logrus.Debugf("Aborting service start due to failure during initializtion")
<add> logrus.Debug("Aborting service start due to failure during initializtion")
<ide> return true, 1
<ide> }
<ide>
<ide> s <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown | svc.Accepted(windows.SERVICE_ACCEPT_PARAMCHANGE)}
<del> logrus.Debugf("Service running")
<add> logrus.Debug("Service running")
<ide> Loop:
<ide> for {
<ide> select {
<ide><path>container/container.go
<ide> func AttachStreams(ctx context.Context, streamConfig *runconfig.StreamConfig, op
<ide> if stdin == nil || !openStdin {
<ide> return
<ide> }
<del> logrus.Debugf("attach: stdin: begin")
<add> logrus.Debug("attach: stdin: begin")
<ide>
<ide> var err error
<ide> if tty {
<ide> func AttachStreams(ctx context.Context, streamConfig *runconfig.StreamConfig, op
<ide> cStderr.Close()
<ide> }
<ide> }
<del> logrus.Debugf("attach: stdin: end")
<add> logrus.Debug("attach: stdin: end")
<ide> wg.Done()
<ide> }()
<ide>
<ide><path>container/health.go
<ide> func (s *Health) String() string {
<ide> // it returns nil.
<ide> func (s *Health) OpenMonitorChannel() chan struct{} {
<ide> if s.stop == nil {
<del> logrus.Debugf("OpenMonitorChannel")
<add> logrus.Debug("OpenMonitorChannel")
<ide> s.stop = make(chan struct{})
<ide> return s.stop
<ide> }
<ide> func (s *Health) OpenMonitorChannel() chan struct{} {
<ide> // CloseMonitorChannel closes any existing monitor channel.
<ide> func (s *Health) CloseMonitorChannel() {
<ide> if s.stop != nil {
<del> logrus.Debugf("CloseMonitorChannel: waiting for probe to stop")
<add> logrus.Debug("CloseMonitorChannel: waiting for probe to stop")
<ide> // This channel does not buffer. Once the write succeeds, the monitor
<ide> // has read the stop request and will not make any further updates
<ide> // to c.State.Health.
<ide> s.stop <- struct{}{}
<ide> s.stop = nil
<del> logrus.Debugf("CloseMonitorChannel done")
<add> logrus.Debug("CloseMonitorChannel done")
<ide> }
<ide> }
<ide><path>daemon/attach.go
<ide> func (daemon *Daemon) containerAttach(c *container.Container, stdin io.ReadClose
<ide> r, w := io.Pipe()
<ide> go func() {
<ide> defer w.Close()
<del> defer logrus.Debugf("Closing buffered stdin pipe")
<add> defer logrus.Debug("Closing buffered stdin pipe")
<ide> io.Copy(w, stdin)
<ide> }()
<ide> stdinPipe = r
<ide><path>daemon/exec.go
<ide> func (d *Daemon) ContainerExecStart(ctx context.Context, name string, stdin io.R
<ide> r, w := io.Pipe()
<ide> go func() {
<ide> defer w.Close()
<del> defer logrus.Debugf("Closing buffered stdin pipe")
<add> defer logrus.Debug("Closing buffered stdin pipe")
<ide> pools.Copy(w, stdin)
<ide> }()
<ide> cStdin = r
<ide><path>daemon/graphdriver/devmapper/deviceset.go
<ide> func (devices *DeviceSet) startDeviceDeletionWorker() {
<ide> return
<ide> }
<ide>
<del> logrus.Debugf("devmapper: Worker to cleanup deleted devices started")
<add> logrus.Debug("devmapper: Worker to cleanup deleted devices started")
<ide> for range devices.deletionWorkerTicker.C {
<ide> devices.cleanupDeletedDevices()
<ide> }
<ide> func (devices *DeviceSet) saveBaseDeviceUUID(baseInfo *devInfo) error {
<ide> }
<ide>
<ide> func (devices *DeviceSet) createBaseImage() error {
<del> logrus.Debugf("devmapper: Initializing base device-mapper thin volume")
<add> logrus.Debug("devmapper: Initializing base device-mapper thin volume")
<ide>
<ide> // Create initial device
<ide> info, err := devices.createRegisterDevice("")
<ide> if err != nil {
<ide> return err
<ide> }
<ide>
<del> logrus.Debugf("devmapper: Creating filesystem on base device-mapper thin volume")
<add> logrus.Debug("devmapper: Creating filesystem on base device-mapper thin volume")
<ide>
<ide> if err := devices.activateDeviceIfNeeded(info, false); err != nil {
<ide> return err
<ide> func (devices *DeviceSet) setupBaseImage() error {
<ide> return nil
<ide> }
<ide>
<del> logrus.Debugf("devmapper: Removing uninitialized base image")
<add> logrus.Debug("devmapper: Removing uninitialized base image")
<ide> // If previous base device is in deferred delete state,
<ide> // that needs to be cleaned up first. So don't try
<ide> // deferred deletion.
<ide> func (devices *DeviceSet) refreshTransaction(DeviceID int) error {
<ide>
<ide> func (devices *DeviceSet) closeTransaction() error {
<ide> if err := devices.updatePoolTransactionID(); err != nil {
<del> logrus.Debugf("devmapper: Failed to close Transaction")
<add> logrus.Debug("devmapper: Failed to close Transaction")
<ide> return err
<ide> }
<ide> return nil
<ide> func (devices *DeviceSet) initDevmapper(doInit bool) error {
<ide> if !devicemapper.LibraryDeferredRemovalSupport {
<ide> return fmt.Errorf("devmapper: Deferred removal can not be enabled as libdm does not support it")
<ide> }
<del> logrus.Debugf("devmapper: Deferred removal support enabled.")
<add> logrus.Debug("devmapper: Deferred removal support enabled.")
<ide> devices.deferredRemove = true
<ide> }
<ide>
<ide> if enableDeferredDeletion {
<ide> if !devices.deferredRemove {
<ide> return fmt.Errorf("devmapper: Deferred deletion can not be enabled as deferred removal is not enabled. Enable deferred removal using --storage-opt dm.use_deferred_removal=true parameter")
<ide> }
<del> logrus.Debugf("devmapper: Deferred deletion support enabled.")
<add> logrus.Debug("devmapper: Deferred deletion support enabled.")
<ide> devices.deferredDelete = true
<ide> }
<ide>
<ide> func (devices *DeviceSet) initDevmapper(doInit bool) error {
<ide>
<ide> // If the pool doesn't exist, create it
<ide> if !poolExists && devices.thinPoolDevice == "" {
<del> logrus.Debugf("devmapper: Pool doesn't exist. Creating it.")
<add> logrus.Debug("devmapper: Pool doesn't exist. Creating it.")
<ide>
<ide> var (
<ide> dataFile *os.File
<ide> func (devices *DeviceSet) DeleteDevice(hash string, syncDelete bool) error {
<ide> }
<ide>
<ide> func (devices *DeviceSet) deactivatePool() error {
<del> logrus.Debugf("devmapper: deactivatePool()")
<del> defer logrus.Debugf("devmapper: deactivatePool END")
<add> logrus.Debug("devmapper: deactivatePool()")
<add> defer logrus.Debug("devmapper: deactivatePool END")
<ide> devname := devices.getPoolDevName()
<ide>
<ide> devinfo, err := devicemapper.GetInfo(devname)
<ide> func (devices *DeviceSet) UnmountDevice(hash, mountPath string) error {
<ide> if err := syscall.Unmount(mountPath, syscall.MNT_DETACH); err != nil {
<ide> return err
<ide> }
<del> logrus.Debugf("devmapper: Unmount done")
<add> logrus.Debug("devmapper: Unmount done")
<ide>
<ide> if err := devices.deactivateDevice(info); err != nil {
<ide> return err
<ide><path>daemon/graphdriver/fsdiff.go
<ide> func (gdw *NaiveDiffDriver) ApplyDiff(id, parent string, diff archive.Reader) (s
<ide> options := &archive.TarOptions{UIDMaps: gdw.uidMaps,
<ide> GIDMaps: gdw.gidMaps}
<ide> start := time.Now().UTC()
<del> logrus.Debugf("Start untar layer")
<add> logrus.Debug("Start untar layer")
<ide> if size, err = ApplyUncompressedLayer(layerFs, diff, options); err != nil {
<ide> return
<ide> }
<ide><path>daemon/health.go
<ide> func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
<ide> for {
<ide> select {
<ide> case <-stop:
<del> logrus.Debugf("Stop healthcheck monitoring (received while idle)")
<add> logrus.Debug("Stop healthcheck monitoring (received while idle)")
<ide> return
<ide> case <-time.After(probeInterval):
<del> logrus.Debugf("Running health check...")
<add> logrus.Debug("Running health check...")
<ide> startTime := time.Now()
<ide> ctx, cancelProbe := context.WithTimeout(context.Background(), probeTimeout)
<ide> results := make(chan *types.HealthcheckResult)
<ide> func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
<ide> }()
<ide> select {
<ide> case <-stop:
<del> logrus.Debugf("Stop healthcheck monitoring (received while probing)")
<add> logrus.Debug("Stop healthcheck monitoring (received while probing)")
<ide> // Stop timeout and kill probe, but don't wait for probe to exit.
<ide> cancelProbe()
<ide> return
<ide> func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
<ide> // Stop timeout
<ide> cancelProbe()
<ide> case <-ctx.Done():
<del> logrus.Debugf("Health check taking too long")
<add> logrus.Debug("Health check taking too long")
<ide> handleProbeResult(d, c, &types.HealthcheckResult{
<ide> ExitCode: -1,
<ide> Output: fmt.Sprintf("Health check exceeded timeout (%v)", probeTimeout),
<ide><path>daemon/logs.go
<ide> func (daemon *Daemon) ContainerLogs(ctx context.Context, containerName string, c
<ide> return nil
<ide> case msg, ok := <-logs.Msg:
<ide> if !ok {
<del> logrus.Debugf("logs: end stream")
<add> logrus.Debug("logs: end stream")
<ide> logs.Close()
<ide> return nil
<ide> }
<ide><path>distribution/pull_v1.go
<ide> func (p *v1Puller) pullRepository(ctx context.Context, ref reference.Named) erro
<ide> return err
<ide> }
<ide>
<del> logrus.Debugf("Retrieving the tag list")
<add> logrus.Debug("Retrieving the tag list")
<ide> var tagsList map[string]string
<ide> if !isTagged {
<ide> tagsList, err = p.session.GetRemoteTags(repoData.Endpoints, p.repoInfo)
<ide><path>distribution/pull_v2.go
<ide> func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progre
<ide> size = 0
<ide> } else {
<ide> if size != 0 && offset > size {
<del> logrus.Debugf("Partial download is larger than full blob. Starting over")
<add> logrus.Debug("Partial download is larger than full blob. Starting over")
<ide> offset = 0
<ide> if err := ld.truncateDownloadFile(); err != nil {
<ide> return nil, 0, xfer.DoNotRetry{Err: err}
<ide><path>pkg/archive/archive.go
<ide> func DetectCompression(source []byte) Compression {
<ide> Xz: {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
<ide> } {
<ide> if len(source) < len(m) {
<del> logrus.Debugf("Len too short")
<add> logrus.Debug("Len too short")
<ide> continue
<ide> }
<ide> if bytes.Compare(m, source[:len(m)]) == 0 {
<ide> func createTarFile(path, extractDir string, hdr *tar.Header, reader io.Reader, L
<ide> }
<ide>
<ide> case tar.TypeXGlobalHeader:
<del> logrus.Debugf("PAX Global Extended Headers found and ignored")
<add> logrus.Debug("PAX Global Extended Headers found and ignored")
<ide> return nil
<ide>
<ide> default:
<ide><path>pkg/authorization/response.go
<ide> func (rm *responseModifier) Hijack() (net.Conn, *bufio.ReadWriter, error) {
<ide> func (rm *responseModifier) CloseNotify() <-chan bool {
<ide> closeNotifier, ok := rm.rw.(http.CloseNotifier)
<ide> if !ok {
<del> logrus.Errorf("Internal response writer doesn't support the CloseNotifier interface")
<add> logrus.Error("Internal response writer doesn't support the CloseNotifier interface")
<ide> return nil
<ide> }
<ide> return closeNotifier.CloseNotify()
<ide> func (rm *responseModifier) CloseNotify() <-chan bool {
<ide> func (rm *responseModifier) Flush() {
<ide> flusher, ok := rm.rw.(http.Flusher)
<ide> if !ok {
<del> logrus.Errorf("Internal response writer doesn't support the Flusher interface")
<add> logrus.Error("Internal response writer doesn't support the Flusher interface")
<ide> return
<ide> }
<ide>
<ide><path>pkg/devicemapper/devmapper.go
<ide> func LogInit(logger DevmapperLogger) {
<ide> // SetDevDir sets the dev folder for the device mapper library (usually /dev).
<ide> func SetDevDir(dir string) error {
<ide> if res := DmSetDevDir(dir); res != 1 {
<del> logrus.Debugf("devicemapper: Error dm_set_dev_dir")
<add> logrus.Debug("devicemapper: Error dm_set_dev_dir")
<ide> return ErrSetDevDir
<ide> }
<ide> return nil
<ide><path>pkg/loopback/attach_loopback.go
<ide> func openNextAvailableLoopback(index int, sparseFile *os.File) (loopFile *os.Fil
<ide> fi, err := os.Stat(target)
<ide> if err != nil {
<ide> if os.IsNotExist(err) {
<del> logrus.Errorf("There are no more loopback devices available.")
<add> logrus.Error("There are no more loopback devices available.")
<ide> }
<ide> return nil, ErrAttachLoopbackDevice
<ide> }
<ide> func AttachLoopDevice(sparseName string) (loop *os.File, err error) {
<ide>
<ide> // If the call failed, then free the loopback device
<ide> if err := ioctlLoopClrFd(loopFile.Fd()); err != nil {
<del> logrus.Errorf("Error while cleaning up the loopback device")
<add> logrus.Error("Error while cleaning up the loopback device")
<ide> }
<ide> loopFile.Close()
<ide> return nil, ErrAttachLoopbackDevice
<ide><path>pkg/signal/trap.go
<ide> func Trap(cleanup func()) {
<ide> }
<ide> } else {
<ide> // 3 SIGTERM/INT signals received; force exit without cleanup
<del> logrus.Infof("Forcing docker daemon shutdown without cleanup; 3 interrupts received")
<add> logrus.Info("Forcing docker daemon shutdown without cleanup; 3 interrupts received")
<ide> }
<ide> case syscall.SIGQUIT:
<ide> DumpStacks()
<del> logrus.Infof("Forcing docker daemon shutdown without cleanup on SIGQUIT")
<add> logrus.Info("Forcing docker daemon shutdown without cleanup on SIGQUIT")
<ide> }
<ide> //for the SIGINT/TERM, and SIGQUIT non-clean shutdown case, exit with 128 + signal #
<ide> os.Exit(128 + int(sig.(syscall.Signal)))
<ide><path>registry/session.go
<ide> func (r *Session) GetRemoteImageLayer(imgID, registry string, imgSize int64) (io
<ide> }
<ide>
<ide> if res.Header.Get("Accept-Ranges") == "bytes" && imgSize > 0 {
<del> logrus.Debugf("server supports resume")
<add> logrus.Debug("server supports resume")
<ide> return httputils.ResumableRequestReaderWithInitialResponse(r.client, req, 5, imgSize, res), nil
<ide> }
<del> logrus.Debugf("server doesn't support resume")
<add> logrus.Debug("server doesn't support resume")
<ide> return res.Body, nil
<ide> }
<ide>
| 19
|
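The sweep above replaces `logrus.Debugf`/`Infof`/`Errorf` calls whose only argument is a constant string with their non-formatting counterparts (`Debug`/`Info`/`Error`), since a formatting call with no format verbs is wasteful and misbehaves if the literal happens to contain `%`. A rough Python sketch of a lint-style check that flags such calls — a hypothetical helper that only handles calls with a single string literal and no further arguments:

```python
import re

# Matches e.g. logrus.Debugf("Service running") -- a formatting call whose only
# argument is a string literal. Calls with extra arguments do not match.
_PATTERN = re.compile(r'logrus\.(Debugf|Infof|Errorf)\("([^"]*)"\)\s*$')

def needs_plain_call(line):
    """True when a *f logging call has a constant message with no format verbs."""
    m = _PATTERN.search(line)
    return bool(m) and '%' not in m.group(2)

# needs_plain_call('logrus.Debugf("Service running")') -> True
# needs_plain_call('logrus.Debugf("got %s", err)')     -> False
```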
Python
|
Python
|
use current_app.dag_bag instead of global variable
|
50318f8519d7dd2f8215d7e977347739ecd3e408
|
<ide><path>airflow/www/app.py
<ide> # specific language governing permissions and limitations
<ide> # under the License.
<ide> #
<add>
<ide> from datetime import timedelta
<ide> from typing import Optional
<ide>
<ide> from airflow.utils.json import AirflowJsonEncoder
<ide> from airflow.www.extensions.init_appbuilder import init_appbuilder
<ide> from airflow.www.extensions.init_appbuilder_links import init_appbuilder_links
<add>from airflow.www.extensions.init_dagbag import init_dagbag
<ide> from airflow.www.extensions.init_jinja_globals import init_jinja_globals
<ide> from airflow.www.extensions.init_manifest_files import configure_manifest_files
<ide> from airflow.www.extensions.init_security import init_api_experimental_auth, init_xframe_protection
<ide> def create_app(config=None, testing=False, app_name="Airflow"):
<ide> db.session = settings.Session
<ide> db.init_app(flask_app)
<ide>
<add> init_dagbag(flask_app)
<add>
<ide> init_api_experimental_auth(flask_app)
<ide>
<ide> Cache(app=flask_app, config={'CACHE_TYPE': 'filesystem', 'CACHE_DIR': '/tmp'})
<ide><path>airflow/www/extensions/init_dagbag.py
<add># Licensed to the Apache Software Foundation (ASF) under one
<add># or more contributor license agreements. See the NOTICE file
<add># distributed with this work for additional information
<add># regarding copyright ownership. The ASF licenses this file
<add># to you under the Apache License, Version 2.0 (the
<add># "License"); you may not use this file except in compliance
<add># with the License. You may obtain a copy of the License at
<add>#
<add># http://www.apache.org/licenses/LICENSE-2.0
<add>#
<add># Unless required by applicable law or agreed to in writing,
<add># software distributed under the License is distributed on an
<add># "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
<add># KIND, either express or implied. See the License for the
<add># specific language governing permissions and limitations
<add># under the License.
<add>
<add>import os
<add>
<add>from airflow.models import DagBag
<add>from airflow.settings import DAGS_FOLDER, STORE_SERIALIZED_DAGS
<add>
<add>
<add>def init_dagbag(app):
<add> """
<add> Create global DagBag for webserver and API. To access it use
<add> ``flask.current_app.dag_bag``.
<add> """
<add> if os.environ.get('SKIP_DAGS_PARSING') == 'True':
<add> app.dag_bag = DagBag(os.devnull, include_examples=False)
<add> else:
<add> app.dag_bag = DagBag(DAGS_FOLDER, store_serialized_dags=STORE_SERIALIZED_DAGS)
<ide><path>airflow/www/views.py
<ide> import json
<ide> import logging
<ide> import math
<del>import os
<ide> import socket
<ide> import traceback
<ide> from collections import defaultdict
<ide> from airflow.models import Connection, DagModel, DagTag, Log, SlaMiss, TaskFail, XCom, errors
<ide> from airflow.models.dagcode import DagCode
<ide> from airflow.models.dagrun import DagRun, DagRunType
<del>from airflow.settings import STORE_SERIALIZED_DAGS
<ide> from airflow.ti_deps.dep_context import DepContext
<ide> from airflow.ti_deps.dependencies_deps import RUNNING_DEPS, SCHEDULER_QUEUED_DEPS
<ide> from airflow.utils import timezone
<ide> FILTER_TAGS_COOKIE = 'tags_filter'
<ide> FILTER_STATUS_COOKIE = 'dag_status_filter'
<ide>
<del>if os.environ.get('SKIP_DAGS_PARSING') != 'True':
<del> dagbag = models.DagBag(settings.DAGS_FOLDER, store_serialized_dags=STORE_SERIALIZED_DAGS)
<del>else:
<del> dagbag = models.DagBag(os.devnull, include_examples=False)
<del>
<ide>
<ide> def get_date_time_num_runs_dag_runs_form_data(request, session, dag):
<ide> dttm = request.args.get('execution_date')
<ide> def code(self, session=None):
<ide> @provide_session
<ide> def dag_details(self, session=None):
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> title = "DAG details"
<ide> root = request.args.get('root', '')
<ide>
<ide> def rendered(self):
<ide> root = request.args.get('root', '')
<ide>
<ide> logging.info("Retrieving rendered templates.")
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> task = copy.copy(dag.get_task(task_id))
<ide> ti = models.TaskInstance(task=task, execution_date=dttm)
<ide> def _get_logs_with_metadata(try_number, metadata):
<ide>
<ide> try:
<ide> if ti is not None:
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> if dag:
<ide> ti.task = dag.get_task(ti.task_id)
<ide> if response_format == 'json':
<ide> def task(self):
<ide> dttm = timezone.parse(execution_date)
<ide> form = DateTimeForm(data={'execution_date': dttm})
<ide> root = request.args.get('root', '')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> if not dag or task_id not in dag.task_ids:
<ide> flash(
<ide> def run(self):
<ide> dag_id = request.form.get('dag_id')
<ide> task_id = request.form.get('task_id')
<ide> origin = request.form.get('origin')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> task = dag.get_task(task_id)
<ide>
<ide> execution_date = request.form.get('execution_date')
<ide> def trigger(self, session=None):
<ide> conf=conf
<ide> )
<ide>
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> dag.create_dagrun(
<ide> run_type=DagRunType.MANUAL,
<ide> execution_date=execution_date,
<ide> def clear(self):
<ide> dag_id = request.form.get('dag_id')
<ide> task_id = request.form.get('task_id')
<ide> origin = request.form.get('origin')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> execution_date = request.form.get('execution_date')
<ide> execution_date = timezone.parse(execution_date)
<ide> def dagrun_clear(self):
<ide> execution_date = request.form.get('execution_date')
<ide> confirmed = request.form.get('confirmed') == "true"
<ide>
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> execution_date = timezone.parse(execution_date)
<ide> start_date = execution_date
<ide> end_date = execution_date
<ide> def blocked(self, session=None):
<ide> payload = []
<ide> for dag_id, active_dag_runs in dags:
<ide> max_active_runs = 0
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> if dag:
<ide> # TODO: Make max_active_runs a column so we can query for it directly
<ide> max_active_runs = dag.max_active_runs
<ide> def _mark_dagrun_state_as_failed(self, dag_id, execution_date, confirmed, origin
<ide> return redirect(origin)
<ide>
<ide> execution_date = timezone.parse(execution_date)
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> if not dag:
<ide> flash('Cannot find DAG: {}'.format(dag_id), 'error')
<ide> def _mark_dagrun_state_as_success(self, dag_id, execution_date, confirmed, origi
<ide> return redirect(origin)
<ide>
<ide> execution_date = timezone.parse(execution_date)
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> if not dag:
<ide> flash('Cannot find DAG: {}'.format(dag_id), 'error')
<ide> def dagrun_success(self):
<ide> def _mark_task_instance_state(self, dag_id, task_id, origin, execution_date,
<ide> confirmed, upstream, downstream,
<ide> future, past, state):
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> task = dag.get_task(task_id)
<ide> task.dag = dag
<ide>
<ide> def success(self):
<ide> def tree(self):
<ide> dag_id = request.args.get('dag_id')
<ide> blur = conf.getboolean('webserver', 'demo_mode')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> if not dag:
<ide> flash('DAG "{0}" seems to be missing from DagBag.'.format(dag_id), "error")
<ide> return redirect(url_for('Airflow.index'))
<ide> def recurse_nodes(task, visited):
<ide> def graph(self, session=None):
<ide> dag_id = request.args.get('dag_id')
<ide> blur = conf.getboolean('webserver', 'demo_mode')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> if not dag:
<ide> flash('DAG "{0}" seems to be missing.'.format(dag_id), "error")
<ide> return redirect(url_for('Airflow.index'))
<ide> class GraphForm(DateTimeWithNumRunsWithDagRunsForm):
<ide> def duration(self, session=None):
<ide> default_dag_run = conf.getint('webserver', 'default_dag_run_display_number')
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> base_date = request.args.get('base_date')
<ide> num_runs = request.args.get('num_runs')
<ide> num_runs = int(num_runs) if num_runs else default_dag_run
<ide> def duration(self, session=None):
<ide> def tries(self, session=None):
<ide> default_dag_run = conf.getint('webserver', 'default_dag_run_display_number')
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> base_date = request.args.get('base_date')
<ide> num_runs = request.args.get('num_runs')
<ide> num_runs = int(num_runs) if num_runs else default_dag_run
<ide> def tries(self, session=None):
<ide> def landing_times(self, session=None):
<ide> default_dag_run = conf.getint('webserver', 'default_dag_run_display_number')
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> base_date = request.args.get('base_date')
<ide> num_runs = request.args.get('num_runs')
<ide> num_runs = int(num_runs) if num_runs else default_dag_run
<ide> def refresh(self, session=None):
<ide> session.merge(orm_dag)
<ide> session.commit()
<ide>
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> # sync dag permission
<ide> current_app.appbuilder.sm.sync_perm_for_dag(dag_id, dag.access_control)
<ide>
<ide> def refresh(self, session=None):
<ide> @provide_session
<ide> def gantt(self, session=None):
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide> demo_mode = conf.getboolean('webserver', 'demo_mode')
<ide>
<ide> root = request.args.get('root')
<ide> def extra_links(self):
<ide> execution_date = request.args.get('execution_date')
<ide> link_name = request.args.get('link_name')
<ide> dttm = timezone.parse(execution_date)
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> if not dag or task_id not in dag.task_ids:
<ide> response = jsonify(
<ide> def extra_links(self):
<ide> @action_logging
<ide> def task_instances(self):
<ide> dag_id = request.args.get('dag_id')
<del> dag = dagbag.get_dag(dag_id)
<add> dag = current_app.dag_bag.get_dag(dag_id)
<ide>
<ide> dttm = request.args.get('execution_date')
<ide> if dttm:
<ide> def action_set_failed(self, drs, session=None):
<ide> dirty_ids.append(dr.dag_id)
<ide> count += 1
<ide> altered_tis += \
<del> set_dag_run_state_to_failed(dagbag.get_dag(dr.dag_id),
<add> set_dag_run_state_to_failed(current_app.dag_bag.get_dag(dr.dag_id),
<ide> dr.execution_date,
<ide> commit=True,
<ide> session=session)
<ide> def action_set_success(self, drs, session=None):
<ide> dirty_ids.append(dr.dag_id)
<ide> count += 1
<ide> altered_tis += \
<del> set_dag_run_state_to_success(dagbag.get_dag(dr.dag_id),
<add> set_dag_run_state_to_success(current_app.dag_bag.get_dag(dr.dag_id),
<ide> dr.execution_date,
<ide> commit=True,
<ide> session=session)
<ide> def action_clear(self, tis, session=None):
<ide> dag_to_tis = {}
<ide>
<ide> for ti in tis:
<del> dag = dagbag.get_dag(ti.dag_id)
<add> dag = current_app.dag_bag.get_dag(ti.dag_id)
<ide> tis = dag_to_tis.setdefault(dag, [])
<ide> tis.append(ti)
<ide>
<ide><path>tests/www/test_views.py
<ide> class TestAirflowBaseViews(TestBase):
<ide> def setUpClass(cls):
<ide> super().setUpClass()
<ide> cls.dagbag = models.DagBag(include_examples=True)
<add> cls.app.dag_bag = cls.dagbag
<ide> DAG.bulk_sync_to_db(cls.dagbag.dags.values())
<ide>
<ide> def setUp(self):
<ide> def test_dag_details(self):
<ide> self.check_content_in_response('DAG details', resp)
<ide>
<ide> @parameterized.expand(["graph", "tree", "dag_details"])
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_view_uses_existing_dagbag(self, endpoint, mock_get_dag):
<add> def test_view_uses_existing_dagbag(self, endpoint):
<ide> """
<ide> Test that Graph, Tree & Dag Details View uses the DagBag already created in views.py
<ide> instead of creating a new one.
<ide> """
<del> mock_get_dag.return_value = DAG(dag_id='example_bash_operator')
<ide> url = f'{endpoint}?dag_id=example_bash_operator'
<ide> resp = self.client.get(url, follow_redirects=True)
<del> mock_get_dag.assert_called_once_with('example_bash_operator')
<ide> self.check_content_in_response('example_bash_operator', resp)
<ide>
<ide> @parameterized.expand([
<ide> def setUp(self):
<ide> settings.configure_orm()
<ide> self.login()
<ide>
<del> from airflow.www.views import dagbag
<add> dagbag = self.app.dag_bag
<ide> dag = DAG(self.DAG_ID, start_date=self.DEFAULT_DATE)
<ide> dag.sync_to_db()
<ide> dag_removed = DAG(self.DAG_ID_REMOVED, start_date=self.DEFAULT_DATE)
<ide> def __init__(self, test, endpoint):
<ide> self.runs = []
<ide>
<ide> def setup(self):
<del> from airflow.www.views import dagbag
<add> dagbag = self.test.app.dag_bag
<ide> dag = DAG(self.DAG_ID, start_date=self.DEFAULT_DATE)
<ide> dagbag.bag_dag(dag, parent_dag=dag, root_dag=dag)
<ide> for run_data in self.RUNS_DATA:
<ide> def test_start_date_filter(self):
<ide> class TestRenderedView(TestBase):
<ide>
<ide> def setUp(self):
<del> super().setUp()
<add>
<ide> self.default_date = datetime(2020, 3, 1)
<ide> self.dag = DAG(
<ide> "testdag",
<ide> def setUp(self):
<ide> with create_session() as session:
<ide> session.query(RTIF).delete()
<ide>
<add> self.app.dag_bag = mock.MagicMock(**{'get_dag.return_value': self.dag})
<add> super().setUp()
<add>
<ide> def tearDown(self) -> None:
<ide> super().tearDown()
<ide> with create_session() as session:
<ide> session.query(RTIF).delete()
<ide>
<del> @mock.patch('airflow.www.views.STORE_SERIALIZED_DAGS', True)
<del> @mock.patch('airflow.models.taskinstance.STORE_SERIALIZED_DAGS', True)
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_rendered_view(self, get_dag_function):
<add> def test_rendered_view(self):
<ide> """
<ide> Test that the Rendered View contains the values from RenderedTaskInstanceFields
<ide> """
<del> get_dag_function.return_value = SerializedDagModel.get(self.dag.dag_id).dag
<del>
<ide> self.assertEqual(self.task1.bash_command, '{{ task_instance_key_str }}')
<ide> ti = TaskInstance(self.task1, self.default_date)
<ide>
<ide> def test_rendered_view(self, get_dag_function):
<ide> resp = self.client.get(url, follow_redirects=True)
<ide> self.check_content_in_response("testdag__task1__20200301", resp)
<ide>
<del> @mock.patch('airflow.www.views.STORE_SERIALIZED_DAGS', True)
<del> @mock.patch('airflow.models.taskinstance.STORE_SERIALIZED_DAGS', True)
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_rendered_view_for_unexecuted_tis(self, get_dag_function):
<add> def test_rendered_view_for_unexecuted_tis(self):
<ide> """
<ide> Test that the Rendered View is able to show rendered values
<ide> even for TIs that have not yet executed
<ide> """
<del> get_dag_function.return_value = SerializedDagModel.get(self.dag.dag_id).dag
<del>
<ide> self.assertEqual(self.task1.bash_command, '{{ task_instance_key_str }}')
<ide>
<ide> url = ('rendered?task_id=task1&dag_id=task1&execution_date={}'
<ide> def test_rendered_view_for_unexecuted_tis(self, get_dag_function):
<ide> resp = self.client.get(url, follow_redirects=True)
<ide> self.check_content_in_response("testdag__task1__20200301", resp)
<ide>
<del> @mock.patch('airflow.www.views.STORE_SERIALIZED_DAGS', True)
<ide> @mock.patch('airflow.models.taskinstance.STORE_SERIALIZED_DAGS', True)
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_user_defined_filter_and_macros_raise_error(self, get_dag_function):
<add> def test_user_defined_filter_and_macros_raise_error(self):
<ide> """
<ide> Test that the Rendered View is able to show rendered values
<ide> even for TIs that have not yet executed
<ide> """
<del> get_dag_function.return_value = SerializedDagModel.get(self.dag.dag_id).dag
<del>
<add> self.app.dag_bag = mock.MagicMock(
<add> **{'get_dag.return_value': SerializedDagModel.get(self.dag.dag_id).dag}
<add> )
<ide> self.assertEqual(self.task2.bash_command,
<ide> 'echo {{ fullname("Apache", "Airflow") | hello }}')
<ide>
<ide> def test_trigger_dag_form(self):
<ide> self.assertEqual(resp.status_code, 200)
<ide> self.check_content_in_response('Trigger DAG: {}'.format(test_dag_id), resp)
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_trigger_endpoint_uses_existing_dagbag(self, mock_get_dag):
<add> def test_trigger_endpoint_uses_existing_dagbag(self):
<ide> """
<ide> Test that Trigger Endpoint uses the DagBag already created in views.py
<ide> instead of creating a new one.
<ide> """
<del> mock_get_dag.return_value = DAG(dag_id='example_bash_operator')
<ide> url = 'trigger?dag_id=example_bash_operator'
<ide> resp = self.client.post(url, data={}, follow_redirects=True)
<del> mock_get_dag.assert_called_once_with('example_bash_operator')
<ide> self.check_content_in_response('example_bash_operator', resp)
<ide>
<ide>
<ide> class TestExtraLinks(TestBase):
<ide> def setUp(self):
<ide> from tests.test_utils.mock_operators import Dummy3TestOperator
<ide> from tests.test_utils.mock_operators import Dummy2TestOperator
<del> super().setUp()
<add>
<ide> self.endpoint = "extra_links"
<ide> self.default_date = datetime(2017, 1, 1)
<ide>
<ide> class DummyTestOperator(BaseOperator):
<ide> self.task_2 = Dummy2TestOperator(task_id="some_dummy_task_2", dag=self.dag)
<ide> self.task_3 = Dummy3TestOperator(task_id="some_dummy_task_3", dag=self.dag)
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_extra_links_works(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<add> self.app.dag_bag = mock.MagicMock(**{'get_dag.return_value': self.dag})
<add> super().setUp()
<ide>
<add> def test_extra_links_works(self):
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=foo-bar"
<ide> .format(self.endpoint, self.dag.dag_id, self.task.task_id, self.default_date),
<ide> def test_extra_links_works(self, get_dag_function):
<ide> 'error': None
<ide> })
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_global_extra_links_works(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<del>
<add> def test_global_extra_links_works(self):
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=github"
<ide> .format(self.endpoint, self.dag.dag_id, self.task.task_id, self.default_date),
<ide> def test_global_extra_links_works(self, get_dag_function):
<ide> 'error': None
<ide> })
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_extra_link_in_gantt_view(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<del>
<add> def test_extra_link_in_gantt_view(self):
<ide> exec_date = dates.days_ago(2)
<ide> start_date = datetime(2020, 4, 10, 2, 0, 0)
<ide> end_date = exec_date + timedelta(seconds=30)
<ide> def test_extra_link_in_gantt_view(self, get_dag_function):
<ide> self.assertIn('airflow', extra_links)
<ide> self.assertIn('github', extra_links)
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_operator_extra_link_override_global_extra_link(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<del>
<add> def test_operator_extra_link_override_global_extra_link(self):
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=airflow".format(
<ide> self.endpoint, self.dag.dag_id, self.task.task_id, self.default_date),
<ide> def test_operator_extra_link_override_global_extra_link(self, get_dag_function):
<ide> 'error': None
<ide> })
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_extra_links_error_raised(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<del>
<add> def test_extra_links_error_raised(self):
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=raise_error"
<ide> .format(self.endpoint, self.dag.dag_id, self.task.task_id, self.default_date),
<ide> def test_extra_links_error_raised(self, get_dag_function):
<ide> 'url': None,
<ide> 'error': 'This is an error'})
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_extra_links_no_response(self, get_dag_function):
<del> get_dag_function.return_value = self.dag
<del>
<add> def test_extra_links_no_response(self):
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=no_response"
<ide> .format(self.endpoint, self.dag.dag_id, self.task.task_id, self.default_date),
<ide> def test_extra_links_no_response(self, get_dag_function):
<ide> 'url': None,
<ide> 'error': 'No URL found for no_response'})
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_operator_extra_link_override_plugin(self, get_dag_function):
<add> def test_operator_extra_link_override_plugin(self):
<ide> """
<ide> This tests checks if Operator Link (AirflowLink) defined in the Dummy2TestOperator
<ide> is overriden by Airflow Plugin (AirflowLink2).
<ide>
<ide> AirflowLink returns 'https://airflow.apache.org/' link
<ide> AirflowLink2 returns 'https://airflow.apache.org/1.10.5/' link
<ide> """
<del> get_dag_function.return_value = self.dag
<del>
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=airflow".format(
<ide> self.endpoint, self.dag.dag_id, self.task_2.task_id, self.default_date),
<ide> def test_operator_extra_link_override_plugin(self, get_dag_function):
<ide> 'error': None
<ide> })
<ide>
<del> @mock.patch('airflow.www.views.dagbag.get_dag')
<del> def test_operator_extra_link_multiple_operators(self, get_dag_function):
<add> def test_operator_extra_link_multiple_operators(self):
<ide> """
<ide> This tests checks if Operator Link (AirflowLink2) defined in
<ide> Airflow Plugin (AirflowLink2) is attached to all the list of
<ide> def test_operator_extra_link_multiple_operators(self, get_dag_function):
<ide> AirflowLink2 returns 'https://airflow.apache.org/1.10.5/' link
<ide> GoogleLink returns 'https://www.google.com'
<ide> """
<del> get_dag_function.return_value = self.dag
<del>
<ide> response = self.client.get(
<ide> "{0}?dag_id={1}&task_id={2}&execution_date={3}&link_name=airflow".format(
<ide> self.endpoint, self.dag.dag_id, self.task_2.task_id, self.default_date),
| 4
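The Airflow patch above replaces a module-level `dagbag` global with `current_app.dag_bag`, so the web views read the shared DagBag off the Flask application object and tests can swap in a mock by assigning to `app.dag_bag`. A minimal sketch of that pattern, using stand-in classes rather than Airflow's or Flask's real ones (all names here are illustrative assumptions):

```python
# Sketch of the "attach a shared object to the app, read it via the
# application context" pattern this commit adopts. `App`, `DagBag`, and
# `current_app` below are simplified stand-ins, not Airflow/Flask code.

class App:
    """Stand-in for a Flask app; dag_bag is attached once at startup."""


class DagBag:
    """Stand-in for Airflow's DagBag: a lookup of dag_id -> DAG."""

    def __init__(self):
        self.dags = {"example_bash_operator": "<DAG object>"}

    def get_dag(self, dag_id):
        return self.dags.get(dag_id)


app = App()
app.dag_bag = DagBag()  # done once, e.g. inside create_app()

# In Flask, current_app is a context-local proxy to the running app;
# here a plain alias is enough to show the access pattern.
current_app = app

# View code then calls current_app.dag_bag.get_dag(...) instead of a
# module global, which is what lets the tests in this commit replace
# the bag with mock.MagicMock(**{'get_dag.return_value': dag}).
dag = current_app.dag_bag.get_dag("example_bash_operator")
print(dag)  # → <DAG object>
```

The payoff shows up in the test diffs below: instead of patching `airflow.www.views.dagbag.get_dag`, a test simply assigns `self.app.dag_bag = mock.MagicMock(...)`.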
|
Javascript
|
Javascript
|
convert errormonitor to a normal property
|
ed8007af0bc8fd4fa575217b50e8ca611a7e8679
|
<ide><path>lib/events.js
<ide> ObjectDefineProperty(EventEmitter, 'captureRejections', {
<ide> enumerable: true
<ide> });
<ide>
<del>ObjectDefineProperty(EventEmitter, 'errorMonitor', {
<del> value: kErrorMonitor,
<del> writable: false,
<del> configurable: true,
<del> enumerable: true
<del>});
<add>EventEmitter.errorMonitor = kErrorMonitor;
<ide>
<ide> // The default for captureRejections is false
<ide> ObjectDefineProperty(EventEmitter.prototype, kCapture, {
| 1
|
Text
|
Text
|
fix mistakes in the deployment doc
|
3707701cc9e9d03016dfc830d2d80a4de8517718
|
<ide><path>docs/deployment.md
<ide> description: Compile and deploy your Next.js app to production with ZEIT Now and
<ide>
<ide> # Deployment
<ide>
<del>To go to production Next.js has a `next build` command. When ran it will compile your project and automatically apply numerous optimizations.
<add>To go to production Next.js has a `next build` command. When run, it will compile your project and automatically apply numerous optimizations.
<ide>
<ide> ## Prepare your package.json
<ide>
<ide> The [hybrid pages](/docs/basic-features/pages.md) approach is fully supported ou
<ide>
<ide> In case of [Static Generation](/docs/basic-features/pages.md#static-generation) the page will automatically be served from the ZEIT Now Smart CDN.
<ide>
<del>When the page is using [Server-Side Rendering](/docs/basic-features/pages.md#server-side-rendering) it will become an isolated serverless function automatically. This allows the page rendering to scale automatically and be independent, errors on one page won't affect another.
<add>When the page is using [Server-Side Rendering](/docs/basic-features/pages.md#server-side-rendering) it will become an isolated serverless function automatically. This allows the page rendering to scale automatically and be independent—errors on one page won't affect another.
<ide>
<del>API routes will also become separate serverless functions that execute and scale separately from eachother.
<add>API routes will also become separate serverless functions that execute and scale separately from each other.
<ide>
<ide> ### CDN + HTTPS by default
<ide>
<ide> HTTPS is enabled by default and doesn't require extra configuration.
<ide>
<ide> #### From a git repository
<ide>
<del>You can link your project in [GitHub](https://zeit.co/new), [GitLab](https://zeit.co/new), or [Bitbucket](https://zeit.co/new) through the [web interface](https://zeit.co/new). This will automatically set up deployment previews for pull-requests and commits.
<add>You can link your project in [GitHub](https://zeit.co/new), [GitLab](https://zeit.co/new), or [Bitbucket](https://zeit.co/new) through the [web interface](https://zeit.co/new). This will automatically set up deployment previews for pull requests and commits.
<ide>
<ide> #### Through the ZEIT Now CLI
<ide>
<ide> Generally you'll have to follow these steps to deploy to production:
<ide> - Potentially copy the `.next`, `node_modules`, and `package.json` to your server.
<ide> - Run `npm run start` (runs `next start`) on the server
<ide>
<del>In case you're doing a full static export using `next export` the steps are slightly different and doesn't involve using `next start`:
<add>In case you're doing a full static export using `next export` the steps are slightly different and don't involve using `next start`:
<ide>
<ide> - Run `npm install`
<ide> - Run `npm run build` (runs `next build && next export`)
| 1
|
Javascript
|
Javascript
|
add "finish" test
|
921cfea20a44805b3aecfc1c88c6bfa7bafa0e24
|
<ide><path>src/renderers/webgl/WebGLRenderLists.js
<ide> function WebGLRenderList() {
<ide> }
<ide>
<ide> return {
<add> renderItems: renderItems,
<ide> opaque: opaque,
<ide> transparent: transparent,
<ide>
<ide><path>test/unit/src/renderers/webgl/WebGLRenderLists.tests.js
<ide> export default QUnit.module( 'Renderers', () => {
<ide>
<ide> } );
<ide>
<del> QUnit.todo( 'finish', ( assert ) => {
<add> QUnit.test( 'finish', ( assert ) => {
<add>
<add> var list = new WebGLRenderList();
<add> var obj = { id: 'A', renderOrder: 0 };
<add> var mat = { transparent: false, program: { id: 0 } };
<add> var geom = {};
<add>
<add> assert.ok( list.renderItems.length === 0, 'Render items length defaults to 0.' );
<add>
<add> list.push( obj, geom, mat, 0, 0, {} );
<add> list.push( obj, geom, mat, 0, 0, {} );
<add> list.push( obj, geom, mat, 0, 0, {} );
<add> assert.ok( list.renderItems.length === 3, 'Render items length expands as items are added.' );
<add>
<add> list.finish();
<add> assert.deepEqual(
<add> list.renderItems.map( item => item.object ),
<add> [ obj, obj, obj ],
<add> 'Render items are not cleaned if they are being used.'
<add> );
<add> assert.deepEqual(
<add> list.renderItems[ 1 ],
<add> {
<add> id: 'A',
<add> object: obj,
<add> geometry: geom,
<add> material: mat,
<add> program: mat.program,
<add> groupOrder: 0,
<add> renderOrder: 0,
<add> z: 0,
<add> group: {}
<add> },
<add> 'Unused render item is structured correctly before clearing.'
<add> );
<add>
<add> list.init();
<add> list.push( obj, geom, mat, 0, 0, {} );
<add> assert.ok( list.renderItems.length === 3, 'Render items length does not shrink.' );
<add>
<add> list.finish();
<add> assert.deepEqual(
<add> list.renderItems.map( item => item.object ),
<add> [ obj, null, null ],
<add> 'Render items are cleaned if they are not being used.'
<add> );
<add>
<add> assert.deepEqual(
<add> list.renderItems[ 1 ],
<add> {
<add> id: null,
<add> object: null,
<add> geometry: null,
<add> material: null,
<add> program: null,
<add> groupOrder: 0,
<add> renderOrder: 0,
<add> z: 0,
<add> group: null
<add> },
<add> 'Unused render item is structured correctly before clearing.'
<add> );
<ide>
<del> assert.ok( false, "everything's gonna be alright" );
<ide>
<ide> } );
<ide>
| 2
|
Javascript
|
Javascript
|
allow showing fabric indicator for appregistry
|
66492e7f9b2459e0aa384bd897ed7436a0b7b046
|
<ide><path>Libraries/ReactNative/AppRegistry.js
<ide> let componentProviderInstrumentationHook: ComponentProviderInstrumentationHook =
<ide> ) => component();
<ide>
<ide> let wrapperComponentProvider: ?WrapperComponentProvider;
<add>let showFabricIndicator = false;
<ide>
<ide> /**
<ide> * `AppRegistry` is the JavaScript entry point to running all React Native apps.
<ide> const AppRegistry = {
<ide> wrapperComponentProvider = provider;
<ide> },
<ide>
<add> enableFabricIndicator(enabled: boolean): void {
<add> showFabricIndicator = enabled;
<add> },
<add>
<ide> registerConfig(config: Array<AppConfig>): void {
<ide> config.forEach(appConfig => {
<ide> if (appConfig.run) {
<ide> const AppRegistry = {
<ide> appParameters.rootTag,
<ide> wrapperComponentProvider && wrapperComponentProvider(appParameters),
<ide> appParameters.fabric,
<del> false,
<add> showFabricIndicator,
<ide> scopedPerformanceLogger,
<ide> );
<ide> },
| 1
|
PHP
|
PHP
|
add type hinting to dispatch filters
|
5741ac1828830557d50a127b1d8ef65b74aacd28
|
<ide><path>lib/Cake/Routing/DispatcherFilter.php
<ide> public function implementedEvents() {
<ide> * keys in the data property.
<ide> * @return CakeResponse|boolean
<ide> **/
<del> public function beforeDispatch($event) {
<add> public function beforeDispatch(CakeEvent $event) {
<ide> }
<ide>
<ide> /**
<ide> public function beforeDispatch($event) {
<ide> * keys in the data property.
<ide> * @return mixed boolean to stop the event dispatching or null to continue
<ide> **/
<del> public function afterDispatch($event) {
<add> public function afterDispatch(CakeEvent $event) {
<ide> }
<ide> }
<ide><path>lib/Cake/Routing/Filter/AssetDispatcher.php
<ide> class AssetDispatcher extends DispatcherFilter {
<ide> * @param CakeEvent $event containing the request and response object
<ide> * @return CakeResponse if the client is requesting a recognized asset, null otherwise
<ide> */
<del> public function beforeDispatch($event) {
<add> public function beforeDispatch(CakeEvent $event) {
<ide> $url = $event->data['request']->url;
<ide> if (strpos($url, '..') !== false || strpos($url, '.') === false) {
<ide> return;
<ide> public function beforeDispatch($event) {
<ide> * @param CakeEvent $event containing the request and response object
<ide> * @return CakeResponse if the client is requesting a recognized asset, null otherwise
<ide> */
<del> protected function _filterAsset($event) {
<add> protected function _filterAsset(CakeEvent $event) {
<ide> $url = $event->data['request']->url;
<ide> $response = $event->data['response'];
<ide> $filters = Configure::read('Asset.filter');
<ide><path>lib/Cake/Routing/Filter/CacheDispatcher.php
<ide> class CacheDispatcher extends DispatcherFilter {
<ide> * @param CakeEvent $event containing the request and response object
<ide> * @return CakeResponse with cached content if found, null otherwise
<ide> */
<del> public function beforeDispatch($event) {
<add> public function beforeDispatch(CakeEvent $event) {
<ide> if (Configure::read('Cache.check') !== true) {
<ide> return;
<ide> }
| 3
|
Ruby
|
Ruby
|
return nil for out-of-bound parameters
|
09b46c7dbea6ac2ac09ba4a41fae0bf628a775f3
|
<ide><path>activesupport/lib/active_support/multibyte/chars.rb
<ide> def split(*args)
<ide> # string.mb_chars.slice!(0..3) # => #<ActiveSupport::Multibyte::Chars:0x00000002eb80a0 @wrapped_string="Welo">
<ide> # string # => 'me'
<ide> def slice!(*args)
<del> chars(@wrapped_string.slice!(*args))
<add> string_sliced = @wrapped_string.slice!(*args)
<add> string_sliced ? chars(string_sliced) : nil
<ide> end
<ide>
<ide> # Reverses all characters in the string.
<ide><path>activesupport/test/multibyte_chars_test.rb
<ide> def test_slice_bang_returns_sliced_out_substring
<ide> assert_equal 'にち', @chars.slice!(1..2)
<ide> end
<ide>
<add> def test_slice_bang_returns_nil_on_out_of_bound_arguments
<add> assert_equal nil, @chars.mb_chars.slice!(9..10)
<add> end
<add>
<ide> def test_slice_bang_removes_the_slice_from_the_receiver
<ide> chars = 'úüù'.mb_chars
<ide> chars.slice!(0,2)
| 2
|
PHP
|
PHP
|
change method order
|
e86f58a62ef54d5e28f7f53abe928bc94116d17c
|
<ide><path>src/Illuminate/Routing/CreatesRegularExpressionRouteConstraints.php
<ide> trait CreatesRegularExpressionRouteConstraints
<ide> {
<ide> /**
<del> * Specify that the given route parameters must be numeric.
<add> * Specify that the given route parameters must be alphabetic.
<ide> *
<ide> * @param array|string $parameters
<ide> * @return $this
<ide> */
<del> public function whereNumber($parameters)
<add> public function whereAlpha($parameters)
<ide> {
<del> return $this->assignExpressionToParameters($parameters, '[0-9]+');
<add> return $this->assignExpressionToParameters($parameters, '[a-zA-Z]+');
<ide> }
<ide>
<ide> /**
<del> * Specify that the given route parameters must be alphabetic.
<add> * Specify that the given route parameters must be numeric.
<ide> *
<ide> * @param array|string $parameters
<ide> * @return $this
<ide> */
<del> public function whereAlpha($parameters)
<add> public function whereNumber($parameters)
<ide> {
<del> return $this->assignExpressionToParameters($parameters, '[a-zA-Z]+');
<add> return $this->assignExpressionToParameters($parameters, '[0-9]+');
<ide> }
<ide>
<ide> /**
| 1
|
Python
|
Python
|
improve clip docstring
|
663bc5680b1d1d4954f71e7f02f40d0daf6b4209
|
<ide><path>numpy/core/fromnumeric.py
<ide> def clip(a, a_min, a_max, out=None):
<ide> a_min : scalar or array_like
<ide> Minimum value.
<ide> a_max : scalar or array_like
<del> Maximum value. If `a_min` or `a_max` are array_like, then they will
<del> be broadcasted to the shape of `a`.
<add> Maximum value. If `a_min` or `a_max` are array_like, then the
<add> three arrays will be broadcasted to match their shapes.
<ide> out : ndarray, optional
<ide> The results will be placed in this array. It may be the input
<ide> array for in-place clipping. `out` must be of the right shape
<ide> def clip(a, a_min, a_max, out=None):
<ide> >>> a = np.arange(10)
<ide> >>> a
<ide> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
<del> >>> np.clip(a, [3,4,1,1,1,4,4,4,4,4], 8)
<add> >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8)
<ide> array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
<ide>
<ide> """
| 1
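The docstring change above clarifies that `np.clip` broadcasts `a`, `a_min`, and `a_max` to a common shape rather than only stretching the bounds to match `a`. A small runnable illustration of the per-element bounds from the docstring's own example:

```python
import numpy as np

a = np.arange(10)

# a_min is given per element; a, a_min, and a_max broadcast together,
# so each element of `a` is clamped against its own lower bound.
clipped = np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8)
print(clipped)  # → [3 4 2 3 4 5 6 7 8 8]

# Scalar bounds broadcast the same way.
print(np.clip(a, 2, 7))  # → [2 2 2 3 4 5 6 7 7 7]
```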
|
Go
|
Go
|
use prefix naming for build tests
|
66cd3640f16f5b91893dacd6de3f8c5ae55e2f2c
|
<ide><path>integration-cli/docker_cli_build_test.go
<ide> func TestBuildSixtySteps(t *testing.T) {
<ide> logDone("build - build an image with sixty build steps")
<ide> }
<ide>
<del>func TestAddSingleFileToRoot(t *testing.T) {
<add>func TestBuildAddSingleFileToRoot(t *testing.T) {
<ide> testDirName := "SingleFileToRoot"
<ide> sourceDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd", testDirName)
<ide> buildDirectory, err := ioutil.TempDir("", "test-build-add")
<ide> func TestAddSingleFileToRoot(t *testing.T) {
<ide> }
<ide>
<ide> // Issue #3960: "ADD src ." hangs
<del>func TestAddSingleFileToWorkdir(t *testing.T) {
<add>func TestBuildAddSingleFileToWorkdir(t *testing.T) {
<ide> testDirName := "SingleFileToWorkdir"
<ide> sourceDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd", testDirName)
<ide> buildDirectory, err := ioutil.TempDir("", "test-build-add")
<ide> func TestAddSingleFileToWorkdir(t *testing.T) {
<ide> logDone("build - add single file to workdir")
<ide> }
<ide>
<del>func TestAddSingleFileToExistDir(t *testing.T) {
<add>func TestBuildAddSingleFileToExistDir(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "SingleFileToExistDir")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func TestAddSingleFileToExistDir(t *testing.T) {
<ide> logDone("build - add single file to existing dir")
<ide> }
<ide>
<del>func TestMultipleFiles(t *testing.T) {
<add>func TestBuildCopyAddMultipleFiles(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestCopy")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "MultipleFiles")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func TestMultipleFiles(t *testing.T) {
<ide> logDone("build - mulitple file copy/add tests")
<ide> }
<ide>
<del>func TestAddMultipleFilesToFile(t *testing.T) {
<add>func TestBuildAddMultipleFilesToFile(t *testing.T) {
<ide> name := "testaddmultiplefilestofile"
<ide> defer deleteImages(name)
<ide> ctx, err := fakeContext(`FROM scratch
<ide> func TestAddMultipleFilesToFile(t *testing.T) {
<ide> logDone("build - multiple add files to file")
<ide> }
<ide>
<del>func TestCopyMultipleFilesToFile(t *testing.T) {
<add>func TestBuildCopyMultipleFilesToFile(t *testing.T) {
<ide> name := "testcopymultiplefilestofile"
<ide> defer deleteImages(name)
<ide> ctx, err := fakeContext(`FROM scratch
<ide> func TestCopyMultipleFilesToFile(t *testing.T) {
<ide> logDone("build - multiple copy files to file")
<ide> }
<ide>
<del>func TestAddSingleFileToNonExistDir(t *testing.T) {
<add>func TestBuildAddSingleFileToNonExistDir(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "SingleFileToNonExistDir")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func TestAddSingleFileToNonExistDir(t *testing.T) {
<ide> logDone("build - add single file to non-existing dir")
<ide> }
<ide>
<del>func TestAddDirContentToRoot(t *testing.T) {
<add>func TestBuildAddDirContentToRoot(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "DirContentToRoot")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func TestAddDirContentToRoot(t *testing.T) {
<ide> logDone("build - add directory contents to root")
<ide> }
<ide>
<del>func TestAddDirContentToExistDir(t *testing.T) {
<add>func TestBuildAddDirContentToExistDir(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "DirContentToExistDir")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func TestAddDirContentToExistDir(t *testing.T) {
<ide> logDone("build - add directory contents to existing dir")
<ide> }
<ide>
<del>func TestAddWholeDirToRoot(t *testing.T) {
<add>func TestBuildAddWholeDirToRoot(t *testing.T) {
<ide> testDirName := "WholeDirToRoot"
<ide> sourceDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd", testDirName)
<ide> buildDirectory, err := ioutil.TempDir("", "test-build-add")
<ide> func TestAddWholeDirToRoot(t *testing.T) {
<ide> logDone("build - add whole directory to root")
<ide> }
<ide>
<del>func TestAddEtcToRoot(t *testing.T) {
<add>func TestBuildAddEtcToRoot(t *testing.T) {
<ide> buildDirectory := filepath.Join(workingDirectory, "build_tests", "TestAdd")
<ide> out, exitCode, err := dockerCmdInDir(t, buildDirectory, "build", "-t", "testaddimg", "EtcToRoot")
<ide> errorOut(err, t, fmt.Sprintf("build failed to complete: %v %v", out, err))
<ide> func testContextTar(t *testing.T, compression archive.Compression) {
<ide> logDone(fmt.Sprintf("build - build an image with a context tar, compression: %v", compression))
<ide> }
<ide>
<del>func TestContextTarGzip(t *testing.T) {
<add>func TestBuildContextTarGzip(t *testing.T) {
<ide> testContextTar(t, archive.Gzip)
<ide> }
<ide>
<del>func TestContextTarNoCompression(t *testing.T) {
<add>func TestBuildContextTarNoCompression(t *testing.T) {
<ide> testContextTar(t, archive.Uncompressed)
<ide> }
<ide>
<ide> docker.com>"
<ide> logDone("build - validate escaping whitespace")
<ide> }
<ide>
<del>func TestDockerignore(t *testing.T) {
<add>func TestBuildDockerignore(t *testing.T) {
<ide> name := "testbuilddockerignore"
<ide> defer deleteImages(name)
<ide> dockerfile := `
<ide> func TestDockerignore(t *testing.T) {
<ide> logDone("build - test .dockerignore")
<ide> }
<ide>
<del>func TestDockerignoringDockerfile(t *testing.T) {
<add>func TestBuildDockerignoringDockerfile(t *testing.T) {
<ide> name := "testbuilddockerignoredockerfile"
<ide> defer deleteImages(name)
<ide> dockerfile := `
<ide> func TestDockerignoringDockerfile(t *testing.T) {
<ide> logDone("build - test .dockerignore of Dockerfile")
<ide> }
<ide>
<del>func TestDockerignoringWholeDir(t *testing.T) {
<add>func TestBuildDockerignoringWholeDir(t *testing.T) {
<ide> name := "testbuilddockerignorewholedir"
<ide> defer deleteImages(name)
<ide> dockerfile := `
| 1
|
Javascript
|
Javascript
|
fix conflict with type in externalmodule
|
77bd911b2d2de57201af3bfe509dd67273bb046f
|
<ide><path>lib/ExternalModule.js
<ide> class ExternalModule extends Module {
<ide>
<ide> // Info from Factory
<ide> this.request = request;
<del> this.type = type;
<add> this.externalType = type;
<ide> this.userRequest = userRequest;
<ide> this.external = true;
<ide> }
<ide> class ExternalModule extends Module {
<ide> }
<ide>
<ide> getSourceString() {
<del> const request = typeof this.request === "object" ? this.request[this.type] : this.request;
<del> switch(this.type) {
<add> const request = typeof this.request === "object" ? this.request[this.externalType] : this.request;
<add> switch(this.externalType) {
<ide> case "this":
<ide> case "window":
<ide> case "global":
<del> return this.getSourceForGlobalVariableExternal(request, this.type);
<add> return this.getSourceForGlobalVariableExternal(request, this.externalType);
<ide> case "commonjs":
<ide> case "commonjs2":
<ide> return this.getSourceForCommonJsExternal(request);
<ide> class ExternalModule extends Module {
<ide> }
<ide>
<ide> updateHash(hash) {
<del> hash.update(this.type);
<add> hash.update(this.externalType);
<ide> hash.update(JSON.stringify(this.request));
<ide> hash.update(JSON.stringify(Boolean(this.optional)));
<ide> super.updateHash(hash);
<ide><path>lib/UmdMainTemplatePlugin.js
<ide> class UmdMainTemplatePlugin {
<ide> apply(compilation) {
<ide> const mainTemplate = compilation.mainTemplate;
<ide> compilation.templatesPlugin("render-with-entry", (source, chunk, hash) => {
<del> let externals = chunk.getModules().filter(m => m.external && (m.type === "umd" || m.type === "umd2"));
<add> let externals = chunk.getModules().filter(m => m.external && (m.externalType === "umd" || m.externalType === "umd2"));
<ide> const optionalExternals = [];
<ide> let requiredExternals = [];
<ide> if(this.optionalAmdExternalAsGlobal) {
| 2
|
Java
|
Java
|
use latches instead of sleep for unit test
|
9791c2d1e3ad84808d470f584eb4a45abf4c5cb9
|
<ide><path>rxjava-core/src/test/java/rx/observers/SerializedObserverTest.java
<ide>
<ide> import static org.junit.Assert.assertEquals;
<ide> import static org.junit.Assert.assertFalse;
<add>import static org.junit.Assert.assertSame;
<ide> import static org.junit.Assert.assertTrue;
<ide> import static org.junit.Assert.fail;
<ide> import static org.mockito.Matchers.any;
<ide> public void runConcurrencyTest() {
<ide> }
<ide> }
<ide>
<add> /**
<add> * Test that a notification does not get delayed in the queue waiting for the next event to push it through.
<add> *
<add> * @throws InterruptedException
<add> */
<ide> @Test
<del> public void testNotificationDelay() {
<add> public void testNotificationDelay() throws InterruptedException {
<ide> ExecutorService tp = Executors.newFixedThreadPool(2);
<ide>
<add> final CountDownLatch onNextCount = new CountDownLatch(1);
<add> final CountDownLatch latch = new CountDownLatch(1);
<add>
<ide> TestSubscriber<String> to = new TestSubscriber<String>(new Observer<String>() {
<ide>
<ide> @Override
<ide> public void onError(Throwable e) {
<ide>
<ide> @Override
<ide> public void onNext(String t) {
<del> // force it to take time when delivering
<del> // so the second thread will asynchronously enqueue
<add> // know when the first thread gets in
<add> onNextCount.countDown();
<add> // force it to take time when delivering so the second one is enqueued
<ide> try {
<del> Thread.sleep(50);
<add> latch.await();
<ide> } catch (InterruptedException e) {
<del> e.printStackTrace();
<ide> }
<ide> }
<ide>
<ide> public void onNext(String t) {
<ide> Future<?> f1 = tp.submit(new OnNextThread(o, 1));
<ide> Future<?> f2 = tp.submit(new OnNextThread(o, 1));
<ide>
<add> onNextCount.await();
<add>
<add> Thread t1 = to.getLastSeenThread();
<add> System.out.println("first onNext on thread: " + t1);
<add>
<add> latch.countDown();
<add>
<ide> waitOnThreads(f1, f2);
<ide> // not completed yet
<ide>
<ide> assertEquals(2, to.getOnNextEvents().size());
<add>
<add> Thread t2 = to.getLastSeenThread();
<add> System.out.println("second onNext on thread: " + t2);
<add>
<add> assertSame(t1, t2);
<add>
<ide> System.out.println(to.getOnNextEvents());
<ide> o.onCompleted();
<ide> System.out.println(to.getOnNextEvents());
| 1
|
Text
|
Text
|
enable chinese link
|
3dd33d87edd7ef4dfd762db16ac418a6bded25d8
|
<ide><path>docs/_translations.md
<ide>
<ide> <div class='i18n-lang-list'>
<ide>
<add>- [Chinese](/i18n/chinese/index.md)
<ide> - [English](/index.md)
<ide> - [Español](/i18n/espanol/index.md)
<ide>
| 1
|
Javascript
|
Javascript
|
fix incorrect `typeof` import
|
a2d9a41354d40c2737a64c94308bd488562fd713
|
<ide><path>packages/ember-views/lib/views/core_view.js
<ide> import ActionHandler from "ember-runtime/mixins/action_handler";
<ide>
<ide> import { get } from "ember-metal/property_get";
<ide>
<del>import { typeOf } from "ember-metal/utils";
<add>import { typeOf } from "ember-runtime/utils";
<ide> import { internal } from "htmlbars-runtime";
<ide>
<ide> function K() { return this; }
| 1
|
Ruby
|
Ruby
|
add build error checks
|
262eaca56e9efbb21a20be2fe83af563c9b9289e
|
<ide><path>Library/Homebrew/diagnostic.rb
<ide> def fatal_development_tools_checks
<ide> %w[
<ide> ].freeze
<ide> end
<add>
<add> def build_error_checks
<add> (development_tools_checks + %w[
<add> ]).freeze
<ide> end
<ide>
<ide> def check_for_installed_developer_tools
<ide><path>Library/Homebrew/extend/os/mac/diagnostic.rb
<ide> def fatal_development_tools_checks
<ide> check_clt_minimum_version
<ide> ].freeze
<ide> end
<add>
<add> def build_error_checks
<add> (development_tools_checks + %w[
<add> check_for_unsupported_macos
<add> ]).freeze
<ide> end
<ide>
<ide> def check_for_unsupported_macos
| 2
|
Python
|
Python
|
add additional assert
|
f93432e6e227f721d07f5ec5343a5e5915d7bce3
|
<ide><path>libcloud/test/dns/test_route53.py
<ide> def test_list_records(self):
<ide> self.assertEqual(record.type, RecordType.A)
<ide> self.assertEqual(record.data, '208.111.35.173')
<ide>
<del> mx_record = records[3]
<del> self.assertEqual(mx_record.type, RecordType.MX)
<del> self.assertEqual(mx_record.data, 'ASPMX.L.GOOGLE.COM.')
<del> self.assertEqual(mx_record.extra['priority'], 1)
<add> record = records[3]
<add> self.assertEqual(record.type, RecordType.MX)
<add> self.assertEqual(record.data, 'ASPMX.L.GOOGLE.COM.')
<add> self.assertEqual(record.extra['priority'], 1)
<add>
<add> record = records[4]
<add> self.assertEqual(record.type, RecordType.MX)
<add> self.assertEqual(record.data, 'ALT1.ASPMX.L.GOOGLE.COM.')
<add> self.assertEqual(record.extra['priority'], 5)
<ide>
<ide> def test_get_zone(self):
<ide> zone = self.driver.get_zone(zone_id='47234')
| 1
|
Javascript
|
Javascript
|
add assertion on 'class' for attributebindings
|
e5d76eec9e857f3eabe47ccbeac7768ba61f6916
|
<ide><path>packages/ember-views/lib/system/build-component-template.js
<ide> function normalizeComponentAttributes(component, attrs) {
<ide> var attr = attributeBindings[i];
<ide> var colonIndex = attr.indexOf(':');
<ide>
<add> var attrName, expression;
<ide> if (colonIndex !== -1) {
<ide> var attrProperty = attr.substring(0, colonIndex);
<del> var attrName = attr.substring(colonIndex + 1);
<del> normalized[attrName] = ['get', 'view.' + attrProperty];
<add> attrName = attr.substring(colonIndex + 1);
<add> expression = ['get', 'view.' + attrProperty];
<ide> } else if (attrs[attr]) {
<ide> // TODO: For compatibility with 1.x, we probably need to `set`
<ide> // the component's attribute here if it is a CP, but we also
<ide> // probably want to suspend observers and allow the
<ide> // willUpdateAttrs logic to trigger observers at the correct time.
<del> normalized[attr] = ['value', attrs[attr]];
<add> attrName = attr;
<add> expression = ['value', attrs[attr]];
<ide> } else {
<del> normalized[attr] = ['get', 'view.' + attr];
<add> attrName = attr;
<add> expression = ['get', 'view.' + attr];
<ide> }
<add>
<add> Ember.assert('You cannot use class as an attributeBinding, use classNameBindings instead.', attrName !== 'class');
<add>
<add> normalized[attrName] = expression;
<ide> }
<ide> }
<ide>
<ide><path>packages/ember-views/tests/views/view/attribute_bindings_test.js
<ide> QUnit.test("attributeBindings should not fail if view has been destroyed", funct
<ide> ok(!error, error);
<ide> });
<ide>
<del>QUnit.skip("asserts if an attributeBinding is setup on class", function() {
<add>QUnit.test("asserts if an attributeBinding is setup on class", function() {
<ide> view = EmberView.create({
<ide> attributeBindings: ['class']
<ide> });
<ide>
<ide> expectAssertion(function() {
<ide> appendView();
<ide> }, 'You cannot use class as an attributeBinding, use classNameBindings instead.');
<add>
<add> // Remove render node to avoid "Render node exists without concomitant env"
<add> // assertion on teardown.
<add> view.renderNode = null;
<ide> });
<ide>
<ide> QUnit.test("blacklists href bindings based on protocol", function() {
| 2
|
Mixed
|
Go
|
forbid client piping to tty enabled container
|
67e3ddb75ff27b8de0022e330413b4308ec5b010
|
<ide><path>api/client/cli.go
<ide> package client
<ide> import (
<ide> "crypto/tls"
<ide> "encoding/json"
<add> "errors"
<ide> "fmt"
<ide> "io"
<ide> "net"
<ide> func (cli *DockerCli) LoadConfigFile() (err error) {
<ide> return err
<ide> }
<ide>
<add>func (cli *DockerCli) CheckTtyInput(attachStdin, ttyMode bool) error {
<add> // In order to attach to a container tty, input stream for the client must
<add> // be a tty itself: redirecting or piping the client standard input is
<add> // incompatible with `docker run -t`, `docker exec -t` or `docker attach`.
<add> if ttyMode && attachStdin && !cli.isTerminalIn {
<add> return errors.New("cannot enable tty mode on non tty input")
<add> }
<add> return nil
<add>}
<add>
<ide> func NewDockerCli(in io.ReadCloser, out, err io.Writer, key libtrust.PrivateKey, proto, addr string, tlsConfig *tls.Config) *DockerCli {
<ide> var (
<ide> inFd uintptr
<ide><path>api/client/commands.go
<ide> func (cli *DockerCli) CmdAttach(args ...string) error {
<ide> tty = config.GetBool("Tty")
<ide> )
<ide>
<add> if err := cli.CheckTtyInput(!*noStdin, tty); err != nil {
<add> return err
<add> }
<add>
<ide> if tty && cli.isTerminalOut {
<ide> if err := cli.monitorTtySize(cmd.Arg(0), false); err != nil {
<ide> log.Debugf("Error monitoring TTY size: %s", err)
<ide> func (cli *DockerCli) CmdRun(args ...string) error {
<ide> return nil
<ide> }
<ide>
<del> if *flDetach {
<add> if !*flDetach {
<add> if err := cli.CheckTtyInput(config.AttachStdin, config.Tty); err != nil {
<add> return err
<add> }
<add> } else {
<ide> if fl := cmd.Lookup("attach"); fl != nil {
<ide> flAttach = fl.Value.(*opts.ListOpts)
<ide> if flAttach.Len() != 0 {
<ide> func (cli *DockerCli) CmdExec(args ...string) error {
<ide> return nil
<ide> }
<ide>
<del> if execConfig.Detach {
<add> if !execConfig.Detach {
<add> if err := cli.CheckTtyInput(execConfig.AttachStdin, execConfig.Tty); err != nil {
<add> return err
<add> }
<add> } else {
<ide> if _, _, err := readBody(cli.call("POST", "/exec/"+execID+"/start", execConfig, false)); err != nil {
<ide> return err
<ide> }
<ide><path>docs/man/docker-attach.1.md
<ide> container, or `CTRL-\` to get a stacktrace of the Docker client when it quits.
<ide> When you detach from a container the exit code will be returned to
<ide> the client.
<ide>
<add>It is forbidden to redirect the standard input of a docker attach command while
<add>attaching to a tty-enabled container (i.e. launched with `-t`).
<add>
<ide> # OPTIONS
<ide> **--no-stdin**=*true*|*false*
<ide> Do not attach STDIN. The default is *false*.
<ide><path>docs/man/docker-exec.1.md
<ide> container is unpaused, and then run
<ide> **-t**, **--tty**=*true*|*false*
<ide> Allocate a pseudo-TTY. The default is *false*.
<ide>
<add>The **-t** option is incompatible with a redirection of the docker client
<add>standard input.
<add>
<ide> # HISTORY
<ide> November 2014, updated by Sven Dowideit <SvenDowideit@home.org.au>
<ide><path>docs/man/docker-run.1.md
<ide> outside of a container on the host.
<ide> input of any container. This can be used, for example, to run a throwaway
<ide> interactive shell. The default value is false.
<ide>
<add>The **-t** option is incompatible with a redirection of the docker client
<add>standard input.
<add>
<ide> **-u**, **--user**=""
<ide> Username or UID
<ide>
<ide><path>docs/sources/reference/run.md
<ide> specify to which of the three standard streams (`STDIN`, `STDOUT`,
<ide>
<ide> $ sudo docker run -a stdin -a stdout -i -t ubuntu /bin/bash
<ide>
<del>For interactive processes (like a shell) you will typically want a tty
<del>as well as persistent standard input (`STDIN`), so you'll use `-i -t`
<del>together in most interactive cases.
<add>For interactive processes (like a shell), you must use `-i -t` together in
<add>order to allocate a tty for the container process. Specifying `-t` is however
<add>forbidden when the client standard input is redirected or piped, such as in:
<add>`echo test | docker run -i busybox cat`.
<ide>
<ide> ## Container identification
<ide>
<ide><path>integration-cli/docker_cli_attach_test.go
<ide> func TestAttachMultipleAndRestart(t *testing.T) {
<ide>
<ide> logDone("attach - multiple attach")
<ide> }
<add>
<add>func TestAttachTtyWithoutStdin(t *testing.T) {
<add> defer deleteAllContainers()
<add>
<add> cmd := exec.Command(dockerBinary, "run", "-d", "-ti", "busybox")
<add> out, _, err := runCommandWithOutput(cmd)
<add> if err != nil {
<add> t.Fatalf("failed to start container: %v (%v)", out, err)
<add> }
<add>
<add> id := strings.TrimSpace(out)
<add> if err := waitRun(id); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> defer func() {
<add> cmd := exec.Command(dockerBinary, "kill", id)
<add> if out, _, err := runCommandWithOutput(cmd); err != nil {
<add> t.Fatalf("failed to kill container: %v (%v)", out, err)
<add> }
<add> }()
<add>
<add> done := make(chan struct{})
<add> go func() {
<add> defer close(done)
<add>
<add> cmd := exec.Command(dockerBinary, "attach", id)
<add> if _, err := cmd.StdinPipe(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> expected := "cannot enable tty mode"
<add> if out, _, err := runCommandWithOutput(cmd); err == nil {
<add> t.Fatal("attach should have failed")
<add> } else if !strings.Contains(out, expected) {
<add>			t.Fatalf("attach failed with error %q: expected %q", out, expected)
<add> }
<add> }()
<add>
<add> select {
<add> case <-done:
<add> case <-time.After(attachWait):
<add> t.Fatal("attach is running but should have failed")
<add> }
<add>
<add> logDone("attach - forbid piped stdin to tty enabled container")
<add>}
<ide><path>integration-cli/docker_cli_exec_test.go
<ide> func TestExecTtyCloseStdin(t *testing.T) {
<ide> t.Fatal(out, err)
<ide> }
<ide>
<del> cmd = exec.Command(dockerBinary, "exec", "-it", "exec_tty_stdin", "cat")
<add> cmd = exec.Command(dockerBinary, "exec", "-i", "exec_tty_stdin", "cat")
<ide> stdinRw, err := cmd.StdinPipe()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> func TestExecTtyCloseStdin(t *testing.T) {
<ide>
<ide> logDone("exec - stdin is closed properly with tty enabled")
<ide> }
<add>
<add>func TestExecTtyWithoutStdin(t *testing.T) {
<add> defer deleteAllContainers()
<add>
<add> cmd := exec.Command(dockerBinary, "run", "-d", "-ti", "busybox")
<add> out, _, err := runCommandWithOutput(cmd)
<add> if err != nil {
<add> t.Fatalf("failed to start container: %v (%v)", out, err)
<add> }
<add>
<add> id := strings.TrimSpace(out)
<add> if err := waitRun(id); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> defer func() {
<add> cmd := exec.Command(dockerBinary, "kill", id)
<add> if out, _, err := runCommandWithOutput(cmd); err != nil {
<add> t.Fatalf("failed to kill container: %v (%v)", out, err)
<add> }
<add> }()
<add>
<add> done := make(chan struct{})
<add> go func() {
<add> defer close(done)
<add>
<add> cmd := exec.Command(dockerBinary, "exec", "-ti", id, "true")
<add> if _, err := cmd.StdinPipe(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> expected := "cannot enable tty mode"
<add> if out, _, err := runCommandWithOutput(cmd); err == nil {
<add> t.Fatal("exec should have failed")
<add> } else if !strings.Contains(out, expected) {
<add>			t.Fatalf("exec failed with error %q: expected %q", out, expected)
<add> }
<add> }()
<add>
<add> select {
<add> case <-done:
<add> case <-time.After(3 * time.Second):
<add> t.Fatal("exec is running but should have failed")
<add> }
<add>
<add> logDone("exec - forbid piped stdin to tty enabled container")
<add>}
<ide><path>integration-cli/docker_cli_run_test.go
<ide> func TestRunPortFromDockerRangeInUse(t *testing.T) {
<ide>
<ide> logDone("run - find another port if port from autorange already bound")
<ide> }
<add>
<add>func TestRunTtyWithPipe(t *testing.T) {
<add> defer deleteAllContainers()
<add>
<add> done := make(chan struct{})
<add> go func() {
<add> defer close(done)
<add>
<add> cmd := exec.Command(dockerBinary, "run", "-ti", "busybox", "true")
<add> if _, err := cmd.StdinPipe(); err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> expected := "cannot enable tty mode"
<add> if out, _, err := runCommandWithOutput(cmd); err == nil {
<add> t.Fatal("run should have failed")
<add> } else if !strings.Contains(out, expected) {
<add>			t.Fatalf("run failed with error %q: expected %q", out, expected)
<add> }
<add> }()
<add>
<add> select {
<add> case <-done:
<add> case <-time.After(3 * time.Second):
<add> t.Fatal("container is running but should have failed")
<add> }
<add>
<add> logDone("run - forbid piped stdin with tty")
<add>}
<ide><path>integration/commands_test.go
<ide> import (
<ide> "github.com/docker/docker/pkg/term"
<ide> "github.com/docker/docker/utils"
<ide> "github.com/docker/libtrust"
<add> "github.com/kr/pty"
<ide> )
<ide>
<ide> func closeWrap(args ...io.Closer) error {
<ide> func TestRunDisconnect(t *testing.T) {
<ide> })
<ide> }
<ide>
<del>// Expected behaviour: the process stay alive when the client disconnects
<del>// but the client detaches.
<del>func TestRunDisconnectTty(t *testing.T) {
<del>
<del> stdin, stdinPipe := io.Pipe()
<add>// TestRunDetach checks attaching and detaching with the escape sequence.
<add>func TestRunDetach(t *testing.T) {
<ide> stdout, stdoutPipe := io.Pipe()
<del> key, err := libtrust.GenerateECP256PrivateKey()
<add> cpty, tty, err := pty.Open()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<del> defer cleanup(globalEngine, t)
<del>
<del> c1 := make(chan struct{})
<del> go func() {
<del> defer close(c1)
<del> // We're simulating a disconnect so the return value doesn't matter. What matters is the
<del> // fact that CmdRun returns.
<del> if err := cli.CmdRun("-i", "-t", unitTestImageID, "/bin/cat"); err != nil {
<del> log.Debugf("Error CmdRun: %s", err)
<del> }
<del> }()
<del>
<del> container := waitContainerStart(t, 10*time.Second)
<del>
<del> state := setRaw(t, container)
<del> defer unsetRaw(t, container, state)
<del>
<del> // Client disconnect after run -i should keep stdin out in TTY mode
<del> setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<del> t.Fatal(err)
<del> }
<del> })
<del>
<del> // Close pipes (simulate disconnect)
<del> if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
<del> t.Fatal(err)
<del> }
<del>
<del> // wait for CmdRun to return
<del> setTimeout(t, "Waiting for CmdRun timed out", 5*time.Second, func() {
<del> <-c1
<del> })
<del>
<del> // In tty mode, we expect the process to stay alive even after client's stdin closes.
<del>
<del> // Give some time to monitor to do his thing
<del> container.WaitStop(500 * time.Millisecond)
<del> if !container.IsRunning() {
<del> t.Fatalf("/bin/cat should still be running after closing stdin (tty mode)")
<del> }
<del>}
<del>
<del>// TestRunDetach checks attaching and detaching with the escape sequence.
<del>func TestRunDetach(t *testing.T) {
<del>
<del> stdin, stdinPipe := io.Pipe()
<del> stdout, stdoutPipe := io.Pipe()
<ide> key, err := libtrust.GenerateECP256PrivateKey()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cli := client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide> defer cleanup(globalEngine, t)
<ide>
<ide> ch := make(chan struct{})
<ide> func TestRunDetach(t *testing.T) {
<ide> defer unsetRaw(t, container, state)
<ide>
<ide> setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<add> if err := assertPipe("hello\n", "hello", stdout, cpty, 150); err != nil {
<ide> t.Fatal(err)
<ide> }
<ide> })
<ide>
<ide> setTimeout(t, "Escape sequence timeout", 5*time.Second, func() {
<del> stdinPipe.Write([]byte{16})
<add> cpty.Write([]byte{16})
<ide> time.Sleep(100 * time.Millisecond)
<del> stdinPipe.Write([]byte{17})
<add> cpty.Write([]byte{17})
<ide> })
<ide>
<ide> // wait for CmdRun to return
<ide> setTimeout(t, "Waiting for CmdRun timed out", 15*time.Second, func() {
<ide> <-ch
<ide> })
<del> closeWrap(stdin, stdinPipe, stdout, stdoutPipe)
<add> closeWrap(cpty, stdout, stdoutPipe)
<ide>
<ide> time.Sleep(500 * time.Millisecond)
<ide> if !container.IsRunning() {
<ide> func TestRunDetach(t *testing.T) {
<ide>
<ide> // TestAttachDetach checks that attach in tty mode can be detached using the long container ID
<ide> func TestAttachDetach(t *testing.T) {
<del> stdin, stdinPipe := io.Pipe()
<ide> stdout, stdoutPipe := io.Pipe()
<add> cpty, tty, err := pty.Open()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<ide> key, err := libtrust.GenerateECP256PrivateKey()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cli := client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide> defer cleanup(globalEngine, t)
<ide>
<ide> ch := make(chan struct{})
<ide> func TestAttachDetach(t *testing.T) {
<ide> state := setRaw(t, container)
<ide> defer unsetRaw(t, container, state)
<ide>
<del> stdin, stdinPipe = io.Pipe()
<ide> stdout, stdoutPipe = io.Pipe()
<del> cli = client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cpty, tty, err = pty.Open()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> cli = client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide>
<ide> ch = make(chan struct{})
<ide> go func() {
<ide> func TestAttachDetach(t *testing.T) {
<ide> }()
<ide>
<ide> setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<add> if err := assertPipe("hello\n", "hello", stdout, cpty, 150); err != nil {
<ide> if err != io.ErrClosedPipe {
<ide> t.Fatal(err)
<ide> }
<ide> }
<ide> })
<ide>
<ide> setTimeout(t, "Escape sequence timeout", 5*time.Second, func() {
<del> stdinPipe.Write([]byte{16})
<add> cpty.Write([]byte{16})
<ide> time.Sleep(100 * time.Millisecond)
<del> stdinPipe.Write([]byte{17})
<add> cpty.Write([]byte{17})
<ide> })
<ide>
<ide> // wait for CmdRun to return
<ide> setTimeout(t, "Waiting for CmdAttach timed out", 15*time.Second, func() {
<ide> <-ch
<ide> })
<ide>
<del> closeWrap(stdin, stdinPipe, stdout, stdoutPipe)
<add> closeWrap(cpty, stdout, stdoutPipe)
<ide>
<ide> time.Sleep(500 * time.Millisecond)
<ide> if !container.IsRunning() {
<ide> func TestAttachDetach(t *testing.T) {
<ide>
<ide> // TestAttachDetachTruncatedID checks that attach in tty mode can be detached
<ide> func TestAttachDetachTruncatedID(t *testing.T) {
<del> stdin, stdinPipe := io.Pipe()
<ide> stdout, stdoutPipe := io.Pipe()
<add> cpty, tty, err := pty.Open()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<ide> key, err := libtrust.GenerateECP256PrivateKey()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cli := client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide> defer cleanup(globalEngine, t)
<ide>
<ide> // Discard the CmdRun output
<ide> func TestAttachDetachTruncatedID(t *testing.T) {
<ide> state := setRaw(t, container)
<ide> defer unsetRaw(t, container, state)
<ide>
<del> stdin, stdinPipe = io.Pipe()
<ide> stdout, stdoutPipe = io.Pipe()
<del> cli = client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cpty, tty, err = pty.Open()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<add> cli = client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide>
<ide> ch := make(chan struct{})
<ide> go func() {
<ide> func TestAttachDetachTruncatedID(t *testing.T) {
<ide> }()
<ide>
<ide> setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<add> if err := assertPipe("hello\n", "hello", stdout, cpty, 150); err != nil {
<ide> if err != io.ErrClosedPipe {
<ide> t.Fatal(err)
<ide> }
<ide> }
<ide> })
<ide>
<ide> setTimeout(t, "Escape sequence timeout", 5*time.Second, func() {
<del> stdinPipe.Write([]byte{16})
<add> cpty.Write([]byte{16})
<ide> time.Sleep(100 * time.Millisecond)
<del> stdinPipe.Write([]byte{17})
<add> cpty.Write([]byte{17})
<ide> })
<ide>
<ide> // wait for CmdRun to return
<ide> setTimeout(t, "Waiting for CmdAttach timed out", 15*time.Second, func() {
<ide> <-ch
<ide> })
<del> closeWrap(stdin, stdinPipe, stdout, stdoutPipe)
<add> closeWrap(cpty, stdout, stdoutPipe)
<ide>
<ide> time.Sleep(500 * time.Millisecond)
<ide> if !container.IsRunning() {
<ide> func TestAttachDetachTruncatedID(t *testing.T) {
<ide>
<ide> // Expected behaviour, the process stays alive when the client disconnects
<ide> func TestAttachDisconnect(t *testing.T) {
<del> stdin, stdinPipe := io.Pipe()
<ide> stdout, stdoutPipe := io.Pipe()
<add> cpty, tty, err := pty.Open()
<add> if err != nil {
<add> t.Fatal(err)
<add> }
<add>
<ide> key, err := libtrust.GenerateECP256PrivateKey()
<ide> if err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
<del> cli := client.NewDockerCli(stdin, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<add> cli := client.NewDockerCli(tty, stdoutPipe, ioutil.Discard, key, testDaemonProto, testDaemonAddr, nil)
<ide> defer cleanup(globalEngine, t)
<ide>
<ide> go func() {
<ide> func TestAttachDisconnect(t *testing.T) {
<ide> }()
<ide>
<ide> setTimeout(t, "First read/write assertion timed out", 2*time.Second, func() {
<del> if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 150); err != nil {
<add> if err := assertPipe("hello\n", "hello", stdout, cpty, 150); err != nil {
<ide> t.Fatal(err)
<ide> }
<ide> })
<ide> // Close pipes (client disconnects)
<del> if err := closeWrap(stdin, stdinPipe, stdout, stdoutPipe); err != nil {
<add> if err := closeWrap(cpty, stdout, stdoutPipe); err != nil {
<ide> t.Fatal(err)
<ide> }
<ide>
| 10
|
Mixed
|
Python
|
fix silent evaluation
|
b9f59118bf8782e97ce71c472152f46d52b1f501
|
<ide><path>spacy/cli/_util.py
<ide> import sys
<ide> import shutil
<ide> from pathlib import Path
<del>from wasabi import msg
<add>from wasabi import msg, Printer
<ide> import srsly
<ide> import hashlib
<ide> import typer
<ide> def string_to_list(value: str, intify: bool = False) -> Union[List[str], List[in
<ide> return result
<ide>
<ide>
<del>def setup_gpu(use_gpu: int) -> None:
<add>def setup_gpu(use_gpu: int, silent=None) -> None:
<ide> """Configure the GPU and log info."""
<add> if silent is not None:
<add> msg = Printer(no_print=silent, pretty=not silent)
<ide> if use_gpu >= 0:
<ide> msg.info(f"Using GPU: {use_gpu}")
<ide> require_gpu(use_gpu)
<ide><path>spacy/cli/evaluate.py
<ide> def evaluate(
<ide> ) -> Dict[str, Any]:
<ide> msg = Printer(no_print=silent, pretty=not silent)
<ide> fix_random_seed()
<del> setup_gpu(use_gpu)
<add> setup_gpu(use_gpu, silent=silent)
<ide> data_path = util.ensure_path(data_path)
<ide> output_path = util.ensure_path(output)
<ide> displacy_path = util.ensure_path(displacy_path)
<ide><path>website/docs/usage/processing-pipelines.md
<ide> While you could use a registered function or a file loader like
<ide> [`srsly.read_json.v1`](/api/top-level#file_readers) as an argument of the
<ide> component factory, this approach is problematic: the component factory runs
<ide> **every time the component is created**. This means it will run when creating
<del>the `nlp` object before training, but also every a user loads your pipeline. So
<del>your runtime pipeline would either depend on a local path on your file system,
<del>or it's loaded twice: once when the component is created, and then again when
<del>the data is by `from_disk`.
<add>the `nlp` object before training, but also every time a user loads your
<add>pipeline. So your runtime pipeline would either depend on a local path on your
<add>file system, or it's loaded twice: once when the component is created, and then
<add>again when the data is loaded by `from_disk`.
<ide>
<ide> > ```ini
<ide> > ### config.cfg
| 3
|
Javascript
|
Javascript
|
show function names
|
8e246acd0e80d35356be6c592289487549a49300
|
<ide><path>lib/sys.js
<ide> exports.inspect = function (obj, showHidden, depth, colors) {
<ide> if (isRegExp(value)) {
<ide> return stylize('' + value, 'regexp');
<ide> } else {
<del> return stylize('[Function]', 'special');
<add> return stylize('[Function'+ (value.name ? ': '+ value.name : '')+ ']', 'special');
<ide> }
<ide> }
<ide>
<ide> exports.inspect = function (obj, showHidden, depth, colors) {
<ide>
<ide> // Make functions say that they are functions
<ide> if (typeof value === 'function') {
<del> base = (isRegExp(value)) ? ' ' + value : ' [Function]';
<add> base = (isRegExp(value)) ? ' ' + value : ' [Function'+ (value.name ? ': '+ value.name : '')+ ']';
<ide> } else {
<ide> base = "";
<ide> }
| 1
|
Ruby
|
Ruby
|
suggest full path to xcode 4.3 /developer
|
5b0d97efc741019378ee5b40c09796a3c31d2fcb
|
<ide><path>Library/Homebrew/cmd/doctor.rb
<ide> def check_xcode_select_path
<ide> path = `xcode-select -print-path 2>/dev/null`.chomp
<ide> unless File.directory? path and File.file? "#{path}/usr/bin/xcodebuild"
<ide> # won't guess at the path they should use because it's too hard to get right
<add> # We specify /Applications/Xcode.app/Contents/Developer even though
<add> # /Applications/Xcode.app should work because people don't install the new CLI
<add> # tools and then it doesn't work. Let's hope the location doesn't change in the
<add> # future.
<add>
<ide> <<-EOS.undent
<ide> Your Xcode is configured with an invalid path.
<ide> You should change it to the correct path. Please note that there is no correct
<ide> def check_xcode_select_path
<ide> these is (probably) what you want:
<ide>
<ide> sudo xcode-select -switch /Developer
<del> sudo xcode-select -switch /Applications/Xcode.app
<add> sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer
<ide> EOS
<ide> end
<ide> end
| 1
|
Python
|
Python
|
handle non-string input for ip fields
|
aa349fe76729dbea1b8becf1846ce58c70871f35
|
<ide><path>rest_framework/fields.py
<ide> def __init__(self, protocol='both', **kwargs):
<ide> self.validators.extend(validators)
<ide>
<ide> def to_internal_value(self, data):
<del> if data and ':' in data:
<add> if not isinstance(data, six.string_types):
<add> self.fail('invalid', value=data)
<add>
<add> if ':' in data:
<ide> try:
<ide> if self.protocol in ('both', 'ipv6'):
<ide> return clean_ipv6_address(data, self.unpack_ipv4)
<ide><path>tests/test_fields.py
<ide> class TestIPAddressField(FieldValues):
<ide> '127.122.111.2231': ['Enter a valid IPv4 or IPv6 address.'],
<ide> '2001:::9652': ['Enter a valid IPv4 or IPv6 address.'],
<ide> '2001:0db8:85a3:0042:1000:8a2e:0370:73341': ['Enter a valid IPv4 or IPv6 address.'],
<add> 1000: ['Enter a valid IPv4 or IPv6 address.'],
<ide> }
<ide> outputs = {}
<ide> field = serializers.IPAddressField()
| 2
|
PHP
|
PHP
|
increase time comparison range
|
280ff385caae57b71c798e18cf249f1cb94b81f6
|
<ide><path>tests/TestCase/Database/QueryTest.php
<ide> function ($q) {
<ide> $this->assertWithinRange(
<ide> date('U'),
<ide> (new DateTime($result->fetchAll('assoc')[0]['d']))->format('U'),
<del> 5
<add> 10
<ide> );
<ide>
<ide> $query = new Query($this->connection);
| 1
|
Text
|
Text
|
fix docs for
|
631f3a4f593c2d6b3286b528732982391a51830a
|
<ide><path>docs/docs/ref-04-tags-and-attributes.md
<ide> There is also the React-specific attribute `dangerouslySetInnerHTML` ([more here
<ide> ### SVG Attributes
<ide>
<ide> ```
<del>clip-path cx cy d dx dy fill fillOpacity fontFamily fontSize fx fy gradientTransform
<del>gradientUnits markerEnd markerMid markerStart offset opacity
<add>clipPath cx cy d dx dy fill fillOpacity fontFamily fontSize fx fy
<add>gradientTransform gradientUnits markerEnd markerMid markerStart offset opacity
<ide> patternContentUnits patternUnits points preserveAspectRatio r rx ry
<ide> spreadMethod stopColor stopOpacity stroke strokeDasharray strokeLinecap
<ide> strokeOpacity strokeWidth textAnchor transform version viewBox x1 x2 x y1 y2 y
| 1
|
Python
|
Python
|
add more models to common tests
|
4e10acb3e59f5ef52f383e5a82987e672f17b1fd
|
<ide><path>src/transformers/modeling_distilbert.py
<ide> def forward(
<ide> sequence_output = self.dropout(sequence_output)
<ide> logits = self.classifier(sequence_output)
<ide>
<del> outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
<add> outputs = (logits,) + outputs[1:] # add hidden states and attention if they are here
<ide> if labels is not None:
<ide> loss_fct = CrossEntropyLoss()
<ide> # Only keep active parts of the loss
<ide><path>src/transformers/modeling_electra.py
<ide> def forward(
<ide> sequence_output = discriminator_hidden_states[0]
<ide> logits = self.classifier(sequence_output)
<ide>
<del> outputs = (logits,) + discriminator_hidden_states[2:] # add hidden states and attention if they are here
<add> outputs = (logits,) + discriminator_hidden_states[1:] # add hidden states and attention if they are here
<ide>
<ide> if labels is not None:
<ide> if self.num_labels == 1:
<ide><path>src/transformers/modeling_longformer.py
<ide> def __init__(self, config):
<ide> self.longformer = LongformerModel(config)
<ide> self.classifier = LongformerClassificationHead(config)
<ide>
<add> self.init_weights()
<add>
<ide> @add_start_docstrings_to_callable(LONGFORMER_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
<ide> def forward(
<ide> self,
<ide> def forward(
<ide> token_type_ids=token_type_ids,
<ide> position_ids=position_ids,
<ide> inputs_embeds=inputs_embeds,
<add> output_attentions=output_attentions,
<ide> )
<ide> sequence_output = outputs[0]
<ide> logits = self.classifier(sequence_output)
<ide> def __init__(self, config):
<ide> @add_start_docstrings_to_callable(LONGFORMER_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
<ide> def forward(
<ide> self,
<del> input_ids,
<add> input_ids=None,
<ide> attention_mask=None,
<ide> global_attention_mask=None,
<ide> token_type_ids=None,
<ide> def forward(
<ide> token_type_ids=token_type_ids,
<ide> position_ids=position_ids,
<ide> inputs_embeds=inputs_embeds,
<add> output_attentions=output_attentions,
<ide> )
<ide>
<ide> sequence_output = outputs[0]
<ide> def forward(
<ide> token_type_ids=flat_token_type_ids,
<ide> attention_mask=flat_attention_mask,
<ide> global_attention_mask=flat_global_attention_mask,
<add> output_attentions=output_attentions,
<ide> )
<ide> pooled_output = outputs[1]
<ide>
<ide><path>src/transformers/modeling_roberta.py
<ide> def __init__(self, config):
<ide> self.roberta = RobertaModel(config)
<ide> self.classifier = RobertaClassificationHead(config)
<ide>
<add> self.init_weights()
<add>
<ide> @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
<ide> def forward(
<ide> self,
<ide> def __init__(self, config):
<ide> @add_start_docstrings_to_callable(ROBERTA_INPUTS_DOCSTRING.format("(batch_size, sequence_length)"))
<ide> def forward(
<ide> self,
<del> input_ids,
<add> input_ids=None,
<ide> attention_mask=None,
<ide> token_type_ids=None,
<ide> position_ids=None,
<ide><path>tests/test_modeling_distilbert.py
<ide> class DistilBertModelTest(ModelTesterMixin, unittest.TestCase):
<ide>
<ide> all_model_classes = (
<del> (DistilBertModel, DistilBertForMaskedLM, DistilBertForQuestionAnswering, DistilBertForSequenceClassification)
<add> (
<add> DistilBertModel,
<add> DistilBertForMaskedLM,
<add> DistilBertForQuestionAnswering,
<add> DistilBertForSequenceClassification,
<add> DistilBertForTokenClassification,
<add> )
<ide> if is_torch_available()
<ide> else None
<ide> )
<ide><path>tests/test_modeling_electra.py
<ide> class ElectraModelTest(ModelTesterMixin, unittest.TestCase):
<ide>
<ide> all_model_classes = (
<del> (ElectraModel, ElectraForMaskedLM, ElectraForTokenClassification,) if is_torch_available() else ()
<add> (
<add> ElectraModel,
<add> ElectraForPreTraining,
<add> ElectraForMaskedLM,
<add> ElectraForTokenClassification,
<add> ElectraForSequenceClassification,
<add> )
<add> if is_torch_available()
<add> else ()
<ide> )
<ide>
<ide> class ElectraModelTester(object):
<ide><path>tests/test_modeling_longformer.py
<ide> class LongformerModelTest(ModelTesterMixin, unittest.TestCase):
<ide> test_headmasking = False # head masking is not supported
<ide> test_torchscript = False
<ide>
<del> all_model_classes = (LongformerModel, LongformerForMaskedLM,) if is_torch_available() else ()
<add> all_model_classes = (
<add> (
<add> LongformerModel,
<add> LongformerForMaskedLM,
<add> # TODO: make tests pass for those models
<add> # LongformerForSequenceClassification,
<add> # LongformerForQuestionAnswering,
<add> # LongformerForTokenClassification,
<add> # LongformerForMultipleChoice,
<add> )
<add> if is_torch_available()
<add> else ()
<add> )
<ide>
<ide> def setUp(self):
<ide> self.model_tester = LongformerModelTester(self)
<ide><path>tests/test_modeling_roberta.py
<ide> RobertaConfig,
<ide> RobertaModel,
<ide> RobertaForMaskedLM,
<add> RobertaForMultipleChoice,
<add> RobertaForQuestionAnswering,
<ide> RobertaForSequenceClassification,
<ide> RobertaForTokenClassification,
<ide> )
<del> from transformers.modeling_roberta import RobertaEmbeddings, RobertaForMultipleChoice, RobertaForQuestionAnswering
<add> from transformers.modeling_roberta import RobertaEmbeddings
<ide> from transformers.modeling_roberta import ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST
<ide> from transformers.modeling_utils import create_position_ids_from_input_ids
<ide>
<ide>
<ide> @require_torch
<ide> class RobertaModelTest(ModelTesterMixin, unittest.TestCase):
<ide>
<del> all_model_classes = (RobertaForMaskedLM, RobertaModel) if is_torch_available() else ()
<add> all_model_classes = (
<add> (
<add> RobertaForMaskedLM,
<add> RobertaModel,
<add> RobertaForSequenceClassification,
<add> RobertaForTokenClassification,
<add> RobertaForMultipleChoice,
<add> RobertaForQuestionAnswering,
<add> )
<add> if is_torch_available()
<add> else ()
<add> )
<ide>
<ide> class RobertaModelTester(object):
<ide> def __init__(
<ide><path>tests/test_modeling_xlnet.py
<ide> XLNetConfig,
<ide> XLNetModel,
<ide> XLNetLMHeadModel,
<add> XLNetForMultipleChoice,
<ide> XLNetForSequenceClassification,
<ide> XLNetForTokenClassification,
<ide> XLNetForQuestionAnswering,
<ide> class XLNetModelTest(ModelTesterMixin, unittest.TestCase):
<ide> XLNetForTokenClassification,
<ide> XLNetForSequenceClassification,
<ide> XLNetForQuestionAnswering,
<add> XLNetForMultipleChoice,
<ide> )
<ide> if is_torch_available()
<ide> else ()
<ide> def __init__(
<ide> bos_token_id=1,
<ide> eos_token_id=2,
<ide> pad_token_id=5,
<add> num_choices=4,
<ide> ):
<ide> self.parent = parent
<ide> self.batch_size = batch_size
<ide> def __init__(
<ide> self.bos_token_id = bos_token_id
<ide> self.pad_token_id = pad_token_id
<ide> self.eos_token_id = eos_token_id
<add> self.num_choices = num_choices
<ide>
<ide> def prepare_config_and_inputs(self):
<ide> input_ids_1 = ids_tensor([self.batch_size, self.seq_length], self.vocab_size)
| 9
|
Javascript
|
Javascript
|
update hu.js locale
|
b0b015b4a16d61efc48fbe9435f1379f09662fde
|
<ide><path>src/locale/hu.js
<ide> export default moment.defineLocale('hu', {
<ide> ordinal : '%d.',
<ide> week : {
<ide> dow : 1, // Monday is the first day of the week.
<del> doy : 7 // The week that contains Jan 1st is the first week of the year.
<add> doy : 4 // The week that contains Jan 4th is the first week of the year.
<ide> }
<ide> });
<ide>
<ide><path>src/test/locale/hu.js
<ide> test('format', function (assert) {
<ide> ['D Do DD', '14 14. 14'],
<ide> ['d do dddd ddd dd', '0 0. vasárnap vas v'],
<ide> ['DDD DDDo DDDD', '45 45. 045'],
<del> ['w wo ww', '7 7. 07'],
<add> ['w wo ww', '6 6. 06'],
<ide> ['H HH', '15 15'],
<ide> ['m mm', '25 25'],
<ide> ['s ss', '50 50'],
<ide> test('calendar all else', function (assert) {
<ide> });
<ide>
<ide> test('weeks year starting sunday formatted', function (assert) {
<del> assert.equal(moment([2011, 11, 26]).format('w ww wo'), '1 01 1.', 'Dec 26 2011 should be week 1');
<del> assert.equal(moment([2012, 0, 1]).format('w ww wo'), '1 01 1.', 'Jan 1 2012 should be week 1');
<del> assert.equal(moment([2012, 0, 2]).format('w ww wo'), '2 02 2.', 'Jan 2 2012 should be week 2');
<del> assert.equal(moment([2012, 0, 8]).format('w ww wo'), '2 02 2.', 'Jan 8 2012 should be week 2');
<del> assert.equal(moment([2012, 0, 9]).format('w ww wo'), '3 03 3.', 'Jan 9 2012 should be week 3');
<add> assert.equal(moment([2011, 11, 26]).format('w ww wo'), '52 52 52.', 'Dec 26 2011 should be week 52');
<add> assert.equal(moment([2012, 0, 1]).format('w ww wo'), '52 52 52.', 'Jan 1 2012 should be week 52');
<add> assert.equal(moment([2012, 0, 2]).format('w ww wo'), '1 01 1.', 'Jan 2 2012 should be week 1');
<add> assert.equal(moment([2012, 0, 8]).format('w ww wo'), '1 01 1.', 'Jan 8 2012 should be week 1');
<add> assert.equal(moment([2012, 0, 9]).format('w ww wo'), '2 02 2.', 'Jan 9 2012 should be week 2');
<ide> });
<ide>
| 2
|
Java
|
Java
|
improve error message
|
30c06163846f8bc5801d3f754def135ed79eb38b
|
<ide><path>spring-webflux/src/main/java/org/springframework/web/reactive/socket/adapter/ReactorNettyWebSocketSession.java
<ide> public Mono<Void> send(Publisher<WebSocketMessage> messages) {
<ide> @Override
<ide> public Mono<Void> close(CloseStatus status) {
<ide> return Mono.error(new UnsupportedOperationException(
<del> "Currently in Reactor Netty applications are expected to use the " +
<del> "Cancellation returned from subscribing to the \"receive\"-side Flux " +
<del> "in order to close the WebSocket session."));
<add> "Reactor Netty does not support closing the session from anywhere. " +
<add> "You will need to work with the Flux returned from receive() method, " +
<add> "either subscribing to it and using the returned Disposable, " +
<add> "or using an operator that cancels (e.g. take)."));
<ide> }
<ide>
<ide>
| 1
|
Text
|
Text
|
add toc and proper noun section to challenge guide
|
1ef2f224b33ece60a474a9d35fa8e251b466e79b
|
<ide><path>seed/challenge-style-guide.md
<ide> # A guide to designing freeCodeCamp coding challenges
<ide>
<del>> “Talk is cheap. Show me the code.” — Linus Torvalds
<add>> "Talk is cheap. Show me the code." — Linus Torvalds
<ide>
<ide> freeCodeCamp offers 1,200 hours of interactive coding challenges. These are 100% focused on the practical skill of building software. You code the entire time. You learn to code by coding.
<ide>
<ide> You can learn theory through free online university courses. freeCodeCamp will focus instead on helping you learn to code and practice by building apps.
<ide>
<ide> With that practical focus in mind, let’s talk about the requirements for our coding challenges. (Note that these requirements do not apply to our algorithm challenges, checkpoint challenges, or projects.)
<ide>
<add>**Table of Contents**
<add>
<add>- [Proper nouns](#proper-nouns)
<add>- [The 2 minute rule](#the-2-minute-rule)
<add>- [Modularity](#modularity)
<add>- [Naming challenges](#naming-challenges)
<add>- [Writing tests](#writing-tests)
<add>- [Writing instructions](#writing-instructions)
<add>- [Formatting challenge text](#formatting-challenge-text)
<add>- [Formatting seed code](#formatting-seed-code)
<add>- [Why do we have all these rules?](#why-do-we-have-all-these-rules)
<add>
<add>## Proper nouns
<add>
<add>Proper nouns should use correct capitalization when possible. Below is a list of words as they should appear in the challenges.
<add>
<add>- JavaScript (capital letters in "J" and "S" and no abbreviations)
<add>- Node.js
<add>
<add>Front-end development (adjective form with a dash) is when you are working on the front end (noun form with no dash). The same goes for the back end, full stack, and many other compound terms.
<add>
<ide> ## The 2 minute rule
<ide>
<ide> Each challenge should be solvable within 120 seconds by a native English speaker who has completed the challenges leading up to it. This includes the amount of time it takes to read the directions, understand the seeded code, write their own code, and get all the tests to pass.
<ide> Here are specific formatting guidelines for challenge text and examples:
<ide> - Multi-line code examples go in `<blockquote>` tags, and use the `<br>` tag to separate lines. For HTML examples, remember to use escape characters to represent the angle brackets
<ide> - A single horizontal rule (`<hr>` tag) should separate the text discussing the challenge concept and the challenge instructions
<ide> - Additional information in the form of a note should be formatted `<strong>Note</strong><br>Rest of note text...`
<add>- Use double quotes where applicable
<ide>
<ide> ## Formatting seed code
<ide>
<ide> Here are specific formatting guidelines for the challenge seed code:
<ide>
<ide> - Use two spaces to indent
<ide> - JavaScript statements end with a semicolon
<add>- Use double quotes where applicable
<ide>
<ide> ## Why do we have all these rules?
<ide>
| 1
|
Javascript
|
Javascript
|
add test for csv
|
aa6401693df7fff0ef637b3927545a62cf4e147c
|
<ide><path>test/csv/csv-test.js
<add>require("../env");
<add>require("../../d3");
<add>require("../../d3.csv");
<add>
<add>var vows = require("vows"),
<add> assert = require("assert");
<add>
<add>var suite = vows.describe("d3.csv");
<add>
<add>suite.addBatch({
<add> "csv": {
<add> topic: function() {
<add> var cb = this.callback;
<add> return d3.csv("examples/data/sample.csv", function(csv) {
<add> cb(null, csv);
<add> });
<add> },
<add> "invokes the callback with the parsed CSV": function(csv) {
<add> assert.deepEqual(csv, [{"Hello":42,"World":"\"fish\""}]);
<add> },
<add> "overrides the mime type to text/csv": function(csv) {
<add> assert.equal(XMLHttpRequest._last._info.mimeType, "text/csv");
<add> },
<add> "": {
<add> topic: function() {
<add> var cb = this.callback;
<add> return d3.csv("//does/not/exist.csv", function(csv) {
<add> cb(null, csv);
<add> });
<add> },
<add> "invokes the callback with null when an error occurs": function(csv) {
<add> assert.isNull(csv);
<add> }
<add> }
<add> }
<add>});
<add>
<add>suite.export(module);
| 1
|
PHP
|
PHP
|
fix incorrect variable name
|
41ed5871ddae4031c20cc8dac7bdcf8f65234a9a
|
<ide><path>tests/TestCase/View/ViewVarsTraitTest.php
<ide> public function testUndefinedValidViewOptions() {
<ide> $result = $this->subject->viewOptions();
<ide>
<ide> $this->assertTrue(is_array($result));
<del> $this->assertTrue(empty($resulit));
<add> $this->assertTrue(empty($result));
<ide> }
<ide>
<ide> }
| 1
|
Mixed
|
Javascript
|
improve error message for policy failures
|
5b95f0128467d096e6e7ac9948939ae3f061604d
|
<ide><path>test/common/README.md
<ide> const { spawn } = require('child_process');
<ide> spawn(...common.pwdCommand, { stdio: ['pipe'] });
<ide> ```
<ide>
<add>### `requireNoPackageJSONAbove()`
<add>
<add>Throws an `AssertionError` if a `package.json` file is in any ancestor
<add>directory. Such files may interfere with proper test functionality.
<add>
<ide> ### `runWithInvalidFD(func)`
<ide>
<ide> * `func` [<Function>][]
<ide><path>test/common/index.js
<ide> function gcUntil(name, condition) {
<ide> });
<ide> }
<ide>
<add>function requireNoPackageJSONAbove() {
<add> let possiblePackage = path.join(__dirname, '..', 'package.json');
<add> let lastPackage = null;
<add> while (possiblePackage !== lastPackage) {
<add> if (fs.existsSync(possiblePackage)) {
<add> assert.fail(
<add> 'This test shouldn\'t load properties from a package.json above ' +
<add> `its file location. Found package.json at ${possiblePackage}.`);
<add> }
<add> lastPackage = possiblePackage;
<add> possiblePackage = path.join(possiblePackage, '..', '..', 'package.json');
<add> }
<add>}
<add>
<ide> const common = {
<ide> allowGlobals,
<ide> buildType,
<ide> const common = {
<ide> platformTimeout,
<ide> printSkipMessage,
<ide> pwdCommand,
<add> requireNoPackageJSONAbove,
<ide> runWithInvalidFD,
<ide> skip,
<ide> skipIf32Bits,
<ide><path>test/parallel/test-policy-dependencies.js
<ide> const common = require('../common');
<ide> if (!common.hasCrypto)
<ide> common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const fixtures = require('../common/fixtures');
<ide>
<ide><path>test/parallel/test-policy-dependency-conditions.js
<ide> const common = require('../common');
<ide>
<ide> if (!common.hasCrypto) common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const Manifest = require('internal/policy/manifest').Manifest;
<ide>
<ide><path>test/parallel/test-policy-integrity-flag.js
<ide> const common = require('../common');
<ide> if (!common.hasCrypto)
<ide> common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const fixtures = require('../common/fixtures');
<ide>
<ide><path>test/parallel/test-policy-parse-integrity.js
<ide>
<ide> const common = require('../common');
<ide> if (!common.hasCrypto) common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const tmpdir = require('../common/tmpdir');
<ide> const assert = require('assert');
<ide><path>test/parallel/test-policy-scopes-dependencies.js
<ide> const common = require('../common');
<ide>
<ide> if (!common.hasCrypto) common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const Manifest = require('internal/policy/manifest').Manifest;
<ide> const assert = require('assert');
<ide><path>test/parallel/test-policy-scopes-integrity.js
<ide> const common = require('../common');
<ide>
<ide> if (!common.hasCrypto) common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const Manifest = require('internal/policy/manifest').Manifest;
<ide> const assert = require('assert');
<ide><path>test/parallel/test-policy-scopes.js
<ide> const common = require('../common');
<ide> if (!common.hasCrypto)
<ide> common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const fixtures = require('../common/fixtures');
<ide>
<ide><path>test/pummel/test-policy-integrity.js
<ide>
<ide> const common = require('../common');
<ide> if (!common.hasCrypto) common.skip('missing crypto');
<add>common.requireNoPackageJSONAbove();
<ide>
<ide> const { debuglog } = require('util');
<ide> const debug = debuglog('test');
| 10
|
Ruby
|
Ruby
|
remove all references to `where_values` in tests
|
17b1b5d77342db8fe3aa064d848d46052cb4695c
|
<ide><path>activerecord/test/cases/associations/association_scope_test.rb
<ide> class AssociationScopeTest < ActiveRecord::TestCase
<ide> test 'does not duplicate conditions' do
<ide> scope = AssociationScope.scope(Author.new.association(:welcome_posts),
<ide> Author.connection)
<del> wheres = scope.where_values.map(&:right)
<del> binds = scope.bind_values.map(&:last)
<del> wheres = scope.where_values.map(&:right).reject { |node|
<add> wheres = scope.where_clause.predicates.map(&:right)
<add> binds = scope.where_clause.binds.map(&:last)
<add> wheres.reject! { |node|
<ide> Arel::Nodes::BindParam === node
<ide> }
<ide> assert_equal wheres.uniq, wheres
<ide><path>activerecord/test/cases/associations/has_many_associations_test.rb
<ide> def test_association_protect_foreign_key
<ide> # would be convenient), because this would cause that scope to be applied to any callbacks etc.
<ide> def test_build_and_create_should_not_happen_within_scope
<ide> car = cars(:honda)
<del> scoped_count = car.foo_bulbs.where_values.count
<add> scoped_count = car.foo_bulbs.where_clause.predicates.count
<ide>
<ide> bulb = car.foo_bulbs.build
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide>
<ide> bulb = car.foo_bulbs.create
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide>
<ide> bulb = car.foo_bulbs.create!
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide> end
<ide>
<ide> def test_no_sql_should_be_fired_if_association_already_loaded
<ide><path>activerecord/test/cases/associations/has_one_associations_test.rb
<ide> def test_building_the_associated_object_with_an_unrelated_type
<ide>
<ide> def test_build_and_create_should_not_happen_within_scope
<ide> pirate = pirates(:blackbeard)
<del> scoped_count = pirate.association(:foo_bulb).scope.where_values.count
<add> scoped_count = pirate.association(:foo_bulb).scope.where_clause.predicates.count
<ide>
<ide> bulb = pirate.build_foo_bulb
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide>
<ide> bulb = pirate.create_foo_bulb
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide>
<ide> bulb = pirate.create_foo_bulb!
<del> assert_not_equal scoped_count, bulb.scope_after_initialize.where_values.count
<add> assert_not_equal scoped_count, bulb.scope_after_initialize.where_clause.predicates.count
<ide> end
<ide>
<ide> def test_create_association
<ide><path>activerecord/test/cases/associations_test.rb
<ide> def test_proxy_association_accessor
<ide> end
<ide>
<ide> def test_scoped_allows_conditions
<del> assert developers(:david).projects.merge!(where: 'foo').where_values.include?('foo')
<add> assert developers(:david).projects.merge!(where: 'foo').where_clause.predicates.include?('foo')
<ide> end
<ide>
<ide> test "getting a scope from an association" do
<ide><path>activerecord/test/cases/relation/mutation_test.rb
<ide> def relation
<ide> end
<ide>
<ide> test 'test_merge!' do
<del> assert relation.merge!(where: :foo).equal?(relation)
<del> assert_equal [:foo], relation.where_values
<add> assert relation.merge!(select: :foo).equal?(relation)
<add> assert_equal [:foo], relation.select_values
<ide> end
<ide>
<ide> test 'merge with a proc' do
<del> assert_equal [:foo], relation.merge(-> { where(:foo) }).where_values
<add> assert_equal [:foo], relation.merge(-> { select(:foo) }).select_values
<ide> end
<ide>
<ide> test 'none!' do
<ide><path>activerecord/test/cases/relation/where_chain_test.rb
<ide> def setup
<ide> def test_not_eq
<ide> relation = Post.where.not(title: 'hello')
<ide>
<del> assert_equal 1, relation.where_values.length
<add> assert_equal 1, relation.where_clause.predicates.length
<ide>
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide>
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::NotEqual
<ide> assert_equal 'hello', bind.last
<ide> def test_not_eq
<ide> def test_not_null
<ide> expected = Post.arel_table[@name].not_eq(nil)
<ide> relation = Post.where.not(title: nil)
<del> assert_equal([expected], relation.where_values)
<add> assert_equal([expected], relation.where_clause.predicates)
<ide> end
<ide>
<ide> def test_not_with_nil
<ide> def test_not_with_nil
<ide> def test_not_in
<ide> expected = Post.arel_table[@name].not_in(%w[hello goodbye])
<ide> relation = Post.where.not(title: %w[hello goodbye])
<del> assert_equal([expected], relation.where_values)
<add> assert_equal([expected], relation.where_clause.predicates)
<ide> end
<ide>
<ide> def test_association_not_eq
<ide> expected = Comment.arel_table[@name].not_eq(Arel::Nodes::BindParam.new)
<ide> relation = Post.joins(:comments).where.not(comments: {title: 'hello'})
<del> assert_equal(expected.to_sql, relation.where_values.first.to_sql)
<add> assert_equal(expected.to_sql, relation.where_clause.predicates.first.to_sql)
<ide> end
<ide>
<ide> def test_not_eq_with_preceding_where
<ide> relation = Post.where(title: 'hello').where.not(title: 'world')
<ide>
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::Equality
<ide> assert_equal 'hello', bind.last
<ide>
<del> value = relation.where_values.last
<del> bind = relation.bind_values.last
<add> value = relation.where_clause.predicates.last
<add> bind = relation.where_clause.binds.last
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::NotEqual
<ide> assert_equal 'world', bind.last
<ide> end
<ide>
<ide> def test_not_eq_with_succeeding_where
<ide> relation = Post.where.not(title: 'hello').where(title: 'world')
<ide>
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::NotEqual
<ide> assert_equal 'hello', bind.last
<ide>
<del> value = relation.where_values.last
<del> bind = relation.bind_values.last
<add> value = relation.where_clause.predicates.last
<add> bind = relation.where_clause.binds.last
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::Equality
<ide> assert_equal 'world', bind.last
<ide> end
<ide>
<ide> def test_not_eq_with_string_parameter
<ide> expected = Arel::Nodes::Not.new("title = 'hello'")
<ide> relation = Post.where.not("title = 'hello'")
<del> assert_equal([expected], relation.where_values)
<add> assert_equal([expected], relation.where_clause.predicates)
<ide> end
<ide>
<ide> def test_not_eq_with_array_parameter
<ide> expected = Arel::Nodes::Not.new("title = 'hello'")
<ide> relation = Post.where.not(['title = ?', 'hello'])
<del> assert_equal([expected], relation.where_values)
<add> assert_equal([expected], relation.where_clause.predicates)
<ide> end
<ide>
<ide> def test_chaining_multiple
<ide> relation = Post.where.not(author_id: [1, 2]).where.not(title: 'ruby on rails')
<ide>
<ide> expected = Post.arel_table['author_id'].not_in([1, 2])
<del> assert_equal(expected, relation.where_values[0])
<add> assert_equal(expected, relation.where_clause.predicates[0])
<ide>
<del> value = relation.where_values[1]
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates[1]
<add> bind = relation.where_clause.binds.first
<ide>
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::NotEqual
<ide> assert_equal 'ruby on rails', bind.last
<ide> def test_chaining_multiple
<ide> def test_rewhere_with_one_condition
<ide> relation = Post.where(title: 'hello').where(title: 'world').rewhere(title: 'alone')
<ide>
<del> assert_equal 1, relation.where_values.size
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> assert_equal 1, relation.where_clause.predicates.size
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide> assert_bound_ast value, Post.arel_table[@name], Arel::Nodes::Equality
<ide> assert_equal 'alone', bind.last
<ide> end
<ide>
<ide> def test_rewhere_with_multiple_overwriting_conditions
<ide> relation = Post.where(title: 'hello').where(body: 'world').rewhere(title: 'alone', body: 'again')
<ide>
<del> assert_equal 2, relation.where_values.size
<add> assert_equal 2, relation.where_clause.predicates.size
<ide>
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide> assert_bound_ast value, Post.arel_table['title'], Arel::Nodes::Equality
<ide> assert_equal 'alone', bind.last
<ide>
<del> value = relation.where_values[1]
<del> bind = relation.bind_values[1]
<add> value = relation.where_clause.predicates[1]
<add> bind = relation.where_clause.binds[1]
<ide> assert_bound_ast value, Post.arel_table['body'], Arel::Nodes::Equality
<ide> assert_equal 'again', bind.last
<ide> end
<ide> def assert_bound_ast value, table, type
<ide> def test_rewhere_with_one_overwriting_condition_and_one_unrelated
<ide> relation = Post.where(title: 'hello').where(body: 'world').rewhere(title: 'alone')
<ide>
<del> assert_equal 2, relation.where_values.size
<add> assert_equal 2, relation.where_clause.predicates.size
<ide>
<del> value = relation.where_values.first
<del> bind = relation.bind_values.first
<add> value = relation.where_clause.predicates.first
<add> bind = relation.where_clause.binds.first
<ide>
<ide> assert_bound_ast value, Post.arel_table['body'], Arel::Nodes::Equality
<ide> assert_equal 'world', bind.last
<ide>
<del> value = relation.where_values.second
<del> bind = relation.bind_values.second
<add> value = relation.where_clause.predicates.second
<add> bind = relation.where_clause.binds.second
<ide>
<ide> assert_bound_ast value, Post.arel_table['title'], Arel::Nodes::Equality
<ide> assert_equal 'alone', bind.last
<ide> def test_rewhere_with_one_overwriting_condition_and_one_unrelated
<ide> def test_rewhere_with_range
<ide> relation = Post.where(comments_count: 1..3).rewhere(comments_count: 3..5)
<ide>
<del> assert_equal 1, relation.where_values.size
<add> assert_equal 1, relation.where_clause.predicates.size
<ide> assert_equal Post.where(comments_count: 3..5), relation
<ide> end
<ide>
<ide> def test_rewhere_with_infinite_upper_bound_range
<ide> relation = Post.where(comments_count: 1..Float::INFINITY).rewhere(comments_count: 3..5)
<ide>
<del> assert_equal 1, relation.where_values.size
<add> assert_equal 1, relation.where_clause.predicates.size
<ide> assert_equal Post.where(comments_count: 3..5), relation
<ide> end
<ide>
<ide> def test_rewhere_with_infinite_lower_bound_range
<ide> relation = Post.where(comments_count: -Float::INFINITY..1).rewhere(comments_count: 3..5)
<ide>
<del> assert_equal 1, relation.where_values.size
<add> assert_equal 1, relation.where_clause.predicates.size
<ide> assert_equal Post.where(comments_count: 3..5), relation
<ide> end
<ide>
<ide> def test_rewhere_with_infinite_range
<ide> relation = Post.where(comments_count: -Float::INFINITY..Float::INFINITY).rewhere(comments_count: 3..5)
<ide>
<del> assert_equal 1, relation.where_values.size
<add> assert_equal 1, relation.where_clause.predicates.size
<ide> assert_equal Post.where(comments_count: 3..5), relation
<ide> end
<ide> end
<ide><path>activerecord/test/cases/relation_test.rb
<ide> def test_references_values_dont_duplicate
<ide> relation = Relation.new(FakeKlass, :b, nil)
<ide> relation = relation.merge where: :lol, readonly: true
<ide>
<del> assert_equal [:lol], relation.where_values
<add> assert_equal [:lol], relation.where_clause.predicates
<ide> assert_equal true, relation.readonly_value
<ide> end
<ide>
<ide> test 'merging an empty hash into a relation' do
<del> assert_equal [], Relation.new(FakeKlass, :b, nil).merge({}).where_values
<add> assert_equal Relation::WhereClause.empty, Relation.new(FakeKlass, :b, nil).merge({}).where_clause
<ide> end
<ide>
<ide> test 'merging a hash with unknown keys raises' do
<ide> def test_references_values_dont_duplicate
<ide> values = relation.values
<ide>
<ide> values[:where] = nil
<del> assert_not_nil relation.where_values
<add> assert_not_nil relation.where_clause
<ide> end
<ide>
<ide> test 'relations can be created with a values hash' do
<ide> def self.sanitize_sql(args)
<ide>
<ide> relation = Relation.new(klass, :b, nil)
<ide> relation.merge!(where: ['foo = ?', 'bar'])
<del> assert_equal ['foo = bar'], relation.where_values
<add> assert_equal ['foo = bar'], relation.where_clause.predicates
<ide> end
<ide>
<ide> def test_merging_readonly_false
<ide><path>activerecord/test/cases/scoping/default_scoping_test.rb
<ide> def test_unscope_errors_with_non_symbol_or_hash_arguments
<ide>
<ide> def test_unscope_merging
<ide> merged = Developer.where(name: "Jamis").merge(Developer.unscope(:where))
<del> assert merged.where_values.empty?
<del> assert !merged.where(name: "Jon").where_values.empty?
<add> assert merged.where_clause.empty?
<add> assert !merged.where(name: "Jon").where_clause.empty?
<ide> end
<ide>
<ide> def test_order_in_default_scope_should_not_prevail
<ide> def test_default_scope_is_threadsafe
<ide>
<ide> test "additional conditions are ANDed with the default scope" do
<ide> scope = DeveloperCalledJamis.where(name: "David")
<del> assert_equal 2, scope.where_values.length
<add> assert_equal 2, scope.where_clause.predicates.length
<ide> assert_equal [], scope.to_a
<ide> end
<ide>
<ide> test "additional conditions in a scope are ANDed with the default scope" do
<ide> scope = DeveloperCalledJamis.david
<del> assert_equal 2, scope.where_values.length
<add> assert_equal 2, scope.where_clause.predicates.length
<ide> assert_equal [], scope.to_a
<ide> end
<ide>
<ide> test "a scope can remove the condition from the default scope" do
<ide> scope = DeveloperCalledJamis.david2
<del> assert_equal 1, scope.where_values.length
<add> assert_equal 1, scope.where_clause.predicates.length
<ide> assert_equal Developer.where(name: "David").map(&:id), scope.map(&:id)
<ide> end
<ide> end
<ide><path>activerecord/test/cases/scoping/named_scoping_test.rb
<ide> def test_size_should_use_length_when_results_are_loaded
<ide> end
<ide>
<ide> def test_should_not_duplicates_where_values
<del> where_values = Topic.where("1=1").scope_with_lambda.where_values
<add> where_values = Topic.where("1=1").scope_with_lambda.where_clause.predicates
<ide> assert_equal ["1=1"], where_values
<ide> end
<ide>
<ide><path>activerecord/test/cases/scoping/relation_scoping_test.rb
<ide> def test_ensure_that_method_scoping_is_correctly_restored
<ide> rescue
<ide> end
<ide>
<del> assert !Developer.all.where_values.include?("name = 'Jamis'")
<add> assert !Developer.all.where_clause.predicates.include?("name = 'Jamis'")
<ide> end
<ide>
<ide> def test_default_scope_filters_on_joins
| 10
|
Ruby
|
Ruby
|
use brewed curl for homepage check when needed
|
bbfa52fcaa582a182a967651c934e6e5e97f6a22
|
<ide><path>Library/Homebrew/formula_auditor.rb
<ide> def audit_homepage
<ide>
<ide> return unless DevelopmentTools.curl_handles_most_https_certificates?
<ide>
<add> use_homebrew_curl = false
<add> %w[Stable HEAD].each do |name|
<add> spec_name = name.downcase.to_sym
<add> next unless (spec = formula.send(spec_name))
<add>
<add> use_homebrew_curl = spec.using == :homebrew_curl
<add> break if use_homebrew_curl
<add> end
<add>
<ide> if (http_content_problem = curl_check_http_content(homepage,
<ide> "homepage URL",
<del> user_agents: [:browser, :default],
<del> check_content: true,
<del> strict: @strict))
<add> user_agents: [:browser, :default],
<add> check_content: true,
<add> strict: @strict,
<add> use_homebrew_curl: use_homebrew_curl))
<ide> problem http_content_problem
<ide> end
<ide> end
| 1
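The Homebrew patch above determines whether either the stable or HEAD spec opted into `:homebrew_curl` before running the homepage check. A minimal sketch of that early-exit loop, using hypothetical stand-in classes rather than Homebrew's real `Formula`/`SoftwareSpec` objects:

```ruby
# Hypothetical stand-ins: real Homebrew specs expose a much richer API.
Spec = Struct.new(:using)

class FakeFormula
  attr_reader :stable, :head

  def initialize(stable: nil, head: nil)
    @stable = stable
    @head = head
  end
end

def use_homebrew_curl?(formula)
  use_homebrew_curl = false
  %w[Stable HEAD].each do |name|
    spec_name = name.downcase.to_sym
    next unless (spec = formula.public_send(spec_name))

    use_homebrew_curl = spec.using == :homebrew_curl
    break if use_homebrew_curl # stop at the first spec that opts in
  end
  use_homebrew_curl
end
```

The loop mirrors the patch: a missing spec is skipped with `next`, and the first spec that requests brewed curl short-circuits the scan.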
|
Javascript
|
Javascript
|
add app to showcase with source link
|
464273374306bcbe81400e782d1fe1c5c7129a07
|
<ide><path>website/src/react-native/showcase.js
<ide> var apps = [
<ide> icon: 'https://lh3.googleusercontent.com/5N0WYat5WuFbhi5yR2ccdbqmiZ0wbTtKRG9GhT3YK7Z-qRvmykZyAgk0HNElOxD2JOPr=w300-rw',
<ide> link: 'https://play.google.com/store/apps/details?id=com.rhyble.nalathekerala',
<ide> author: 'Rhyble',
<add> },
<add> {
<add> name: 'No Fluff: Hiragana',
<add> icon: 'https://lh3.googleusercontent.com/kStXwjpbPsu27E1nIEU1gfG0I8j9t5bAR_20OMhGZvu0j2vab3EbBV7O_KNZChjflZ_O',
<add> link: 'https://play.google.com/store/apps/details?id=com.hiragana',
<add> author: 'Matthias Sieber',
<add> source: 'https://github.com/manonthemat/no-fluff-hiragana'
<ide> },
<ide> {
<ide> name: 'Night Light',
<ide> var AppList = React.createClass({
<ide> {app.linkAppStore && app.linkPlayStore ? this._renderLinks(app) : null}
<ide> <p>By {app.author}</p>
<ide> {this._renderBlogPosts(app)}
<add> {this._renderSourceLink(app)}
<ide> {this._renderVideos(app)}
<ide> </div>
<ide> );
<ide> var AppList = React.createClass({
<ide>
<ide> return (
<ide> <div className="showcase" key={i}>
<del> <a href={app.link} target="blank">
<add> <a href={app.link} target="_blank">
<ide> {inner}
<ide> </a>
<ide> </div>
<ide> var AppList = React.createClass({
<ide>
<ide> if (app.blogs.length === 1) {
<ide> return (
<del> <p><a href={app.blogs[0]} target="blank">Blog post</a></p>
<add> <p><a href={app.blogs[0]} target="_blank">Blog post</a></p>
<ide> );
<ide> } else if (app.blogs.length > 1) {
<ide> return (
<ide> var AppList = React.createClass({
<ide>
<ide> _renderBlogPost: function(url, i) {
<ide> return (
<del> <a href={url} target="blank">
<add> <a href={url} target="_blank">
<ide> {i + 1}
<ide> </a>
<ide> );
<ide> },
<ide>
<add> _renderSourceLink: function(app) {
<add> if (!app.source) {
<add> return;
<add> }
<add>
<add> return (
<add> <p><a href={app.source} target="_blank">Source</a></p>
<add> );
<add> },
<add>
<ide> _renderVideos: function(app) {
<ide> if (!app.videos) {
<ide> return;
<ide> }
<ide>
<ide> if (app.videos.length === 1) {
<ide> return (
<del> <p><a href={app.videos[0]} target="blank">Video</a></p>
<add> <p><a href={app.videos[0]} target="_blank">Video</a></p>
<ide> );
<ide> } else if (app.videos.length > 1) {
<ide> return (
<ide> var AppList = React.createClass({
<ide>
<ide> _renderVideo: function(url, i) {
<ide> return (
<del> <a href={url} target="blank">
<add> <a href={url} target="_blank">
<ide> {i + 1}
<ide> </a>
<ide> );
<ide> var AppList = React.createClass({
<ide> _renderLinks: function(app) {
<ide> return (
<ide> <p>
<del> <a href={app.linkAppStore} target="blank">iOS</a>
<del> {" - "}
<del> <a href={app.linkPlayStore} target="blank">Android</a>
<add> <a href={app.linkAppStore} target="_blank">iOS</a> -
<add> <a href={app.linkPlayStore} target="_blank">Android</a>
<ide> </p>
<ide> );
<ide> },
| 1
|
Ruby
|
Ruby
|
reduce hash allocations
|
5c07e1a3f465fb157ee5bf72eaeb8ad44b52c856
|
<ide><path>actionpack/lib/action_dispatch/journey/router.rb
<ide> def serve(req)
<ide> req.path_info = "/" + req.path_info unless req.path_info.start_with? "/"
<ide> end
<ide>
<del> parameters = route.defaults.merge parameters.each_value { |val|
<del> val.force_encoding(::Encoding::UTF_8)
<add> tmp_params = set_params.merge route.defaults
<add> parameters.each_pair { |key, val|
<add> tmp_params[key] = val.force_encoding(::Encoding::UTF_8)
<ide> }
<ide>
<del> req.path_parameters = set_params.merge parameters
<add> req.path_parameters = tmp_params
<ide>
<ide> status, headers, body = route.app.serve(req)
<ide>
| 1
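The "reduce hash allocations" patch above replaces two chained `merge` calls (each of which allocates a fresh hash) with a single merged hash that is then mutated in place. A sketch of the pattern, with hypothetical stand-in values for the router's `set_params`, `route.defaults`, and matched `parameters`:

```ruby
# Hypothetical stand-ins for the router internals in the patch above.
set_params = { controller: "pages" }
defaults   = { action: "index" }
parameters = { "id" => "10" }

# One allocation: merge defaults into the base params, then mutate
# that hash while re-encoding each matched value, instead of building
# a second intermediate hash with another merge.
tmp_params = set_params.merge(defaults)
parameters.each_pair do |key, val|
  tmp_params[key] = val.dup.force_encoding(Encoding::UTF_8)
end
```

(`dup` is used here only so the sketch works under frozen string literals; the original mutates the matched values directly.)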
|
Text
|
Text
|
fix font-optimization.md syntax errors
|
6edeb9d43ee93c6b37265c025be39dcf0643c2cf
|
<ide><path>docs/basic-features/font-optimization.md
<ide> Import the font you would like to use from `@next/font/google` as a function. We
<ide>
<ide> To use the font in all your pages, add it to [`_app.js` file](https://nextjs.org/docs/advanced-features/custom-app) under `/pages` as shown below:
<ide>
<del>```js:pages/_app.js
<del>import { Inter } from '@next/font/google';
<add>```js
<add>// pages/_app.js
<add>import { Inter } from '@next/font/google'
<ide>
<ide> // If loading a variable font, you don't need to specify the font weight
<ide> const inter = Inter()
<ide> export default function MyApp({ Component, pageProps }) {
<ide>
<ide> If you can't use a variable font, you will **need to specify a weight**:
<ide>
<del>```js:pages/_app.js
<del>import { Roboto } from '@next/font/google';
<add>```js
<add>// pages/_app.js
<add>import { Roboto } from '@next/font/google'
<ide>
<ide> const roboto = Roboto({
<ide> weight: '400',
<ide> export default function MyApp({ Component, pageProps }) {
<ide>
<ide> You can also use the font without a wrapper and `className` by injecting it inside the `<head>` as follows:
<ide>
<del>```js:pages/_app.js
<del>import { Inter } from '@next/font/google';
<add>```js
<add>// pages/_app.js
<add>import { Inter } from '@next/font/google'
<ide>
<del>const inter = Inter();
<add>const inter = Inter()
<ide>
<ide> export default function MyApp({ Component, pageProps }) {
<ide> return (
<ide> This can be done in 2 ways:
<ide>
<ide> - On a font per font basis by adding it to the function call
<ide>
<del> ```js:pages/_app.js
<del> const inter = Inter({ subsets: ["latin"] });
<add> ```js
<add> // pages/_app.js
<add> const inter = Inter({ subsets: ['latin'] })
<ide> ```
<ide>
<ide> - Globally for all your fonts in your `next.config.js`
<ide> View the [Font API Reference](/docs/api-reference/next/font.md#nextfontgoogle) f
<ide>
<ide> Import `@next/font/local` and specify the `src` of your local font file. We recommend using [**variable fonts**](https://fonts.google.com/variablefonts) for the best performance and flexibility.
<ide>
<del>```js:pages/_app.js
<del>import localFont from '@next/font/local';
<add>```js
<add>// pages/_app.js
<add>import localFont from '@next/font/local'
<ide>
<ide> // Font files can be colocated inside of `pages`
<del>const myFont = localFont({ src: './my-font.woff2' });
<add>const myFont = localFont({ src: './my-font.woff2' })
<ide>
<ide> export default function MyApp({ Component, pageProps }) {
<ide> return (
| 1
|
Text
|
Text
|
add user definition
|
3a6a90713fad9000712304be4810b4b031036fbd
|
<ide><path>guide/english/linux/common-terms-every-linux-user-should-know/index.md
<ide> title: common terms every Linux user should know.
<ide>
<ide> * <strong>Tux:</strong> it is the official mascot of Linux. That is the penguin that is usually associated with Linux – if you’ve seen the yellow and black penguin online, then you have seen tux.
<ide>
<del>* <strong>Root:</strong> also known as the super-user, is the "default" username for the administrator of a linux machine. It is usually represented on the linux terminal with the <strong>"#"</strong> symbol.
<add>* <strong>User:</strong> since Linux is a multi-user system, it's common to have several users, each with their own account. Some services running in the background are also treated as users. Information about the system's users is located in the "/etc/passwd" file.
<ide>
<ide> * <strong>Commands:</strong> are text inputs or instruction given to the linux machine (by typing it in the terminal) to tell it what to do (that is, for a required outcome).
<ide>
<del>
<ide> * <strong>Repository:</strong> a repository (or “repo” for short) is a collection of software packages for a distro usually hosted online. Software programs can be installed from both the default repositories provided by the distro and third-party ones when they’re added to the package manager.
<ide>
<ide> * <strong>Package Manager:</strong> is a software program that enables you to search, install, update, and remove apps and other application management functions. Every distro has graphic from end package managers (like the Ubuntu Software Centre) and command line package management tools like the “apt-get”.
| 1
|
PHP
|
PHP
|
add missing import
|
616caeace33a082becfc318b0d8d37f8e6d0cbdf
|
<ide><path>tests/TestCase/Console/ConsoleInputTest.php
<ide> namespace Cake\Test\TestCase\Console;
<ide>
<ide> use Cake\Console\ConsoleInput;
<add>use Cake\Console\Exception\ConsoleException;
<ide> use Cake\TestSuite\TestCase;
<ide>
<ide> /**
| 1
|