content_type: string (8 classes)
main_lang: string (7 classes)
message: string (length 1 to 50)
sha: string (length 40)
patch: string (length 52 to 962k)
file_count: int64 (1 to 300)
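The `patch` field above stores diffs with `<ide>` (unchanged), `<add>` (added), and `<del>` (removed) line markers in place of the usual ` `/`+`/`-` prefixes. As a rough sketch of how such a field might be split back into individual diff lines (the `split_patch` helper below is hypothetical, not part of the dataset):

```python
import re

# Marker regex: matches <ide>, <add>, or <del>. <ide><path>... lines also
# start with <ide>, so they are kept whole with their path suffix.
MARKER = re.compile(r"<(ide|add|del)>")

def split_patch(patch: str) -> list[str]:
    """Split a single-string patch into one '<marker> text' entry per diff line."""
    lines = []
    matches = list(MARKER.finditer(patch))
    for i, m in enumerate(matches):
        # Each diff line runs from this marker to the start of the next one.
        end = matches[i + 1].start() if i + 1 < len(matches) else len(patch)
        lines.append(patch[m.start():end].rstrip())
    return lines
```

Note that a bare `<ide>` immediately followed by another marker decodes to an empty context line, which the `rstrip()` preserves as just the marker.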
content_type: Text
main_lang: Text
message: add annthurium updates
sha: cbda39eb58deac68973142345544bb713b16802f
patch:
<ide><path>docs/focus/2018-05-07.md
<ide> - Prevent FilePath pane from popping up all the time due to an overeager mouseup handler [atom/github#1435](https://github.com/atom/github/pull/1435)
<ide> - Clear the branch name after a successful checkout [atom/github#1438](https://github.com/atom/github/pull/1438)
<ide> - Improve readability of console git diagnostic messages [atom/github#1439](https://github.com/atom/github/pull/1439)
<add> - Finalize q2 roadmap
<ide> - Teletype
<ide> - Shipped [Teletype 0.13.2](https://github.com/atom/teletype/releases/tag/v0.13.2) to fix an issue that would sometimes occur when closing the WebRTC connection ([atom/teletype#368](https://github.com/atom/teletype/issues/368))
<ide> - Reactor Duty
file_count: 1
content_type: Javascript
main_lang: Javascript
message: add license headers to new files
sha: 83101b878ed55b9c43ffd49311654e02bd738c9d
patch:
<ide><path>src/core/__tests__/ReactIdentity-test.js
<ide> /**
<add> * Copyright 2013 Facebook, Inc.
<add> *
<add> * Licensed under the Apache License, Version 2.0 (the "License");
<add> * you may not use this file except in compliance with the License.
<add> * You may obtain a copy of the License at
<add> *
<add> * http://www.apache.org/licenses/LICENSE-2.0
<add> *
<add> * Unless required by applicable law or agreed to in writing, software
<add> * distributed under the License is distributed on an "AS IS" BASIS,
<add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add> * See the License for the specific language governing permissions and
<add> * limitations under the License.
<add> *
<ide> * @jsx React.DOM
<ide> * @emails react-core
<ide> */
<ide><path>src/core/__tests__/ReactMount-test.js
<ide> /**
<add> * Copyright 2013 Facebook, Inc.
<add> *
<add> * Licensed under the Apache License, Version 2.0 (the "License");
<add> * you may not use this file except in compliance with the License.
<add> * You may obtain a copy of the License at
<add> *
<add> * http://www.apache.org/licenses/LICENSE-2.0
<add> *
<add> * Unless required by applicable law or agreed to in writing, software
<add> * distributed under the License is distributed on an "AS IS" BASIS,
<add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add> * See the License for the specific language governing permissions and
<add> * limitations under the License.
<add> *
<ide> * @jsx React.DOM
<ide> * @emails react-core
<ide> */
<ide><path>src/utils/__tests__/mapChildren-test.js
<ide> /**
<add> * Copyright 2013 Facebook, Inc.
<add> *
<add> * Licensed under the Apache License, Version 2.0 (the "License");
<add> * you may not use this file except in compliance with the License.
<add> * You may obtain a copy of the License at
<add> *
<add> * http://www.apache.org/licenses/LICENSE-2.0
<add> *
<add> * Unless required by applicable law or agreed to in writing, software
<add> * distributed under the License is distributed on an "AS IS" BASIS,
<add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add> * See the License for the specific language governing permissions and
<add> * limitations under the License.
<add> *
<ide> * @emails react-core
<ide> * @jsx React.DOM
<ide> */
<ide><path>src/utils/mapChildren.js
<ide> /**
<add> * Copyright 2013 Facebook, Inc.
<add> *
<add> * Licensed under the Apache License, Version 2.0 (the "License");
<add> * you may not use this file except in compliance with the License.
<add> * You may obtain a copy of the License at
<add> *
<add> * http://www.apache.org/licenses/LICENSE-2.0
<add> *
<add> * Unless required by applicable law or agreed to in writing, software
<add> * distributed under the License is distributed on an "AS IS" BASIS,
<add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
<add> * See the License for the specific language governing permissions and
<add> * limitations under the License.
<add> *
<ide> * @providesModule mapChildren
<ide> */
<ide>
file_count: 4
content_type: Javascript
main_lang: Javascript
message: fix selectnode in fabric
sha: 6163029d4af7528b441a6db153bb6449f16951cf
patch:
<ide><path>Libraries/Inspector/DevtoolsOverlay.js
<ide> export default function DevtoolsOverlay({
<ide> locationX,
<ide> locationY,
<ide> viewData => {
<del> const {touchedViewTag} = viewData;
<del> if (touchedViewTag != null) {
<add> const {touchedViewTag, closestInstance} = viewData;
<add> if (closestInstance != null) {
<add> // Fabric
<add> agent.selectNode(closestInstance);
<add> return true;
<add> } else if (touchedViewTag != null) {
<ide> agent.selectNode(findNodeHandle(touchedViewTag));
<ide> return true;
<ide> }
<ide><path>Libraries/Inspector/Inspector.js
<ide> class Inspector extends React.Component<
<ide> frame,
<ide> pointerY,
<ide> touchedViewTag,
<add> closestInstance,
<ide> } = viewData;
<ide>
<ide> // Sync the touched view with React DevTools.
<ide> // Note: This is Paper only. To support Fabric,
<ide> // DevTools needs to be updated to not rely on view tags.
<del> if (this.state.devtoolsAgent && touchedViewTag) {
<del> this.state.devtoolsAgent.selectNode(findNodeHandle(touchedViewTag));
<add> if (this.state.devtoolsAgent) {
<add> if (closestInstance != null) {
<add> // Fabric
<add> this.state.devtoolsAgent.selectNode(closestInstance);
<add> } else if (touchedViewTag != null) {
<add> this.state.devtoolsAgent.selectNode(findNodeHandle(touchedViewTag));
<add> }
<ide> }
<ide>
<ide> this.setState({
file_count: 2
content_type: Ruby
main_lang: Ruby
message: remove warning in namespaced generator test
sha: 573448f0114a9fe1c80cfb6b84ced842fa4ea63a
patch:
<ide><path>railties/test/generators/namespaced_generators_test.rb
<ide> def test_namespaced_controller_dont_indent_blank_lines
<ide> run_generator
<ide> assert_file "app/controllers/test_app/account_controller.rb" do |content|
<ide> content.split("\n").each do |line|
<del> assert_no_match /^\s+$/, line, "Don't indent blank lines"
<add> assert_no_match(/^\s+$/, line, "Don't indent blank lines")
<ide> end
<ide> end
<ide> end
file_count: 1
content_type: Python
main_lang: Python
message: add better debug logging to k8sexec and k8spodop
sha: eee4e30f2caf02e16088ff5d1af1ea380a73e982
patch:
<ide><path>airflow/executors/base_executor.py
<ide> def queue_task_instance(
<ide> pool=pool,
<ide> pickle_id=pickle_id,
<ide> cfg_path=cfg_path)
<add> self.log.debug("created command %s", command_list_to_run)
<ide> self.queue_command(
<ide> task_instance,
<ide> command_list_to_run,
<ide><path>airflow/executors/kubernetes_executor.py
<ide> def _make_kube_watcher(self) -> KubernetesJobWatcher:
<ide>
<ide> def _health_check_kube_watcher(self):
<ide> if self.kube_watcher.is_alive():
<del> pass
<add> self.log.debug("KubeJobWatcher alive, continuing")
<ide> else:
<ide> self.log.error(
<ide> 'Error while health checking kube watcher process. '
<ide> def run_next(self, next_job: KubernetesJobType) -> None:
<ide> def delete_pod(self, pod_id: str, namespace: str) -> None:
<ide> """Deletes POD"""
<ide> try:
<add> self.log.debug("Deleting pod %s in namespace %s", pod_id, namespace)
<ide> self.kube_client.delete_namespaced_pod(
<ide> pod_id, namespace, body=client.V1DeleteOptions(**self.kube_config.delete_option_kwargs),
<ide> **self.kube_config.kube_client_request_args)
<ide> def sync(self) -> None:
<ide> :return:
<ide>
<ide> """
<add> self.log.debug("Syncing KubernetesExecutor")
<ide> self._health_check_kube_watcher()
<ide> while True:
<ide> try:
<ide> task = self.watcher_queue.get_nowait()
<ide> try:
<add> self.log.debug("Processing task %s", task)
<ide> self.process_watcher_task(task)
<ide> finally:
<ide> self.watcher_queue.task_done()
<ide> def process_watcher_task(self, task: KubernetesWatchType) -> None:
<ide> self.result_queue.put((key, state, pod_id, namespace, resource_version))
<ide>
<ide> def _annotations_to_key(self, annotations: Dict[str, str]) -> Optional[TaskInstanceKey]:
<add> self.log.debug("Creating task key for annotations %s", annotations)
<ide> dag_id = annotations['dag_id']
<ide> task_id = annotations['task_id']
<ide> try_number = int(annotations['try_number'])
<ide> def clear_not_launched_queued_tasks(self, session=None) -> None:
<ide> proper support
<ide> for State.LAUNCHED
<ide> """
<add> self.log.debug("Clearing tasks that have not been launched")
<ide> if not self.kube_client:
<ide> raise AirflowException(NOT_STARTED_MESSAGE)
<ide> queued_tasks = session \
<ide> def clear_not_launched_queued_tasks(self, session=None) -> None:
<ide>
<ide> for task in queued_tasks:
<ide> # pylint: disable=protected-access
<add> self.log.debug("Checking task %s", task)
<ide> dict_string = (
<ide> "dag_id={},task_id={},execution_date={},airflow-worker={}".format(
<ide> pod_generator.make_safe_label_value(task.dag_id),
<ide><path>airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py
<ide> def create_pod_request_obj(self) -> k8s.V1Pod:
<ide> will supersede all other values.
<ide>
<ide> """
<add> self.log.debug("Creating pod for K8sPodOperator task %s", self.task_id)
<ide> if self.pod_template_file:
<add> self.log.debug("Pod template file found, will parse for base pod")
<ide> pod_template = pod_generator.PodGenerator.deserialize_model_file(self.pod_template_file)
<ide> else:
<ide> pod_template = k8s.V1Pod(metadata=k8s.V1ObjectMeta(name="name"))
<ide> def create_pod_request_obj(self) -> k8s.V1Pod:
<ide> pod = PodGenerator.reconcile_pods(pod_template, pod)
<ide>
<ide> for secret in self.secrets:
<add> self.log.debug("Adding secret to task %s", self.task_id)
<ide> pod = secret.attach_to_pod(pod)
<ide> if self.do_xcom_push:
<add> self.log.debug("Adding xcom sidecar to task %s", self.task_id)
<ide> pod = PodGenerator.add_xcom_sidecar(pod)
<ide> return pod
<ide>
<ide> def create_new_pod_for_operator(self, labels, launcher) -> Tuple[State, k8s.V1Po
<ide> if not (self.full_pod_spec or self.pod_template_file):
<ide> # Add Airflow Version to the label
<ide> # And a label to identify that pod is launched by KubernetesPodOperator
<add> self.log.debug("Adding k8spodoperator labels to pod before launch for task %s", self.task_id)
<ide> self.labels.update(
<ide> {
<ide> 'airflow_version': airflow_version.replace('+', '-'),
<ide> def create_new_pod_for_operator(self, labels, launcher) -> Tuple[State, k8s.V1Po
<ide> raise
<ide> finally:
<ide> if self.is_delete_operator_pod:
<add> self.log.debug("Deleting pod for task %s", self.task_id)
<ide> launcher.delete_pod(self.pod)
<ide> return final_state, self.pod, result
<ide>
file_count: 3
content_type: Text
main_lang: Text
message: update replit links to clone from repos
sha: d23feadc1f000fdeb8d3ef994157bfeef8d233be
patch:
<ide><path>curriculum/challenges/english/07-scientific-computing-with-python/scientific-computing-with-python-projects/arithmetic-formatter.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Create a function that receives a list of strings that are arithmetic problems and returns the problems arranged vertically and side-by-side.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-arithmetic-arranger' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-arithmetic-formatter' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/07-scientific-computing-with-python/scientific-computing-with-python-projects/budget-app.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Create a "Category" class that can be used to create different budget categories.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-budget-app' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-budget-app' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/07-scientific-computing-with-python/scientific-computing-with-python-projects/polygon-area-calculator.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> In this project you will use object oriented programming to create a Rectangle class and a Square class. The Square class should be a subclass of Rectangle and inherit methods and attributes.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-shape-calculator' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-polygon-area-calculator' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/07-scientific-computing-with-python/scientific-computing-with-python-projects/probability-calculator.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Write a program to determine the approximate probability of drawing certain balls randomly from a hat.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-probability-calculator' target='_blank'>the full project description and starter code on Repl.it</a>. After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-probability-calculator' target='_blank'>the full project description and starter code on Repl.it</a>. After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide> We are still developing the interactive instructional part of the Python curriculum. For now, here are some videos on the freeCodeCamp.org YouTube channel that will teach you everything you need to know to complete this project:
<ide> <ul>
<ide><path>curriculum/challenges/english/07-scientific-computing-with-python/scientific-computing-with-python-projects/time-calculator.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Write a function named "add_time" that can add a duration to a start time and return the result.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-time-calculator' target='_blank'>the full project description and starter code on Repl.it</a>. After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-time-calculator' target='_blank'>the full project description and starter code on Repl.it</a>. After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide> We are still developing the interactive instructional part of the Python curriculum. For now, here are some videos on the freeCodeCamp.org YouTube channel that will teach you everything you need to know to complete this project:
<ide> <ul>
<ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-projects/demographic-data-analyzer.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> In this challenge you must analyze demographic data using Pandas. You are given a dataset of demographic data that was extracted from the 1994 Census database.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-demographic-data-analyzer' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-demographic-data-analyzer' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-projects/mean-variance-standard-deviation-calculator.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Create a function that uses Numpy to output the mean, variance, and standard deviation of the rows, columns, and elements in a 3 x 3 matrix.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-mean-var-std' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-mean-variance-standard-deviation-calculator' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-projects/medical-data-visualizer.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> In this project, you will visualize and make calculations from medical examination data using matplotlib, seaborn, and pandas.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-medical-data-visualizer' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-medical-data-visualizer' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-projects/page-view-time-series-visualizer.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> For this project you will visualize time series data using a line chart, bar chart, and box plots. You will use Pandas, matplotlib, and seaborn to visualize a dataset containing the number of page views each day on the freeCodeCamp.org forum from 2016-05-09 to 2019-12-03. The data visualizations will help you understand the patterns in visits and identify yearly and monthly growth.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-time-series-visualizer' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-page-view-time-series-visualizer' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/08-data-analysis-with-python/data-analysis-with-python-projects/sea-level-predictor.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> In this project, you will analyze a dataset of the global average sea level change since 1880. You will use the data to predict the sea level change through year 2050.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-sea-level-predictor' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-sea-level-predictor' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/09-information-security/information-security-projects/port-scanner.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> Create a port scanner using Python.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-port-scanner' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-port-scanner' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/09-information-security/information-security-projects/sha-1-password-cracker.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> For this project you will learn about the importance of good security by creating a password cracker to figure out passwords that were hashed using SHA-1.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-brute-force-password-cracker' target='_blank'>the full project description and starter code on Repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-SHA-1-password-cracker' target='_blank'>the full project description and starter code on Repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
<ide><path>curriculum/challenges/english/11-machine-learning-with-python/machine-learning-with-python-projects/rock-paper-scissors.md
<ide> challengeType: 10
<ide> <section id='description'>
<ide> For this challenge, you will create a program to play Rock, Paper, Scissors. A program that picks at random will usually win 50% of the time. To pass this challenge your program must play matches against four different bots, winning at least 60% of the games in each match.
<ide>
<del>You can access <a href='https://repl.it/@freeCodeCamp/fcc-rock-paper-scissors' target='_blank'>the full project description and starter code on repl.it</a>.
<add>You can access <a href='https://repl.it/github/freeCodeCamp/boilerplate-rock-paper-scissors' target='_blank'>the full project description and starter code on repl.it</a>.
<ide>
<ide> After going to that link, fork the project. Once you complete the project based on the instructions in 'README.md', submit your project link below.
<ide>
file_count: 13
content_type: Javascript
main_lang: Javascript
message: centralize props memoization
sha: 7268d97d2b2b595db24cb6210e535dd2bd421df2
patch:
<ide><path>packages/react-reconciler/src/ReactFiberBeginWork.js
<ide> function updateForwardRef(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextProps);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updatePureComponent(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextProps);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updateFragment(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextChildren);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updateMode(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextChildren);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updateProfiler(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextProps);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updateFunctionComponent(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextProps);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function finishClassComponent(
<ide> );
<ide> }
<ide>
<del> // Memoize props and state using the values we just used to render.
<add> // Memoize state using the values we just used to render.
<ide> // TODO: Restructure so we never read values from the instance.
<del> memoizeState(workInProgress, instance.state);
<del> memoizeProps(workInProgress, instance.props);
<add> workInProgress.memoizedState = instance.state;
<ide>
<ide> // The context might have changed so we need to recalculate it.
<ide> if (hasContext) {
<ide> function updateHostComponent(current, workInProgress, renderExpirationTime) {
<ide> ) {
<ide> // Schedule this fiber to re-render at offscreen priority. Then bailout.
<ide> workInProgress.expirationTime = Never;
<del> workInProgress.memoizedProps = nextProps;
<ide> return null;
<ide> }
<ide>
<ide> function updateHostComponent(current, workInProgress, renderExpirationTime) {
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextProps);
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function updateHostText(current, workInProgress) {
<ide> if (current === null) {
<ide> tryToClaimNextHydratableInstance(workInProgress);
<ide> }
<del> const nextProps = workInProgress.pendingProps;
<del> memoizeProps(workInProgress, nextProps);
<ide> // Nothing to do here. This is terminal. We'll do the completion step
<ide> // immediately after.
<ide> return null;
<ide> function mountIndeterminateComponent(
<ide> );
<ide> }
<ide> }
<del> workInProgress.memoizedProps = props;
<ide> return child;
<ide> }
<ide>
<ide> function mountIndeterminateComponent(
<ide> }
<ide> }
<ide> reconcileChildren(null, workInProgress, value, renderExpirationTime);
<del> memoizeProps(workInProgress, props);
<ide> return workInProgress.child;
<ide> }
<ide> }
<ide> function updateSuspenseComponent(
<ide> }
<ide> }
<ide>
<del> workInProgress.memoizedProps = nextProps;
<ide> workInProgress.memoizedState = nextState;
<ide> workInProgress.child = child;
<ide> return next;
<ide> function updatePortalComponent(
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextChildren);
<ide> } else {
<ide> reconcileChildren(
<ide> current,
<ide> workInProgress,
<ide> nextChildren,
<ide> renderExpirationTime,
<ide> );
<del> memoizeProps(workInProgress, nextChildren);
<ide> }
<ide> return workInProgress.child;
<ide> }
<ide> function updateContextProvider(
<ide> const oldProps = workInProgress.memoizedProps;
<ide>
<ide> const newValue = newProps.value;
<del> workInProgress.memoizedProps = newProps;
<ide>
<ide> if (__DEV__) {
<ide> const providerPropTypes = workInProgress.type.propTypes;
<ide> function updateContextConsumer(
<ide> // React DevTools reads this flag.
<ide> workInProgress.effectTag |= PerformedWork;
<ide> reconcileChildren(current, workInProgress, newChildren, renderExpirationTime);
<del> workInProgress.memoizedProps = newProps;
<ide> return workInProgress.child;
<ide> }
<ide>
<ide> function bailoutOnAlreadyFinishedWork(
<ide> }
<ide> }
<ide>
<del>// TODO: Delete memoizeProps/State and move to reconcile/bailout instead
<del>function memoizeProps(workInProgress: Fiber, nextProps: any) {
<del> workInProgress.memoizedProps = nextProps;
<del>}
<del>
<del>function memoizeState(workInProgress: Fiber, nextState: any) {
<del> workInProgress.memoizedState = nextState;
<del> // Don't reset the updateQueue, in case there are pending updates. Resetting
<del> // is handled by processUpdateQueue.
<del>}
<del>
<ide> function beginWork(
<ide> current: Fiber | null,
<ide> workInProgress: Fiber,
<ide> function beginWork(
<ide> resolveDefaultProps(Component, unresolvedProps),
<ide> renderExpirationTime,
<ide> );
<del> workInProgress.memoizedProps = unresolvedProps;
<ide> return child;
<ide> }
<ide> case ClassComponent: {
<ide> function beginWork(
<ide> resolveDefaultProps(Component, unresolvedProps),
<ide> renderExpirationTime,
<ide> );
<del> workInProgress.memoizedProps = unresolvedProps;
<ide> return child;
<ide> }
<ide> case HostRoot:
<ide> function beginWork(
<ide> resolveDefaultProps(Component, unresolvedProps),
<ide> renderExpirationTime,
<ide> );
<del> workInProgress.memoizedProps = unresolvedProps;
<ide> return child;
<ide> }
<ide> case Fragment:
<ide> function beginWork(
<ide> updateExpirationTime,
<ide> renderExpirationTime,
<ide> );
<del> workInProgress.memoizedProps = unresolvedProps;
<ide> return child;
<ide> }
<ide> default:
<ide><path>packages/react-reconciler/src/ReactFiberScheduler.js
<ide> function performUnitOfWork(workInProgress: Fiber): Fiber | null {
<ide> }
<ide>
<ide> next = beginWork(current, workInProgress, nextRenderExpirationTime);
<add> workInProgress.memoizedProps = workInProgress.pendingProps;
<ide>
<ide> if (workInProgress.mode & ProfileMode) {
<ide> // Record the render duration assuming we didn't bailout (or error).
<ide> stopProfilerTimerIfRunningAndRecordDelta(workInProgress, true);
<ide> }
<ide> } else {
<ide> next = beginWork(current, workInProgress, nextRenderExpirationTime);
<add> workInProgress.memoizedProps = workInProgress.pendingProps;
<ide> }
<ide>
<ide> if (__DEV__) {
file_count: 2
content_type: Javascript
main_lang: Javascript
message: use performance.now() when possible
sha: 629594ece5ed2229143ca1f1b62c07aa0824b35e
patch:
<ide><path>src/js/component.js
<ide> class Component {
<ide> pageY: event.touches[0].pageY
<ide> };
<ide> // Record start time so we can detect a tap vs. "touch and hold"
<del> touchStart = new Date().getTime();
<add> touchStart = window.performance.now();
<ide> // Reset couldBeTap tracking
<ide> couldBeTap = true;
<ide> }
<ide> class Component {
<ide> // Proceed only if the touchmove/leave/cancel event didn't happen
<ide> if (couldBeTap === true) {
<ide> // Measure how long the touch lasted
<del> const touchTime = new Date().getTime() - touchStart;
<add> const touchTime = window.performance.now() - touchStart;
<ide>
<ide> // Make sure the touch was less than the threshold to be considered a tap
<ide> if (touchTime < touchTimeThreshold) {
<ide><path>src/js/utils/dom-data.js
<ide> * @module dom-data
<ide> */
<ide> import * as Guid from './guid.js';
<add>import window from 'global/window';
<ide>
<ide> /**
<ide> * Element Data Store.
<ide> export const elData = {};
<ide> * @constant
<ide> * @private
<ide> */
<del>const elIdAttr = 'vdata' + (new Date()).getTime();
<add>const elIdAttr = 'vdata' + Math.floor(window.performance && window.performance.now() || Date.now());
<ide>
<ide> /**
<ide> * Returns the cache object where data for an element is stored
<ide><path>src/js/utils/fn.js
<ide> export const bind = function(context, fn, uid) {
<ide> * @return {Function}
<ide> */
<ide> export const throttle = function(fn, wait) {
<del> let last = Date.now();
<add> let last = window.performance.now();
<ide>
<ide> const throttled = function(...args) {
<del> const now = Date.now();
<add> const now = window.performance.now();
<ide>
<ide> if (now - last >= wait) {
<ide> fn(...args);
file_count: 3
content_type: Java
main_lang: Java
message: remove unused variable
sha: 36400807ab968f9e48aee2292545147e74462caf
patch:
<ide><path>ReactAndroid/src/main/java/com/facebook/react/modules/netinfo/NetInfoModule.java
<ide> public class NetInfoModule extends ReactContextBaseJavaModule
<ide> private static final String ERROR_MISSING_PERMISSION = "E_MISSING_PERMISSION";
<ide>
<ide> private final ConnectivityManager mConnectivityManager;
<del> private final ConnectivityManagerCompat mConnectivityManagerCompat;
<ide> private final ConnectivityBroadcastReceiver mConnectivityBroadcastReceiver;
<ide> private boolean mNoNetworkPermission = false;
<ide>
<ide> public NetInfoModule(ReactApplicationContext reactContext) {
<ide> super(reactContext);
<ide> mConnectivityManager =
<ide> (ConnectivityManager) reactContext.getSystemService(Context.CONNECTIVITY_SERVICE);
<del> mConnectivityManagerCompat = new ConnectivityManagerCompat();
<ide> mConnectivityBroadcastReceiver = new ConnectivityBroadcastReceiver();
<ide> }
<ide>
file_count: 1
Text
Text
remove extra comma in image pull api examples
307c39c187f8785d8edae4bfd460e2dd0432626a
<ide><path>docs/reference/api/docker_remote_api_v1.22.md <ide> Query Parameters: <ide> { <ide> "username": "jdoe", <ide> "password": "secret", <del> "email": "jdoe@acme.com", <add> "email": "jdoe@acme.com" <ide> } <ide> ``` <ide> <ide><path>docs/reference/api/docker_remote_api_v1.23.md <ide> Query Parameters: <ide> { <ide> "username": "jdoe", <ide> "password": "secret", <del> "email": "jdoe@acme.com", <add> "email": "jdoe@acme.com" <ide> } <ide> ``` <ide> <ide><path>docs/reference/api/docker_remote_api_v1.24.md <ide> a base64-encoded AuthConfig object. <ide> { <ide> "username": "jdoe", <ide> "password": "secret", <del> "email": "jdoe@acme.com", <add> "email": "jdoe@acme.com" <ide> } <ide> ``` <ide> <ide><path>docs/reference/api/docker_remote_api_v1.25.md <ide> a base64-encoded AuthConfig object. <ide> { <ide> "username": "jdoe", <ide> "password": "secret", <del> "email": "jdoe@acme.com", <add> "email": "jdoe@acme.com" <ide> } <ide> ``` <ide>
4
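The commit above removes a trailing comma from the docs' JSON examples because strict JSON forbids a comma after the last member of an object. This can be checked with Python's standard `json` module (an illustration of why the docs were wrong, not part of the commit itself):

```python
import json

# The docs' example before and after the fix: strict JSON parsers
# reject a comma following the last key/value pair of an object.
with_comma = '{"username": "jdoe", "password": "secret", "email": "jdoe@acme.com",}'
without_comma = '{"username": "jdoe", "password": "secret", "email": "jdoe@acme.com"}'

def parses(text):
    """Return True if `text` is valid JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(parses(with_comma))     # False: the trailing comma is invalid JSON
print(parses(without_comma))  # True
```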
Javascript
Javascript
fix textarea cursor positioning in ie - fixes
531dac71be52d9ec38d990b020a3b0d8f833fcc4
<ide><path>packages/ember-handlebars/lib/controls/text_area.js <ide> Ember.TextArea = Ember.View.extend(Ember.TextSupport, <ide> cols: null, <ide> <ide> _updateElementValue: Ember.observer(function() { <del> this.$().val(get(this, 'value')); <add> // We do this check so cursor position doesn't get affected in IE <add> var value = get(this, 'value'); <add> if (value !== this.$().val()) { <add> this.$().val(value); <add> } <ide> }, 'value'), <ide> <ide> init: function() {
1
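The Ember patch above guards the DOM write: in old IE, assigning a textarea's value resets the caret position even when the value is unchanged, so the observer now writes only on a real change. A language-neutral sketch of the same guard, using a hypothetical `FakeTextArea` stand-in to count writes:

```python
class FakeTextArea:
    """Hypothetical stand-in for a DOM textarea; counts value writes,
    since each write would reset the caret position in old IE."""
    def __init__(self):
        self.value = ""
        self.writes = 0

    def set_value(self, value):
        self.value = value
        self.writes += 1

def update_element_value(element, model_value):
    # Mirror of the patched observer: skip the write when the element
    # already holds the model value, leaving the cursor untouched.
    if model_value != element.value:
        element.set_value(model_value)

ta = FakeTextArea()
update_element_value(ta, "hello")  # real change: one write
update_element_value(ta, "hello")  # value already matches: no write
print(ta.writes)  # 1
```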
Ruby
Ruby
remove extra white spaces on activerecord docs
0034b7822d6132f5945b0514a5391d18e52aa4b6
<ide><path>activerecord/lib/active_record/aggregations.rb <ide> module ClassMethods <ide> # order in which mappings are defined determine the order in which attributes are sent to the <ide> # value class constructor. <ide> # * <tt>:allow_nil</tt> - Specifies that the value object will not be instantiated when all mapped <del> # attributes are +nil+. Setting the value object to +nil+ has the effect of writing +nil+ to all <add> # attributes are +nil+. Setting the value object to +nil+ has the effect of writing +nil+ to all <ide> # mapped attributes. <ide> # This defaults to +false+. <ide> # * <tt>:constructor</tt> - A symbol specifying the name of the constructor method or a Proc that <ide><path>activerecord/lib/active_record/associations/collection_association.rb <ide> def create!(attrs = {}, options = {}, &block) <ide> record <ide> end <ide> <del> # Add +records+ to this association. Returns +self+ so method calls may be chained. <add> # Add +records+ to this association. Returns +self+ so method calls may be chained. <ide> # Since << flattens its argument list and inserts each record, +push+ and +concat+ behave identically. <ide> def concat(*records) <ide> result = true <ide><path>activerecord/lib/active_record/associations/has_many_association.rb <ide> def insert_record(record, validate = true) <ide> # <ide> # If the association has a counter cache it gets that value. Otherwise <ide> # it will attempt to do a count via SQL, bounded to <tt>:limit</tt> if <del> # there's one. Some configuration options like :group make it impossible <add> # there's one. Some configuration options like :group make it impossible <ide> # to do an SQL count, in those cases the array count will be used. 
<ide> # <ide> # That does not depend on whether the collection has already been loaded <ide><path>activerecord/lib/active_record/attribute_methods/read.rb <ide> def define_read_method_for_serialized_attribute(attr_name) <ide> generated_attribute_methods.module_eval("def _#{attr_name}; #{access_code}; end; alias #{attr_name} _#{attr_name}", __FILE__, __LINE__) <ide> end <ide> <del> # Define an attribute reader method. Cope with nil column. <add> # Define an attribute reader method. Cope with nil column. <ide> # method_name is the same as attr_name except when a non-standard primary key is used, <ide> # we still define #id as an accessor for the key <ide> def define_read_method(method_name, attr_name, column) <ide><path>activerecord/lib/active_record/autosave_association.rb <ide> def define_non_cyclic_method(name, reflection, &block) <ide> # <ide> # For performance reasons, we don't check whether to validate at runtime. <ide> # However the validation and callback methods are lazy and those methods <del> # get created when they are invoked for the very first time. However, <add> # get created when they are invoked for the very first time. However, <ide> # this can change, for instance, when using nested attributes, which is <ide> # called _after_ the association has been defined. Since we don't want <ide> # the callbacks to get defined multiple times, there are guards that <ide><path>activerecord/lib/active_record/connection_adapters/abstract/connection_pool.rb <ide> def release_connection(with_id = current_connection_id) <ide> checkin conn if conn <ide> end <ide> <del> # If a connection already exists yield it to the block. If no connection <add> # If a connection already exists yield it to the block. If no connection <ide> # exists checkout a connection, yield it to the block, and checkin the <ide> # connection when finished. 
<ide> def with_connection <ide><path>activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb <ide> def execute(sql, name = nil) <ide> undef_method :execute <ide> <ide> # Executes +sql+ statement in the context of this connection using <del> # +binds+ as the bind substitutes. +name+ is logged along with <add> # +binds+ as the bind substitutes. +name+ is logged along with <ide> # the executed +sql+ statement. <ide> def exec_query(sql, name = 'SQL', binds = []) <ide> end <ide> def default_sequence_name(table, column) <ide> <ide> # Set the sequence to the max value of the table's column. <ide> def reset_sequence!(table, column, sequence = nil) <del> # Do nothing by default. Implement for PostgreSQL, Oracle, ... <add> # Do nothing by default. Implement for PostgreSQL, Oracle, ... <ide> end <ide> <ide> # Inserts the given fixture into the table. Overridden in adapters that require <ide><path>activerecord/lib/active_record/connection_adapters/abstract/schema_statements.rb <ide> module ActiveRecord <ide> module ConnectionAdapters # :nodoc: <ide> module SchemaStatements <ide> # Returns a Hash of mappings from the abstract data types to the native <del> # database types. See TableDefinition#column for details on the recognized <add> # database types. See TableDefinition#column for details on the recognized <ide> # abstract data types. <ide> def native_database_types <ide> {} <ide> def column_exists?(table_name, column_name, type = nil, options = {}) <ide> # Creates a new table with the name +table_name+. +table_name+ may either <ide> # be a String or a Symbol. <ide> # <del> # There are two ways to work with +create_table+. You can use the block <add> # There are two ways to work with +create_table+. 
You can use the block <ide> # form or the regular form, like this: <ide> # <ide> # === Block form <ide> def rename_column(table_name, column_name, new_column_name) <ide> raise NotImplementedError, "rename_column is not implemented" <ide> end <ide> <del> # Adds a new index to the table. +column_name+ can be a single Symbol, or <add> # Adds a new index to the table. +column_name+ can be a single Symbol, or <ide> # an Array of Symbols. <ide> # <ide> # The index will be named after the table and the first column name, <ide><path>activerecord/lib/active_record/connection_adapters/mysql_adapter.rb <ide> def last_inserted_id(result) <ide> <ide> def exec_without_stmt(sql, name = 'SQL') # :nodoc: <ide> # Some queries, like SHOW CREATE TABLE don't work through the prepared <del> # statement API. For those queries, we need to use this method. :'( <add> # statement API. For those queries, we need to use this method. :'( <ide> log(sql, name) do <ide> result = @connection.query(sql) <ide> cols = [] <ide> def exec_stmt(sql, name, binds) <ide> stmt.execute(*binds.map { |col, val| type_cast(val, col) }) <ide> rescue Mysql::Error => e <ide> # Older versions of MySQL leave the prepared statement in a bad <del> # place when an error occurs. To support older mysql versions, we <add> # place when an error occurs. To support older mysql versions, we <ide> # need to close the statement and delete the statement from the <ide> # cache. <ide> stmt.close <ide><path>activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb <ide> def self.extract_value_from_default(default) <ide> # * <tt>:password</tt> - Defaults to nothing. <ide> # * <tt>:database</tt> - The name of the database. No default, must be provided. <ide> # * <tt>:schema_search_path</tt> - An optional schema search path for the connection given <del> # as a string of comma-separated schema names. This is backward-compatible with the <tt>:schema_order</tt> option. <add> # as a string of comma-separated schema names. 
This is backward-compatible with the <tt>:schema_order</tt> option. <ide> # * <tt>:encoding</tt> - An optional client encoding that is used in a <tt>SET client_encoding TO <ide> # <encoding></tt> call on the connection. <ide> # * <tt>:min_messages</tt> - An optional client min messages that is used in a <ide> def recreate_database(name) #:nodoc: <ide> create_database(name) <ide> end <ide> <del> # Create a new PostgreSQL database. Options include <tt>:owner</tt>, <tt>:template</tt>, <add> # Create a new PostgreSQL database. Options include <tt>:owner</tt>, <tt>:template</tt>, <ide> # <tt>:encoding</tt>, <tt>:tablespace</tt>, and <tt>:connection_limit</tt> (note that MySQL uses <ide> # <tt>:charset</tt> while PostgreSQL uses <tt>:encoding</tt>). <ide> # <ide><path>activerecord/lib/active_record/counter_cache.rb <ide> module ActiveRecord <ide> # = Active Record Counter Cache <ide> module CounterCache <ide> # Resets one or more counter caches to their correct value using an SQL <del> # count query. This is useful when adding new counter caches, or if the <add> # count query. This is useful when adding new counter caches, or if the <ide> # counter has been corrupted or modified directly by SQL. <ide> # <ide> # ==== Parameters <ide><path>activerecord/lib/active_record/fixtures.rb <ide> class FixturesFileNotFound < StandardError; end <ide> # name: Google <ide> # url: http://www.google.com <ide> # <del># This YAML fixture file includes two fixtures. Each YAML fixture (ie. record) is given a name and is followed by an <del># indented list of key/value pairs in the "key: value" format. Records are separated by a blank line for your viewing <add># This YAML fixture file includes two fixtures. Each YAML fixture (ie. record) is given a name and is followed by an <add># indented list of key/value pairs in the "key: value" format. Records are separated by a blank line for your viewing <ide> # pleasure. <ide> # <ide> # Note that YAML fixtures are unordered. 
If you want ordered fixtures, use the omap YAML type. <ide> # See http://yaml.org/type/omap.html <del># for the specification. You will need ordered fixtures when you have foreign key constraints on keys in the same table. <del># This is commonly needed for tree structures. Example: <add># for the specification. You will need ordered fixtures when you have foreign key constraints on keys in the same table. <add># This is commonly needed for tree structures. Example: <ide> # <ide> # --- !omap <ide> # - parent: <ide> class FixturesFileNotFound < StandardError; end <ide> # <ide> # = Using fixtures in testcases <ide> # <del># Since fixtures are a testing construct, we use them in our unit and functional tests. There are two ways to use the <add># Since fixtures are a testing construct, we use them in our unit and functional tests. There are two ways to use the <ide> # fixtures, but first let's take a look at a sample unit test: <ide> # <ide> # require 'test_helper' <ide> def size <ide> fixtures.size <ide> end <ide> <del> # Return a hash of rows to be inserted. The key is the table, the value is <add> # Return a hash of rows to be inserted. The key is the table, the value is <ide> # a list of rows to insert to that table. <ide> def table_rows <ide> now = ActiveRecord::Base.default_timezone == :utc ? Time.now.utc : Time.now <ide><path>activerecord/lib/active_record/migration/command_recorder.rb <ide> module ActiveRecord <ide> class Migration <ide> # ActiveRecord::Migration::CommandRecorder records commands done during <del> # a migration and knows how to reverse those commands. The CommandRecorder <add> # a migration and knows how to reverse those commands. The CommandRecorder <ide> # knows how to invert the following commands: <ide> # <ide> # * add_column <ide> def initialize(delegate = nil) <ide> @delegate = delegate <ide> end <ide> <del> # record +command+. +command+ should be a method name and arguments. <add> # record +command+. 
+command+ should be a method name and arguments. <ide> # For example: <ide> # <ide> # recorder.record(:method_name, [:arg1, arg2]) <ide> def record(*command) <ide> end <ide> <ide> # Returns a list that represents commands that are the inverse of the <del> # commands stored in +commands+. For example: <add> # commands stored in +commands+. For example: <ide> # <ide> # recorder.record(:rename_table, [:old, :new]) <ide> # recorder.inverse # => [:rename_table, [:new, :old]] <ide><path>activerecord/lib/active_record/reflection.rb <ide> def derive_foreign_key <ide> class ThroughReflection < AssociationReflection #:nodoc: <ide> delegate :foreign_key, :foreign_type, :association_foreign_key, :active_record_primary_key, :to => :source_reflection <ide> <del> # Gets the source of the through reflection. It checks both a singularized <add> # Gets the source of the through reflection. It checks both a singularized <ide> # and pluralized form for <tt>:belongs_to</tt> or <tt>:has_many</tt>. <ide> # <ide> # class Post < ActiveRecord::Base <ide><path>activerecord/lib/active_record/relation.rb <ide> def destroy_all(conditions = nil) <ide> end <ide> <ide> # Destroy an object (or multiple objects) that has the given id, the object is instantiated first, <del> # therefore all callbacks and filters are fired off before the object is deleted. This method is <add> # therefore all callbacks and filters are fired off before the object is deleted. This method is <ide> # less efficient than ActiveRecord#delete but allows cleanup methods and other actions to be run. <ide> # <ide> # This essentially finds the object (or multiple objects) with the given id, creates a new object <ide> def destroy(id) <ide> # Deletes the records matching +conditions+ without instantiating the records first, and hence not <ide> # calling the +destroy+ method nor invoking callbacks. This is a single SQL DELETE statement that <ide> # goes straight to the database, much more efficient than +destroy_all+. 
Be careful with relations <del> # though, in particular <tt>:dependent</tt> rules defined on associations are not honored. Returns <add> # though, in particular <tt>:dependent</tt> rules defined on associations are not honored. Returns <ide> # the number of rows affected. <ide> # <ide> # ==== Parameters <ide><path>activerecord/lib/active_record/relation/calculations.rb <ide> def average(column_name, options = {}) <ide> calculate(:average, column_name, options) <ide> end <ide> <del> # Calculates the minimum value on a given column. The value is returned <add> # Calculates the minimum value on a given column. The value is returned <ide> # with the same data type of the column, or +nil+ if there's no row. See <ide> # +calculate+ for examples with options. <ide> # <ide> def sum(column_name, options = {}) <ide> calculate(:sum, column_name, options) <ide> end <ide> <del> # This calculates aggregate values in the given column. Methods for count, sum, average, <add> # This calculates aggregate values in the given column. Methods for count, sum, average, <ide> # minimum, and maximum have been added as shortcuts. Options such as <tt>:conditions</tt>, <ide> # <tt>:order</tt>, <tt>:group</tt>, <tt>:having</tt>, and <tt>:joins</tt> can be passed to customize the query. <ide> # <ide> # There are two basic forms of output: <ide> # * Single aggregate value: The single value is type cast to Fixnum for COUNT, Float <ide> # for AVG, and the given column's type for everything else. <ide> # * Grouped values: This returns an ordered hash of the values and groups them by the <del> # <tt>:group</tt> option. It takes either a column name, or the name of a belongs_to association. <add> # <tt>:group</tt> option. It takes either a column name, or the name of a belongs_to association. 
<ide> # <ide> # values = Person.maximum(:age, :group => 'last_name') <ide> # puts values["Drake"] <ide> def sum(column_name, options = {}) <ide> # Options: <ide> # * <tt>:conditions</tt> - An SQL fragment like "administrator = 1" or [ "user_name = ?", username ]. <ide> # See conditions in the intro to ActiveRecord::Base. <del> # * <tt>:include</tt>: Eager loading, see Associations for details. Since calculations don't load anything, <add> # * <tt>:include</tt>: Eager loading, see Associations for details. Since calculations don't load anything, <ide> # the purpose of this is to access fields on joined tables in your conditions, order, or group clauses. <ide> # * <tt>:joins</tt> - An SQL fragment for additional joins like "LEFT JOIN comments ON comments.post_id = id". <ide> # (Rarely needed). <ide><path>activerecord/lib/active_record/relation/finder_methods.rb <ide> module FinderMethods <ide> # <ide> # Example for find with a lock: Imagine two concurrent transactions: <ide> # each will read <tt>person.visits == 2</tt>, add 1 to it, and save, resulting <del> # in two saves of <tt>person.visits = 3</tt>. By locking the row, the second <add> # in two saves of <tt>person.visits = 3</tt>. By locking the row, the second <ide> # transaction has to wait until the first is finished; we get the <ide> # expected <tt>person.visits == 4</tt>. <ide> # <ide><path>activerecord/lib/active_record/result.rb <ide> module ActiveRecord <ide> ### <ide> # This class encapsulates a Result returned from calling +exec_query+ on any <del> # database connection adapter. For example: <add> # database connection adapter. For example: <ide> # <ide> # x = ActiveRecord::Base.connection.exec_query('SELECT * FROM foo') <ide> # x # => #<ActiveRecord::Result:0xdeadbeef> <ide><path>activerecord/lib/active_record/schema_dumper.rb <ide> def table(table, stream) <ide> spec = {} <ide> spec[:name] = column.name.inspect <ide> <del> # AR has an optimisation which handles zero-scale decimals as integers. 
This <add> # AR has an optimisation which handles zero-scale decimals as integers. This <ide> # code ensures that the dumper still dumps the column as a decimal. <ide> spec[:type] = if column.type == :integer && [/^numeric/, /^decimal/].any? { |e| e.match(column.sql_type) } <ide> 'decimal' <ide><path>activerecord/lib/active_record/serializers/xml_serializer.rb <ide> module Serialization <ide> # </firm> <ide> # <ide> # Additionally, the record being serialized will be passed to a Proc's second <del> # parameter. This allows for ad hoc additions to the resultant document that <add> # parameter. This allows for ad hoc additions to the resultant document that <ide> # incorporate the context of the record being serialized. And by leveraging the <ide> # closure created by a Proc, to_xml can be used to add elements that normally fall <ide> # outside of the scope of the model -- for example, generating and appending URLs <ide><path>activerecord/lib/active_record/session_store.rb <ide> module ActiveRecord <ide> # = Active Record Session Store <ide> # <del> # A session store backed by an Active Record class. A default class is <add> # A session store backed by an Active Record class. A default class is <ide> # provided, but any object duck-typing to an Active Record Session class <ide> # with text +session_id+ and +data+ attributes is sufficient. <ide> # <ide> module ActiveRecord <ide> # ActiveRecord::SessionStore::Session.data_column_name = 'legacy_session_data' <ide> # <ide> # Note that setting the primary key to the +session_id+ frees you from <del> # having a separate +id+ column if you don't want it. However, you must <add> # having a separate +id+ column if you don't want it. However, you must <ide> # set <tt>session.model.id = session.session_id</tt> by hand! A before filter <ide> # on ApplicationController is a good place. <ide> # <ide> module ActiveRecord <ide> # save <ide> # destroy <ide> # <del> # The example SqlBypass class is a generic SQL session store. 
You may <add> # The example SqlBypass class is a generic SQL session store. You may <ide> # use it as a basis for high-performance database-specific stores. <ide> class SessionStore < ActionDispatch::Session::AbstractStore <ide> module ClassMethods # :nodoc: <ide> class Session < ActiveRecord::Base <ide> <ide> ## <ide> # :singleton-method: <del> # Customizable data column name. Defaults to 'data'. <add> # Customizable data column name. Defaults to 'data'. <ide> cattr_accessor :data_column_name <ide> self.data_column_name = 'data' <ide> <ide> def raise_on_session_data_overflow! <ide> end <ide> <ide> # A barebones session store which duck-types with the default session <del> # store but bypasses Active Record and issues SQL directly. This is <add> # store but bypasses Active Record and issues SQL directly. This is <ide> # an example session model class meant as a basis for your own classes. <ide> # <ide> # The database connection, table name, and session id and data columns <del> # are configurable class attributes. Marshaling and unmarshaling <del> # are implemented as class methods that you may override. By default, <add> # are configurable class attributes. Marshaling and unmarshaling <add> # are implemented as class methods that you may override. By default, <ide> # marshaling data is <ide> # <ide> # ActiveSupport::Base64.encode64(Marshal.dump(data)) <ide> def raise_on_session_data_overflow! <ide> # Marshal.load(ActiveSupport::Base64.decode64(data)) <ide> # <ide> # This marshaling behavior is intended to store the widest range of <del> # binary session data in a +text+ column. For higher performance, <add> # binary session data in a +text+ column. For higher performance, <ide> # store in a +blob+ column instead and forgo the Base64 encoding. <ide> class SqlBypass <ide> extend ClassMethods <ide> def destroy <ide> end <ide> end <ide> <del> # The class used for session storage. Defaults to <add> # The class used for session storage. 
Defaults to <ide> # ActiveRecord::SessionStore::Session <ide> cattr_accessor :session_class <ide> self.session_class = Session <ide><path>activerecord/lib/active_record/test_case.rb <ide> def cleanup_identity_map <ide> ActiveRecord::IdentityMap.clear <ide> end <ide> <del> # Backport skip to Ruby 1.8. test/unit doesn't support it, so just <add> # Backport skip to Ruby 1.8. test/unit doesn't support it, so just <ide> # make it a noop. <ide> unless instance_methods.map(&:to_s).include?("skip") <ide> def skip(message) <ide><path>activerecord/lib/active_record/validations.rb <ide> module ActiveRecord <ide> # = Active Record RecordInvalid <ide> # <del> # Raised by <tt>save!</tt> and <tt>create!</tt> when the record is invalid. Use the <add> # Raised by <tt>save!</tt> and <tt>create!</tt> when the record is invalid. Use the <ide> # +record+ method to retrieve the record which did not validate. <ide> # <ide> # begin <ide><path>activerecord/lib/active_record/validations/associated.rb <ide> module ClassMethods <ide> # validation contexts by default (+nil+), other options are <tt>:create</tt> <ide> # and <tt>:update</tt>. <ide> # * <tt>:if</tt> - Specifies a method, proc or string to call to determine if the validation should <del> # occur (e.g. <tt>:if => :allow_validation</tt>, or <tt>:if => Proc.new { |user| user.signup_step > 2 }</tt>). The <add> # occur (e.g. <tt>:if => :allow_validation</tt>, or <tt>:if => Proc.new { |user| user.signup_step > 2 }</tt>). The <ide> # method, proc or string should return or evaluate to a true or false value. <ide> # * <tt>:unless</tt> - Specifies a method, proc or string to call to determine if the validation should <del> # not occur (e.g. <tt>:unless => :skip_validation</tt>, or <tt>:unless => Proc.new { |user| user.signup_step <= 2 }</tt>). The <add> # not occur (e.g. <tt>:unless => :skip_validation</tt>, or <tt>:unless => Proc.new { |user| user.signup_step <= 2 }</tt>). 
The <ide> # method, proc or string should return or evaluate to a true or false value. <ide> def validates_associated(*attr_names) <ide> validates_with AssociatedValidator, _merge_attributes(attr_names) <ide><path>activerecord/lib/active_record/validations/uniqueness.rb <ide> module ClassMethods <ide> # validates_uniqueness_of :user_name, :scope => :account_id <ide> # end <ide> # <del> # Or even multiple scope parameters. For example, making sure that a teacher can only be on the schedule once <add> # Or even multiple scope parameters. For example, making sure that a teacher can only be on the schedule once <ide> # per semester for a particular class. <ide> # <ide> # class TeacherSchedule < ActiveRecord::Base <ide> module ClassMethods <ide> # The method, proc or string should return or evaluate to a true or false value. <ide> # * <tt>:unless</tt> - Specifies a method, proc or string to call to determine if the validation should <ide> # not occur (e.g. <tt>:unless => :skip_validation</tt>, or <del> # <tt>:unless => Proc.new { |user| user.signup_step <= 2 }</tt>). The method, proc or string should <add> # <tt>:unless => Proc.new { |user| user.signup_step <= 2 }</tt>). The method, proc or string should <ide> # return or evaluate to a true or false value. <ide> # <ide> # === Concurrency and integrity
25
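The ActiveRecord commit above is a mechanical cleanup: each `<del>`/`<add>` pair differs only in the double space after a sentence-ending period. The same rewrite can be expressed as a one-line regex (a sketch, not necessarily the tool the committer used):

```python
import re

def collapse_sentence_spacing(line):
    """Collapse runs of spaces after a sentence-ending period into one,
    matching the before/after pairs in the diff above."""
    return re.sub(r"\. {2,}", ". ", line)

# One before/after pair taken from the diff.
before = "# attributes are +nil+.  Setting the value object to +nil+ has the effect"
after = "# attributes are +nil+. Setting the value object to +nil+ has the effect"
print(collapse_sentence_spacing(before) == after)  # True
```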
Go
Go
add remove flags for service update
dc33fc1ff433fcc70efc22f5cea9b87c6ec64a3b
<ide><path>api/client/service/opts.go <ide> func addServiceFlags(cmd *cobra.Command, opts *serviceOptions) { <ide> flags.StringVar(&opts.name, flagName, "", "Service name") <ide> flags.VarP(&opts.labels, flagLabel, "l", "Service labels") <ide> <del> flags.VarP(&opts.env, "env", "e", "Set environment variables") <add> flags.VarP(&opts.env, flagEnv, "e", "Set environment variables") <ide> flags.StringVarP(&opts.workdir, "workdir", "w", "", "Working directory inside the container") <ide> flags.StringVarP(&opts.user, flagUser, "u", "", "Username or UID") <ide> flags.Var(&opts.mounts, flagMount, "Attach a mount to the service") <ide> func addServiceFlags(cmd *cobra.Command, opts *serviceOptions) { <ide> const ( <ide> flagConstraint = "constraint" <ide> flagEndpointMode = "endpoint-mode" <add> flagEnv = "env" <add> flagEnvRemove = "remove-env" <ide> flagLabel = "label" <add> flagLabelRemove = "remove-label" <ide> flagLimitCPU = "limit-cpu" <ide> flagLimitMemory = "limit-memory" <ide> flagMode = "mode" <ide> flagMount = "mount" <add> flagMountRemove = "remove-mount" <ide> flagName = "name" <ide> flagNetwork = "network" <add> flagNetworkRemove = "remove-network" <ide> flagPublish = "publish" <add> flagPublishRemove = "remove-publish" <ide> flagReplicas = "replicas" <ide> flagReserveCPU = "reserve-cpu" <ide> flagReserveMemory = "reserve-memory" <ide><path>api/client/service/update.go <ide> package service <ide> <ide> import ( <ide> "fmt" <add> "strings" <ide> "time" <ide> <ide> "golang.org/x/net/context" <ide> func newUpdateCommand(dockerCli *client.DockerCli) *cobra.Command { <ide> flags.String("image", "", "Service image tag") <ide> flags.String("args", "", "Service command args") <ide> addServiceFlags(cmd, opts) <add> flags.StringSlice(flagEnvRemove, []string{}, "Remove an environment variable") <add> flags.StringSlice(flagLabelRemove, []string{}, "The key of a label to remove") <add> flags.StringSlice(flagMountRemove, []string{}, "The mount target for a mount to 
remove") <add> flags.StringSlice(flagPublishRemove, []string{}, "The target port to remove") <add> flags.StringSlice(flagNetworkRemove, []string{}, "The name of a network to remove") <ide> return cmd <ide> } <ide> <ide> func updateService(flags *pflag.FlagSet, spec *swarm.ServiceSpec) error { <ide> updateLabels(flags, &spec.Labels) <ide> updateString("image", &cspec.Image) <ide> updateStringToSlice(flags, "args", &cspec.Args) <del> updateListOpts("env", &cspec.Env) <add> updateEnvironment(flags, &cspec.Env) <ide> updateString("workdir", &cspec.Dir) <ide> updateString(flagUser, &cspec.User) <ide> updateMounts(flags, &cspec.Mounts) <ide> func updateService(flags *pflag.FlagSet, spec *swarm.ServiceSpec) error { <ide> updateDurationOpt((flagRestartWindow), task.RestartPolicy.Window) <ide> } <ide> <del> // TODO: The constraints field is fixed in #23773 <add> if flags.Changed(flagConstraint) { <add> task.Placement = &swarm.Placement{} <add> updateSlice(flagConstraint, &task.Placement.Constraints) <add> } <ide> <ide> if err := updateReplicas(flags, &spec.Mode); err != nil { <ide> return err <ide> func updateStringToSlice(flags *pflag.FlagSet, flag string, field *[]string) err <ide> return err <ide> } <ide> <del>func updateLabels(flags *pflag.FlagSet, field *map[string]string) { <del> if !flags.Changed(flagLabel) { <del> return <add>func anyChanged(flags *pflag.FlagSet, fields ...string) bool { <add> for _, flag := range fields { <add> if flags.Changed(flag) { <add> return true <add> } <ide> } <add> return false <add>} <add> <add>func updateLabels(flags *pflag.FlagSet, field *map[string]string) { <add> if flags.Changed(flagLabel) { <add> if field == nil { <add> *field = map[string]string{} <add> } <ide> <del> values := flags.Lookup(flagLabel).Value.(*opts.ListOpts).GetAll() <add> values := flags.Lookup(flagLabel).Value.(*opts.ListOpts).GetAll() <add> for key, value := range runconfigopts.ConvertKVStringsToMap(values) { <add> (*field)[key] = value <add> } <add> } <ide> 
<del> localLabels := map[string]string{} <del> for key, value := range runconfigopts.ConvertKVStringsToMap(values) { <del> localLabels[key] = value <add> if field != nil && flags.Changed(flagLabelRemove) { <add> toRemove, _ := flags.GetStringSlice(flagLabelRemove) <add> for _, label := range toRemove { <add> delete(*field, label) <add> } <ide> } <del> *field = localLabels <ide> } <ide> <del>func anyChanged(flags *pflag.FlagSet, fields ...string) bool { <del> for _, flag := range fields { <del> if flags.Changed(flag) { <del> return true <add>func updateEnvironment(flags *pflag.FlagSet, field *[]string) { <add> if flags.Changed(flagEnv) { <add> value := flags.Lookup(flagEnv).Value.(*opts.ListOpts) <add> *field = append(*field, value.GetAll()...) <add> } <add> if flags.Changed(flagEnvRemove) { <add> toRemove, _ := flags.GetStringSlice(flagEnvRemove) <add> for _, envRemove := range toRemove { <add> for i, env := range *field { <add> key := envKey(env) <add> if key == envRemove { <add> *field = append((*field)[:i], (*field)[i+1:]...) <add> } <add> } <ide> } <ide> } <del> return false <ide> } <ide> <del>// TODO: should this override by destination path, or does swarm handle that? <add>func envKey(value string) string { <add> kv := strings.SplitN(value, "=", 2) <add> return kv[0] <add>} <add> <ide> func updateMounts(flags *pflag.FlagSet, mounts *[]swarm.Mount) { <del> if !flags.Changed(flagMount) { <del> return <add> if flags.Changed(flagMount) { <add> values := flags.Lookup(flagMount).Value.(*MountOpt).Value() <add> *mounts = append(*mounts, values...) <add> } <add> if flags.Changed(flagMountRemove) { <add> toRemove, _ := flags.GetStringSlice(flagMountRemove) <add> for _, mountTarget := range toRemove { <add> for i, mount := range *mounts { <add> if mount.Target == mountTarget { <add> *mounts = append((*mounts)[:i], (*mounts)[i+1:]...) 
<add> } <add> } <add> } <ide> } <del> <del> *mounts = flags.Lookup(flagMount).Value.(*MountOpt).Value() <ide> } <ide> <del>// TODO: should this override by name, or does swarm handle that? <ide> func updatePorts(flags *pflag.FlagSet, portConfig *[]swarm.PortConfig) { <del> if !flags.Changed(flagPublish) { <del> return <del> } <add> if flags.Changed(flagPublish) { <add> values := flags.Lookup(flagPublish).Value.(*opts.ListOpts).GetAll() <add> ports, portBindings, _ := nat.ParsePortSpecs(values) <ide> <del> values := flags.Lookup(flagPublish).Value.(*opts.ListOpts).GetAll() <del> ports, portBindings, _ := nat.ParsePortSpecs(values) <add> for port := range ports { <add> *portConfig = append(*portConfig, convertPortToPortConfig(port, portBindings)...) <add> } <add> } <ide> <del> var localPortConfig []swarm.PortConfig <del> for port := range ports { <del> localPortConfig = append(localPortConfig, convertPortToPortConfig(port, portBindings)...) <add> if flags.Changed(flagPublishRemove) { <add> toRemove, _ := flags.GetStringSlice(flagPublishRemove) <add> for _, rawTargetPort := range toRemove { <add> targetPort := nat.Port(rawTargetPort) <add> for i, port := range *portConfig { <add> if string(port.Protocol) == targetPort.Proto() && <add> port.TargetPort == uint32(targetPort.Int()) { <add> *portConfig = append((*portConfig)[:i], (*portConfig)[i+1:]...) 
<add> } <add> } <add> } <ide> } <del> *portConfig = localPortConfig <ide> } <ide> <ide> func updateNetworks(flags *pflag.FlagSet, attachments *[]swarm.NetworkAttachmentConfig) { <del> if !flags.Changed(flagNetwork) { <del> return <add> if flags.Changed(flagNetwork) { <add> networks, _ := flags.GetStringSlice(flagNetwork) <add> for _, network := range networks { <add> *attachments = append(*attachments, swarm.NetworkAttachmentConfig{Target: network}) <add> } <ide> } <del> networks, _ := flags.GetStringSlice(flagNetwork) <del> <del> var localAttachments []swarm.NetworkAttachmentConfig <del> for _, network := range networks { <del> localAttachments = append(localAttachments, swarm.NetworkAttachmentConfig{Target: network}) <add> if flags.Changed(flagNetworkRemove) { <add> toRemove, _ := flags.GetStringSlice(flagNetworkRemove) <add> for _, networkTarget := range toRemove { <add> for i, network := range *attachments { <add> if network.Target == networkTarget { <add> *attachments = append((*attachments)[:i], (*attachments)[i+1:]...) 
<add> } <add> } <add> } <ide> } <del> *attachments = localAttachments <ide> } <ide> <ide> func updateReplicas(flags *pflag.FlagSet, serviceMode *swarm.ServiceMode) error { <ide><path>api/client/service/update_test.go <ide> func TestUpdateServiceArgs(t *testing.T) { <ide> updateService(flags, spec) <ide> assert.EqualStringSlice(t, cspec.Args, []string{"the", "new args"}) <ide> } <add> <add>func TestUpdateLabels(t *testing.T) { <add> flags := newUpdateCommand(nil).Flags() <add> flags.Set("label", "toadd=newlabel") <add> flags.Set("remove-label", "toremove") <add> <add> labels := map[string]string{ <add> "toremove": "thelabeltoremove", <add> "tokeep": "value", <add> } <add> <add> updateLabels(flags, &labels) <add> assert.Equal(t, len(labels), 2) <add> assert.Equal(t, labels["tokeep"], "value") <add> assert.Equal(t, labels["toadd"], "newlabel") <add>} <add> <add>func TestUpdateEnvironment(t *testing.T) { <add> flags := newUpdateCommand(nil).Flags() <add> flags.Set("env", "toadd=newenv") <add> flags.Set("remove-env", "toremove") <add> <add> envs := []string{ <add> "toremove=theenvtoremove", <add> "tokeep=value", <add> } <add> <add> updateEnvironment(flags, &envs) <add> assert.Equal(t, len(envs), 2) <add> assert.Equal(t, envs[0], "tokeep=value") <add> assert.Equal(t, envs[1], "toadd=newenv") <add>} <add> <add>func TestUpdateMounts(t *testing.T) { <add> flags := newUpdateCommand(nil).Flags() <add> flags.Set("mount", "type=volume,target=/toadd") <add> flags.Set("remove-mount", "/toremove") <add> <add> mounts := []swarm.Mount{ <add> {Target: "/toremove", Type: swarm.MountType("BIND")}, <add> {Target: "/tokeep", Type: swarm.MountType("BIND")}, <add> } <add> <add> updateMounts(flags, &mounts) <add> assert.Equal(t, len(mounts), 2) <add> assert.Equal(t, mounts[0].Target, "/tokeep") <add> assert.Equal(t, mounts[1].Target, "/toadd") <add>} <add> <add>func TestUpdateNetworks(t *testing.T) { <add> flags := newUpdateCommand(nil).Flags() <add> flags.Set("network", "toadd") <add> 
flags.Set("remove-network", "toremove") <add> <add> attachments := []swarm.NetworkAttachmentConfig{ <add> {Target: "toremove", Aliases: []string{"foo"}}, <add> {Target: "tokeep"}, <add> } <add> <add> updateNetworks(flags, &attachments) <add> assert.Equal(t, len(attachments), 2) <add> assert.Equal(t, attachments[0].Target, "tokeep") <add> assert.Equal(t, attachments[1].Target, "toadd") <add>} <add> <add>func TestUpdatePorts(t *testing.T) { <add> flags := newUpdateCommand(nil).Flags() <add> flags.Set("publish", "1000:1000") <add> flags.Set("remove-publish", "333/udp") <add> <add> portConfigs := []swarm.PortConfig{ <add> {TargetPort: 333, Protocol: swarm.PortConfigProtocol("udp")}, <add> {TargetPort: 555}, <add> } <add> <add> updatePorts(flags, &portConfigs) <add> assert.Equal(t, len(portConfigs), 2) <add> assert.Equal(t, portConfigs[0].TargetPort, uint32(555)) <add> assert.Equal(t, portConfigs[1].TargetPort, uint32(1000)) <add>} <ide><path>integration-cli/docker_cli_service_update_test.go <ide> func (s *DockerSwarmSuite) TestServiceUpdatePort(c *check.C) { <ide> waitAndAssert(c, defaultReconciliationTimeout, d.checkActiveContainerCount, checker.Equals, 1) <ide> <ide> // Update the service: changed the port mapping from 8080:8081 to 8082:8083. <del> _, err = d.Cmd("service", "update", "-p", "8082:8083", serviceName) <add> _, err = d.Cmd("service", "update", "-p", "8082:8083", "--remove-publish", "8081", serviceName) <ide> c.Assert(err, checker.IsNil) <ide> <ide> // Inspect the service and verify port mapping
4
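The add/remove flag handling in the Go patch above follows one pattern throughout: append the new entries, then drop any existing entry whose key matches a `--remove-*` value (for env vars, the key is the part before `=`). A minimal Python sketch of that logic — function and parameter names are illustrative, not Docker's API — looks like this:

```python
def update_environment(env, to_add=(), to_remove=()):
    """Return a new env list with `to_add` entries appended and any
    entry whose KEY matches a name in `to_remove` dropped.

    Entries are "KEY=VALUE" strings; removal matches on the KEY only,
    mirroring how the patch's envKey() splits on the first '='.
    """
    def key(entry):
        return entry.split("=", 1)[0]

    removed = set(to_remove)
    kept = [entry for entry in env if key(entry) not in removed]
    return kept + list(to_add)

env = ["toremove=theenvtoremove", "tokeep=value"]
print(update_environment(env, to_add=["toadd=newenv"], to_remove=["toremove"]))
# ['tokeep=value', 'toadd=newenv']
```

Filtering into a fresh list sidesteps the remove-while-iterating bookkeeping that the Go version does with slice re-appends.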
Javascript
Javascript
remove numeric fallback of symbols
587e759302ef1cc02954831ccc72f7f668e32426
<ide><path>packages/react/src/__tests__/ReactElement-test.js <ide> let ReactTestUtils; <ide> <ide> describe('ReactElement', () => { <ide> let ComponentClass; <del> let originalSymbol; <ide> <ide> beforeEach(() => { <ide> jest.resetModules(); <ide> <del> // Delete the native Symbol if we have one to ensure we test the <del> // unpolyfilled environment. <del> originalSymbol = global.Symbol; <del> global.Symbol = undefined; <del> <ide> React = require('react'); <ide> ReactDOM = require('react-dom'); <ide> ReactTestUtils = require('react-dom/test-utils'); <ide> describe('ReactElement', () => { <ide> }; <ide> }); <ide> <del> afterEach(() => { <del> global.Symbol = originalSymbol; <del> }); <del> <del> it('uses the fallback value when in an environment without Symbol', () => { <del> expect((<div />).$$typeof).toBe(0xeac7); <del> }); <del> <ide> it('returns a complete element according to spec', () => { <ide> const element = React.createElement(ComponentClass); <ide> expect(element.type).toBe(ComponentClass); <ide> describe('ReactElement', () => { <ide> expect(element.type.someStaticMethod()).toBe('someReturnValue'); <ide> }); <ide> <del> // NOTE: We're explicitly not using JSX here. This is intended to test <del> // classic JS without JSX. 
<del> it('identifies valid elements', () => { <del> class Component extends React.Component { <del> render() { <del> return React.createElement('div'); <del> } <del> } <del> <del> expect(React.isValidElement(React.createElement('div'))).toEqual(true); <del> expect(React.isValidElement(React.createElement(Component))).toEqual(true); <del> <del> expect(React.isValidElement(null)).toEqual(false); <del> expect(React.isValidElement(true)).toEqual(false); <del> expect(React.isValidElement({})).toEqual(false); <del> expect(React.isValidElement('string')).toEqual(false); <del> if (!__EXPERIMENTAL__) { <del> let factory; <del> expect(() => { <del> factory = React.createFactory('div'); <del> }).toWarnDev( <del> 'Warning: React.createFactory() is deprecated and will be removed in a ' + <del> 'future major release. Consider using JSX or use React.createElement() ' + <del> 'directly instead.', <del> {withoutStack: true}, <del> ); <del> expect(React.isValidElement(factory)).toEqual(false); <del> } <del> expect(React.isValidElement(Component)).toEqual(false); <del> expect(React.isValidElement({type: 'div', props: {}})).toEqual(false); <del> <del> const jsonElement = JSON.stringify(React.createElement('div')); <del> expect(React.isValidElement(JSON.parse(jsonElement))).toBe(true); <del> }); <del> <ide> // NOTE: We're explicitly not using JSX here. This is intended to test <ide> // classic JS without JSX. <ide> it('is indistinguishable from a plain object', () => { <ide> describe('ReactElement', () => { <ide> // NOTE: We're explicitly not using JSX here. This is intended to test <ide> // classic JS without JSX. <ide> it('identifies elements, but not JSON, if Symbols are supported', () => { <del> // Rudimentary polyfill <del> // Once all jest engines support Symbols natively we can swap this to test <del> // WITH native Symbols by default. 
<del> const REACT_ELEMENT_TYPE = function() {}; // fake Symbol <del> const OTHER_SYMBOL = function() {}; // another fake Symbol <del> global.Symbol = function(name) { <del> return OTHER_SYMBOL; <del> }; <del> global.Symbol.for = function(key) { <del> if (key === 'react.element') { <del> return REACT_ELEMENT_TYPE; <del> } <del> return OTHER_SYMBOL; <del> }; <del> <del> jest.resetModules(); <del> <del> React = require('react'); <del> <ide> class Component extends React.Component { <ide> render() { <ide> return React.createElement('div'); <ide><path>packages/react/src/__tests__/ReactElementJSX-test.js <ide> let JSXDEVRuntime; <ide> // A lot of these tests are pulled from ReactElement-test because <ide> // this api is meant to be backwards compatible. <ide> describe('ReactElement.jsx', () => { <del> let originalSymbol; <del> <ide> beforeEach(() => { <ide> jest.resetModules(); <ide> <del> // Delete the native Symbol if we have one to ensure we test the <del> // unpolyfilled environment. <del> originalSymbol = global.Symbol; <del> global.Symbol = undefined; <del> <ide> React = require('react'); <ide> JSXRuntime = require('react/jsx-runtime'); <ide> JSXDEVRuntime = require('react/jsx-dev-runtime'); <ide> ReactDOM = require('react-dom'); <ide> ReactTestUtils = require('react-dom/test-utils'); <ide> }); <ide> <del> afterEach(() => { <del> global.Symbol = originalSymbol; <del> }); <del> <ide> it('allows static methods to be called using the type property', () => { <ide> class StaticMethodComponentClass extends React.Component { <ide> render() { <ide> describe('ReactElement.jsx', () => { <ide> expect(element.type.someStaticMethod()).toBe('someReturnValue'); <ide> }); <ide> <del> it('identifies valid elements', () => { <del> class Component extends React.Component { <del> render() { <del> return JSXRuntime.jsx('div', {}); <del> } <del> } <del> <del> expect(React.isValidElement(JSXRuntime.jsx('div', {}))).toEqual(true); <del> 
expect(React.isValidElement(JSXRuntime.jsx(Component, {}))).toEqual(true); <del> expect( <del> React.isValidElement(JSXRuntime.jsx(JSXRuntime.Fragment, {})), <del> ).toEqual(true); <del> if (__DEV__) { <del> expect(React.isValidElement(JSXDEVRuntime.jsxDEV('div', {}))).toEqual( <del> true, <del> ); <del> } <del> <del> expect(React.isValidElement(null)).toEqual(false); <del> expect(React.isValidElement(true)).toEqual(false); <del> expect(React.isValidElement({})).toEqual(false); <del> expect(React.isValidElement('string')).toEqual(false); <del> if (!__EXPERIMENTAL__) { <del> let factory; <del> expect(() => { <del> factory = React.createFactory('div'); <del> }).toWarnDev( <del> 'Warning: React.createFactory() is deprecated and will be removed in a ' + <del> 'future major release. Consider using JSX or use React.createElement() ' + <del> 'directly instead.', <del> {withoutStack: true}, <del> ); <del> expect(React.isValidElement(factory)).toEqual(false); <del> } <del> expect(React.isValidElement(Component)).toEqual(false); <del> expect(React.isValidElement({type: 'div', props: {}})).toEqual(false); <del> <del> const jsonElement = JSON.stringify(JSXRuntime.jsx('div', {})); <del> expect(React.isValidElement(JSON.parse(jsonElement))).toBe(true); <del> }); <del> <ide> it('is indistinguishable from a plain object', () => { <ide> const element = JSXRuntime.jsx('div', {className: 'foo'}); <ide> const object = {}; <ide> describe('ReactElement.jsx', () => { <ide> }); <ide> <ide> it('identifies elements, but not JSON, if Symbols are supported', () => { <del> // Rudimentary polyfill <del> // Once all jest engines support Symbols natively we can swap this to test <del> // WITH native Symbols by default. 
<del> const REACT_ELEMENT_TYPE = function() {}; // fake Symbol <del> const OTHER_SYMBOL = function() {}; // another fake Symbol <del> global.Symbol = function(name) { <del> return OTHER_SYMBOL; <del> }; <del> global.Symbol.for = function(key) { <del> if (key === 'react.element') { <del> return REACT_ELEMENT_TYPE; <del> } <del> return OTHER_SYMBOL; <del> }; <del> <del> jest.resetModules(); <del> <del> React = require('react'); <del> JSXRuntime = require('react/jsx-runtime'); <del> <ide> class Component extends React.Component { <ide> render() { <del> return JSXRuntime.jsx('div'); <add> return JSXRuntime.jsx('div', {}); <ide> } <ide> } <ide> <ide> expect(React.isValidElement(JSXRuntime.jsx('div', {}))).toEqual(true); <ide> expect(React.isValidElement(JSXRuntime.jsx(Component, {}))).toEqual(true); <add> expect( <add> React.isValidElement(JSXRuntime.jsx(JSXRuntime.Fragment, {})), <add> ).toEqual(true); <add> if (__DEV__) { <add> expect(React.isValidElement(JSXDEVRuntime.jsxDEV('div', {}))).toEqual( <add> true, <add> ); <add> } <ide> <ide> expect(React.isValidElement(null)).toEqual(false); <ide> expect(React.isValidElement(true)).toEqual(false); <ide><path>packages/shared/ReactSymbols.js <ide> // When adding new symbols to this file, <ide> // Please consider also adding to 'react-devtools-shared/src/backend/ReactSymbols' <ide> <del>// The Symbol used to tag the ReactElement-like types. If there is no native Symbol <del>// nor polyfill, then a plain number is used for performance. 
<del>export let REACT_ELEMENT_TYPE = 0xeac7; <del>export let REACT_PORTAL_TYPE = 0xeaca; <del>export let REACT_FRAGMENT_TYPE = 0xeacb; <del>export let REACT_STRICT_MODE_TYPE = 0xeacc; <del>export let REACT_PROFILER_TYPE = 0xead2; <del>export let REACT_PROVIDER_TYPE = 0xeacd; <del>export let REACT_CONTEXT_TYPE = 0xeace; <del>export let REACT_FORWARD_REF_TYPE = 0xead0; <del>export let REACT_SUSPENSE_TYPE = 0xead1; <del>export let REACT_SUSPENSE_LIST_TYPE = 0xead8; <del>export let REACT_MEMO_TYPE = 0xead3; <del>export let REACT_LAZY_TYPE = 0xead4; <del>export let REACT_SCOPE_TYPE = 0xead7; <del>export let REACT_DEBUG_TRACING_MODE_TYPE = 0xeae1; <del>export let REACT_OFFSCREEN_TYPE = 0xeae2; <del>export let REACT_LEGACY_HIDDEN_TYPE = 0xeae3; <del>export let REACT_CACHE_TYPE = 0xeae4; <del>export let REACT_TRACING_MARKER_TYPE = 0xeae5; <add>// The Symbol used to tag the ReactElement-like types. <add>export const REACT_ELEMENT_TYPE = Symbol.for('react.element'); <add>export const REACT_PORTAL_TYPE = Symbol.for('react.portal'); <add>export const REACT_FRAGMENT_TYPE = Symbol.for('react.fragment'); <add>export const REACT_STRICT_MODE_TYPE = Symbol.for('react.strict_mode'); <add>export const REACT_PROFILER_TYPE = Symbol.for('react.profiler'); <add>export const REACT_PROVIDER_TYPE = Symbol.for('react.provider'); <add>export const REACT_CONTEXT_TYPE = Symbol.for('react.context'); <add>export const REACT_FORWARD_REF_TYPE = Symbol.for('react.forward_ref'); <add>export const REACT_SUSPENSE_TYPE = Symbol.for('react.suspense'); <add>export const REACT_SUSPENSE_LIST_TYPE = Symbol.for('react.suspense_list'); <add>export const REACT_MEMO_TYPE = Symbol.for('react.memo'); <add>export const REACT_LAZY_TYPE = Symbol.for('react.lazy'); <add>export const REACT_SCOPE_TYPE = Symbol.for('react.scope'); <add>export const REACT_DEBUG_TRACING_MODE_TYPE = Symbol.for( <add> 'react.debug_trace_mode', <add>); <add>export const REACT_OFFSCREEN_TYPE = Symbol.for('react.offscreen'); <add>export const 
REACT_LEGACY_HIDDEN_TYPE = Symbol.for('react.legacy_hidden'); <add>export const REACT_CACHE_TYPE = Symbol.for('react.cache'); <add>export const REACT_TRACING_MARKER_TYPE = Symbol.for('react.tracing_marker'); <ide> <del>if (typeof Symbol === 'function' && Symbol.for) { <del> const symbolFor = Symbol.for; <del> REACT_ELEMENT_TYPE = symbolFor('react.element'); <del> REACT_PORTAL_TYPE = symbolFor('react.portal'); <del> REACT_FRAGMENT_TYPE = symbolFor('react.fragment'); <del> REACT_STRICT_MODE_TYPE = symbolFor('react.strict_mode'); <del> REACT_PROFILER_TYPE = symbolFor('react.profiler'); <del> REACT_PROVIDER_TYPE = symbolFor('react.provider'); <del> REACT_CONTEXT_TYPE = symbolFor('react.context'); <del> REACT_FORWARD_REF_TYPE = symbolFor('react.forward_ref'); <del> REACT_SUSPENSE_TYPE = symbolFor('react.suspense'); <del> REACT_SUSPENSE_LIST_TYPE = symbolFor('react.suspense_list'); <del> REACT_MEMO_TYPE = symbolFor('react.memo'); <del> REACT_LAZY_TYPE = symbolFor('react.lazy'); <del> REACT_SCOPE_TYPE = symbolFor('react.scope'); <del> REACT_DEBUG_TRACING_MODE_TYPE = symbolFor('react.debug_trace_mode'); <del> REACT_OFFSCREEN_TYPE = symbolFor('react.offscreen'); <del> REACT_LEGACY_HIDDEN_TYPE = symbolFor('react.legacy_hidden'); <del> REACT_CACHE_TYPE = symbolFor('react.cache'); <del> REACT_TRACING_MARKER_TYPE = symbolFor('react.tracing_marker'); <del>} <del> <del>const MAYBE_ITERATOR_SYMBOL = typeof Symbol === 'function' && Symbol.iterator; <add>const MAYBE_ITERATOR_SYMBOL = Symbol.iterator; <ide> const FAUX_ITERATOR_SYMBOL = '@@iterator'; <ide> <ide> export function getIteratorFn(maybeIterable: ?any): ?() => ?Iterator<*> { <ide><path>packages/shared/__tests__/ReactSymbols-test.internal.js <ide> describe('ReactSymbols', () => { <ide> it('Symbol values should be unique', () => { <ide> expectToBeUnique(Object.entries(require('shared/ReactSymbols'))); <ide> }); <del> <del> it('numeric values should be unique', () => { <del> const originalSymbolFor = global.Symbol.for; <del> 
global.Symbol.for = null; <del> try { <del> const entries = Object.entries(require('shared/ReactSymbols')).filter( <del> // REACT_ASYNC_MODE_TYPE and REACT_CONCURRENT_MODE_TYPE have the same numeric value <del> // for legacy backwards compatibility <del> ([key]) => key !== 'REACT_ASYNC_MODE_TYPE', <del> ); <del> expectToBeUnique(entries); <del> } finally { <del> global.Symbol.for = originalSymbolFor; <del> } <del> }); <ide> }); <ide><path>packages/shared/isValidElementType.js <ide> import { <ide> enableTransitionTracing, <ide> } from './ReactFeatureFlags'; <ide> <del>let REACT_MODULE_REFERENCE: number | Symbol = 0; <del>if (typeof Symbol === 'function') { <del> REACT_MODULE_REFERENCE = Symbol.for('react.module.reference'); <del>} <add>const REACT_MODULE_REFERENCE: Symbol = Symbol.for('react.module.reference'); <ide> <ide> export default function isValidElementType(type: mixed) { <ide> if (typeof type === 'string' || typeof type === 'function') {
5
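The React patch above drops the numeric fallbacks (`0xeac7` and friends) in favor of `Symbol.for(...)`. The point of the global symbol registry is that the same key always yields the *same* value, so identity checks work across module copies — while a plain number (or a string) would survive a JSON round-trip and let forged objects pass `isValidElement`, which is why the old JSON-based tests were deleted. Python has no Symbol type; a dict of interned sentinel objects is a rough stand-in (all names here are illustrative):

```python
_registry = {}

def symbol_for(key):
    """Return a process-wide unique sentinel for `key`, creating it on
    first use. Repeated calls with the same key return the same object,
    so `is` comparisons hold everywhere -- the guarantee that
    Symbol.for('react.element') gives the React renderers."""
    if key not in _registry:
        _registry[key] = object()
    return _registry[key]

REACT_ELEMENT_TYPE = symbol_for("react.element")

def is_valid_element(obj):
    # Identity check against the sentinel, not equality against a value
    # that untrusted data (e.g. parsed JSON) could reproduce.
    return isinstance(obj, dict) and obj.get("$$typeof") is REACT_ELEMENT_TYPE

element = {"$$typeof": symbol_for("react.element"), "type": "div"}
print(is_valid_element(element))                        # True
print(is_valid_element({"$$typeof": "react.element"}))  # False: a string is not the sentinel
```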
Python
Python
fix data type of parameter shape
119bf865b15747bea815ec3ced10e2bbc1ba8de1
<ide><path>numpy/ma/extras.py <ide> def masked_all(shape, dtype=float): <ide> <ide> Parameters <ide> ---------- <del> shape : tuple <del> Shape of the required MaskedArray. <add> shape : int or tuple of ints <add> Shape of the required MaskedArray, e.g., ``(2, 3)`` or ``2``. <ide> dtype : dtype, optional <ide> Data type of the output. <ide>
1
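The numpy docstring fix above documents that `shape` accepts either a single int or a tuple of ints. The usual way a function honors that contract is to normalize the int case up front; a sketch of that convention (this is not NumPy's internal code):

```python
def normalize_shape(shape):
    """Accept the shapes the corrected docstring describes: a single
    int (e.g. ``2``) or a sequence of ints (e.g. ``(2, 3)``), and
    return a canonical tuple either way."""
    if isinstance(shape, int):
        return (shape,)
    return tuple(shape)

print(normalize_shape(2))       # (2,)
print(normalize_shape((2, 3)))  # (2, 3)
```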
PHP
PHP
simplify const names
e41f9a4743f0635defa6d1bba1baa1affa17b5c4
<ide><path>src/Validation/Validation.php <ide> class Validation <ide> <ide> /** <ide> * Default locale <del> * <del> * @var string <ide> */ <ide> const DEFAULT_LOCALE = 'en_US'; <ide> <ide> /** <ide> * Same as operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_SAME_AS = '==='; <add> const COMPARE_SAME = '==='; <ide> <ide> /** <ide> * Not same as comparison operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_NOT_SAME_AS = '!=='; <add> const COMPARE_NOT_SAME = '!=='; <ide> <ide> /** <ide> * Equal to comparison operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_EQUAL_TO = '=='; <add> const COMPARE_EQUAL = '=='; <ide> <ide> /** <ide> * Not equal to comparison operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_NOT_EQUAL_TO = '!='; <add> const COMPARE_NOT_EQUAL = '!='; <ide> <ide> /** <ide> * Greater than comparison operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_GREATER_THAN = '>'; <add> const COMPARE_GREATER = '>'; <ide> <ide> /** <ide> * Greater than or equal to comparison operator. <del> * <del> * @var string <ide> */ <ide> const COMPARE_GREATER_OR_EQUAL = '>='; <ide> <ide> /** <ide> * Less than comparison operator. <del> * <del> * @var string <ide> */ <del> const COMPARE_LESS_THAN = '<'; <add> const COMPARE_LESS = '<'; <ide> <ide> /** <ide> * Less than or equal to comparison operator. 
<del> * <del> * @var string <ide> */ <ide> const COMPARE_LESS_OR_EQUAL = '<='; <ide> <ide> public static function comparison($check1, $operator, $check2) <ide> $operator = str_replace([' ', "\t", "\n", "\r", "\0", "\x0B"], '', strtolower($operator)); <ide> switch ($operator) { <ide> case 'isgreater': <del> case static::COMPARE_GREATER_THAN: <add> case static::COMPARE_GREATER: <ide> if ($check1 > $check2) { <ide> return true; <ide> } <ide> break; <ide> case 'isless': <del> case static::COMPARE_LESS_THAN: <add> case static::COMPARE_LESS: <ide> if ($check1 < $check2) { <ide> return true; <ide> } <ide> public static function comparison($check1, $operator, $check2) <ide> } <ide> break; <ide> case 'equalto': <del> case static::COMPARE_EQUAL_TO: <add> case static::COMPARE_EQUAL: <ide> if ($check1 == $check2) { <ide> return true; <ide> } <ide> break; <ide> case 'notequal': <del> case static::COMPARE_NOT_EQUAL_TO: <add> case static::COMPARE_NOT_EQUAL: <ide> if ($check1 != $check2) { <ide> return true; <ide> } <ide> break; <ide> case 'sameas': <del> case static::COMPARE_SAME_AS: <add> case static::COMPARE_SAME: <ide> if ($check1 === $check2) { <ide> return true; <ide> } <ide> break; <ide> case 'notsameas': <del> case static::COMPARE_NOT_SAME_AS: <add> case static::COMPARE_NOT_SAME: <ide> if ($check1 !== $check2) { <ide> return true; <ide> } <ide> public static function comparison($check1, $operator, $check2) <ide> */ <ide> public static function compareWith($check, $field, $context) <ide> { <del> return self::compareFields($check, $field, static::COMPARE_SAME_AS, $context); <add> return self::compareFields($check, $field, static::COMPARE_SAME, $context); <ide> } <ide> <ide> /** <ide><path>src/Validation/Validator.php <ide> public function greaterThan($field, $value, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'greaterThan', $extra + [ <del> 'rule' => ['comparison', 
Validation::COMPARE_GREATER_THAN, $value] <add> 'rule' => ['comparison', Validation::COMPARE_GREATER, $value] <ide> ]); <ide> } <ide> <ide> public function lessThan($field, $value, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'lessThan', $extra + [ <del> 'rule' => ['comparison', Validation::COMPARE_LESS_THAN, $value] <add> 'rule' => ['comparison', Validation::COMPARE_LESS, $value] <ide> ]); <ide> } <ide> <ide> public function equals($field, $value, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'equals', $extra + [ <del> 'rule' => ['comparison', Validation::COMPARE_EQUAL_TO, $value] <add> 'rule' => ['comparison', Validation::COMPARE_EQUAL, $value] <ide> ]); <ide> } <ide> <ide> public function notEquals($field, $value, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'notEquals', $extra + [ <del> 'rule' => ['comparison', Validation::COMPARE_NOT_EQUAL_TO, $value] <add> 'rule' => ['comparison', Validation::COMPARE_NOT_EQUAL, $value] <ide> ]); <ide> } <ide> <ide> public function sameAs($field, $secondField, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'sameAs', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_SAME_AS] <add> 'rule' => ['compareFields', $secondField, Validation::COMPARE_SAME] <ide> ]); <ide> } <ide> <ide> public function notSameAs($field, $secondField, $message = null, $when = null) <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'notSameAs', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_NOT_SAME_AS] <add> 'rule' => ['compareFields', $secondField, 
Validation::COMPARE_NOT_SAME] <ide> ]); <ide> } <ide> <ide> public function equalToField($field, $secondField, $message = null, $when = null <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'equalToField', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_EQUAL_TO] <add> 'rule' => ['compareFields', $secondField, Validation::COMPARE_EQUAL] <ide> ]); <ide> } <ide> <ide> public function notEqualToField($field, $secondField, $message = null, $when = n <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'notEqualToField', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_NOT_EQUAL_TO] <add> 'rule' => ['compareFields', $secondField, Validation::COMPARE_NOT_EQUAL] <ide> ]); <ide> } <ide> <ide> public function greaterThanField($field, $secondField, $message = null, $when = <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'greaterThanField', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_GREATER_THAN] <add> 'rule' => ['compareFields', $secondField, Validation::COMPARE_GREATER] <ide> ]); <ide> } <ide> <ide> public function lessThanField($field, $secondField, $message = null, $when = nul <ide> $extra = array_filter(['on' => $when, 'message' => $message]); <ide> <ide> return $this->add($field, 'lessThanField', $extra + [ <del> 'rule' => ['compareFields', $secondField, Validation::COMPARE_LESS_THAN] <add> 'rule' => ['compareFields', $secondField, Validation::COMPARE_LESS] <ide> ]); <ide> } <ide>
2
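The renamed `Validation::COMPARE_*` constants above still feed a long `switch` that maps operator strings to comparisons. The same dispatch can be table-driven; a Python sketch using the `operator` module, with PHP's strict `===` approximated as same-type-and-equal (constant names mirror the patch, the rest is illustrative):

```python
import operator

COMPARE_SAME, COMPARE_NOT_SAME = "===", "!=="
COMPARE_EQUAL, COMPARE_NOT_EQUAL = "==", "!="
COMPARE_GREATER, COMPARE_GREATER_OR_EQUAL = ">", ">="
COMPARE_LESS, COMPARE_LESS_OR_EQUAL = "<", "<="

def _same(a, b):
    # Rough stand-in for PHP's ===: equal value AND identical type.
    return type(a) is type(b) and a == b

_OPS = {
    COMPARE_SAME: _same,
    COMPARE_NOT_SAME: lambda a, b: not _same(a, b),
    COMPARE_EQUAL: operator.eq,
    COMPARE_NOT_EQUAL: operator.ne,
    COMPARE_GREATER: operator.gt,
    COMPARE_GREATER_OR_EQUAL: operator.ge,
    COMPARE_LESS: operator.lt,
    COMPARE_LESS_OR_EQUAL: operator.le,
}

def comparison(check1, op, check2):
    func = _OPS.get(op)
    if func is None:
        raise ValueError(f"unknown operator: {op!r}")
    return func(check1, check2)

print(comparison(2, COMPARE_GREATER, 1))  # True
print(comparison(1, COMPARE_SAME, 1.0))   # False: equal value, different type
```

A lookup table keeps each operator's meaning in one line and makes the "unknown operator" failure mode explicit.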
Text
Text
fix minor typos in autoloading guide
fec81049fe57bab6cca53b8c2d11bd668c0c9942
<ide><path>guides/source/constant_autoloading_and_reloading.md <ide> that may live in any other class or module object. If there were any, they <ide> would have separate entries in their respective constant tables. <ide> <ide> Put special attention in the previous paragraphs to the distinction between <del>class and module objects, constant names, and value objects assiociated to them <add>class and module objects, constant names, and value objects associated to them <ide> in constant tables. <ide> <ide> ### Resolution Algorithm for Relative Constants <ide> the code. <ide> Autoloading keeps track of autoloaded constants. Reloading is implemented by <ide> removing them all from their respective classes and modules using <ide> `Module#remove_const`. That way, when the code goes on, those constants are <del>going to be unkown again, and files reloaded on demand. <add>going to be unknown again, and files reloaded on demand. <ide> <ide> INFO. This is an all-or-nothing operation, Rails does not attempt to reload only <ide> what changed since dependencies between classes makes that really tricky. <ide> the boot process. But constant autoloading in Rails is **not** implemented with <ide> <ide> One possible implementation based on `Module#autoload` would be to walk the <ide> application tree and issue `autoload` calls that map existing file names to <del>their conventional contant name. <add>their conventional constant name. <ide> <ide> There are a number of reasons that prevent Rails from using that implementation. <ide> <ide> constant was missing and so it is not able to act as Ruby would. In particular, <ide> if `Admin::User` is autoloadable, it will get autoloaded in either case. 
<ide> <ide> Albeit qualified constants with `class` and `module` keywords may technically <del>work with autoloading in some cases, it is preferrable to use relative constants <add>work with autoloading in some cases, it is preferable to use relative constants <ide> instead: <ide> <ide> ```ruby <ide> way. Normally, though, such a call does not make sense in an initializer. <ide> <ide> `require_dependency` provides a way to ensure a certain constant is defined at <ide> some point regardless of the execution path, and one could think about doing <del>some calls in an initialzer to make sure certain constants are loaded upfront, <add>some calls in an initializer to make sure certain constants are loaded upfront, <ide> for example as an attempt to address the gotcha with STIs. <ide> <ide> Problem is, in development mode all autoloaded constants are wiped on a
1
PHP
PHP
add fixture for failing test
83644ea4263915d58de8e95a09b513158fa97b68
<ide><path>tests/test_app/Plugin/Company/TestPluginThree/tests/Fixture/ArticlesFixture.php <add><?php <add>/** <add> * CakePHP(tm) : Rapid Development Framework (http://cakephp.org) <add> * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * <add> * Licensed under The MIT License <add> * For full copyright and license information, please see the LICENSE.txt <add> * Redistributions of files must retain the above copyright notice. <add> * <add> * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org) <add> * @link http://cakephp.org CakePHP(tm) Project <add> * @since 3.0.0 <add> * @license http://www.opensource.org/licenses/mit-license.php MIT License <add> */ <add>namespace Company\TestPluginThree\Test\Fixture; <add> <add>use Cake\TestSuite\Fixture\TestFixture; <add> <add>/** <add> * Plugin article fixture. <add> */ <add>class ArticlesFixture extends TestFixture <add>{ <add> <add> /** <add> * fields property <add> * <add> * @var array <add> */ <add> public $fields = [ <add> 'id' => ['type' => 'integer'], <add> 'author_id' => ['type' => 'integer', 'null' => true], <add> 'title' => ['type' => 'string', 'null' => true], <add> 'body' => 'text', <add> '_constraints' => ['primary' => ['type' => 'primary', 'columns' => ['id']]] <add> ]; <add> <add> /** <add> * records property <add> * <add> * @var array <add> */ <add> public $records = [ <add> ['author_id' => 1, 'title' => 'Plugin Article', 'body' => 'Plugin Article Body'], <add> ]; <add>}
1
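The CakePHP fixture above is schema (`$fields`) plus seed rows (`$records`) that the test suite loads into a scratch database. A minimal Python equivalent of that pattern, loading into in-memory SQLite — the field and record values come from the patch, but the loader itself is a sketch, not CakePHP's machinery:

```python
import sqlite3

class ArticlesFixture:
    """Schema plus seed records, loaded fresh before each test run."""
    table = "articles"
    fields = {
        "id": "INTEGER PRIMARY KEY",
        "author_id": "INTEGER",
        "title": "TEXT",
        "body": "TEXT",
    }
    records = [
        {"author_id": 1, "title": "Plugin Article", "body": "Plugin Article Body"},
    ]

    def load(self, conn):
        cols = ", ".join(f"{name} {ddl}" for name, ddl in self.fields.items())
        conn.execute(f"CREATE TABLE {self.table} ({cols})")
        for record in self.records:
            names = ", ".join(record)
            marks = ", ".join("?" for _ in record)
            conn.execute(
                f"INSERT INTO {self.table} ({names}) VALUES ({marks})",
                tuple(record.values()),
            )

conn = sqlite3.connect(":memory:")
ArticlesFixture().load(conn)
print(conn.execute("SELECT title FROM articles").fetchall())  # [('Plugin Article',)]
```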
Ruby
Ruby
fix actionmailer tests that depend on run order
73f0afd1d41aa6c3febcc2e93e4d19d9bf0f27dc
<ide><path>actionmailer/test/base_test.rb <ide> def stub_queue(klass, queue) <ide> end <ide> <ide> test "assets tags should use a Mailer's asset_host settings when available" do <del> ActionMailer::Base.config.asset_host = "global.com" <del> ActionMailer::Base.config.assets_dir = "global/" <add> begin <add> ActionMailer::Base.config.asset_host = "http://global.com" <add> ActionMailer::Base.config.assets_dir = "global/" <ide> <del> AssetMailer.asset_host = "http://local.com" <add> AssetMailer.asset_host = "http://local.com" <ide> <del> mail = AssetMailer.welcome <add> mail = AssetMailer.welcome <ide> <del> assert_equal(%{<img alt="Dummy" src="http://local.com/images/dummy.png" />}, mail.body.to_s.strip) <add> assert_equal(%{<img alt="Dummy" src="http://local.com/images/dummy.png" />}, mail.body.to_s.strip) <add> ensure <add> AssetMailer.asset_host = ActionMailer::Base.config.asset_host <add> end <ide> end <ide> <ide> # Before and After hooks
1
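The Ruby fix above wraps the test in `begin`/`ensure` so the mutated global (`AssetMailer.asset_host`) is restored even if an assertion fails — that is what removes the dependence on test run order. The same pattern in Python is `try`/`finally` (the `Config` class here is a stand-in for the shared state, not a real library):

```python
class Config:
    asset_host = "http://global.com"

def test_asset_host_override():
    """Mutate shared configuration for one test, then restore it no
    matter what, so later tests don't depend on run order."""
    original = Config.asset_host
    try:
        Config.asset_host = "http://local.com"
        assert Config.asset_host == "http://local.com"
    finally:
        Config.asset_host = original  # runs even when the assert fails

test_asset_host_override()
print(Config.asset_host)  # http://global.com -- restored
```

In real Python test suites, `unittest.mock.patch.object(Config, "asset_host", ...)` packages this save/restore into a context manager.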
Text
Text
fix yaml syntax errors
ad012c9bbc2f273920be92f006166197cca46518
<ide><path>doc/api/buffer.md <ide> added: <ide> changes: <ide> - version: <ide> - v14.10.0 <del> = v12.19.0 <add> - v12.19.0 <ide> pr-url: https://github.com/nodejs/node/pull/34960 <ide> description: This function is also available as `buf.readBigUint64LE()`. <ide> --> <ide><path>doc/api/modules.md <ide> loading. <ide> ### `module.parent` <ide> <!-- YAML <ide> added: v0.1.16 <del>deprecated: v14.6.0 <ide> deprecated: <ide> - v12.19.0 <ide> - v14.6.0 <ide><path>doc/api/zlib.md <ide> These advanced options are available for controlling decompression: <ide> <!-- YAML <ide> added: v0.11.1 <ide> changes: <del> - version <add> - version: <ide> - v14.5.0 <ide> - v12.19.0 <ide> pr-url: https://github.com/nodejs/node/pull/33516
3
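All three YAML fixes above come down to the same rules: every mapping key needs a trailing colon (`- version` → `- version:`), list items need the `- ` prefix, and a key may appear only once per mapping (the duplicate `deprecated:` is dropped). A well-formed changelog block in the style these docs use — assembled here from the first hunk of the patch — looks like:

```yaml
changes:
  - version:
    - v14.10.0
    - v12.19.0
    pr-url: https://github.com/nodejs/node/pull/34960
    description: This function is also available as `buf.readBigUint64LE()`.
```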
Text
Text
add code examples to node test runner
6975dd14253e936509e2a68b293101fa234e53af
<ide><path>doc/api/test.md <ide> This function is used to write TAP diagnostics to the output. Any diagnostic <ide> information is included at the end of the test's results. This function does <ide> not return a value. <ide> <add>```js <add>test('top level test', (t) => { <add> t.diagnostic('A diagnostic message'); <add>}); <add>``` <add> <ide> ### `context.runOnly(shouldRunOnlyTests)` <ide> <ide> <!-- YAML <ide> have the `only` option set. Otherwise, all tests are run. If Node.js was not <ide> started with the [`--test-only`][] command-line option, this function is a <ide> no-op. <ide> <add>```js <add>test('top level test', (t) => { <add> // The test context can be set to run subtests with the 'only' option. <add> t.runOnly(true); <add> return Promise.all([ <add> t.test('this subtest is now skipped'), <add> t.test('this subtest is run', { only: true }), <add> ]); <add>}); <add>``` <add> <ide> ### `context.skip([message])` <ide> <ide> <!-- YAML <ide> This function causes the test's output to indicate the test as skipped. If <ide> not terminate execution of the test function. This function does not return a <ide> value. <ide> <add>```js <add>test('top level test', (t) => { <add> // Make sure to return here as well if the test contains additional logic. <add> t.skip('this is skipped'); <add>}); <add>``` <add> <ide> ### `context.todo([message])` <ide> <ide> <!-- YAML <ide> This function adds a `TODO` directive to the test's output. If `message` is <ide> provided, it is included in the TAP output. Calling `todo()` does not terminate <ide> execution of the test function. This function does not return a value. <ide> <add>```js <add>test('top level test', (t) => { <add> // This test is marked as `TODO` <add> t.todo('this is a todo'); <add>}); <add>``` <add> <ide> ### `context.test([name][, options][, fn])` <ide> <ide> <!-- YAML <ide> added: v18.0.0 <ide> This function is used to create subtests under the current test. 
This function <ide> behaves in the same fashion as the top level [`test()`][] function. <ide> <add>```js <add>test('top level test', async (t) => { <add> await t.test( <add> 'This is a subtest', <add> { only: false, skip: false, concurrency: 1, todo: false }, <add> (t) => { <add> assert.ok('some relevant assertion here'); <add> } <add> ); <add>}); <add>``` <add> <ide> [TAP]: https://testanything.org/ <ide> [`--test-only`]: cli.md#--test-only <ide> [`--test`]: cli.md#--test
1
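The `skip`/subtest semantics documented in the patch above are specific to Node's `node:test` runner. As a loose, purely illustrative analogue (the mapping is not one-to-one), Python's `unittest` expresses the same two ideas — independently reported subtests and skip directives that count in the run without failing it:

```python
import unittest

class TopLevelTest(unittest.TestCase):
    def test_subtests(self):
        # Rough analogue of t.test(): each subTest is reported separately.
        for value in (1, 2, 3):
            with self.subTest(value=value):
                self.assertGreater(value, 0)

    @unittest.skip("this is skipped")  # rough analogue of t.skip(message)
    def test_skipped(self):
        pass

# Run the suite programmatically and inspect the result object.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TopLevelTest)
result = unittest.TestResult()
suite.run(result)
```

As in the Node runner, skipping marks the test in the output rather than aborting the run: the skipped test still appears in the count of tests run, and the suite as a whole is still successful.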
Go
Go
lock container when deleting its root directory
18e322bc7c530d7b4393aca64e70dcad659621e3
<ide><path>daemon/delete.go <ide> func (daemon *Daemon) cleanupContainer(container *container.Container, config ty <ide> container.RWLayer = nil <ide> } <ide> <del> if err := containerfs.EnsureRemoveAll(container.Root); err != nil { <add> // Hold the container lock while deleting the container root directory <add> // so that other goroutines don't attempt to concurrently open files <add> // within it. Having any file open on Windows (without the <add> // FILE_SHARE_DELETE flag) will block it from being deleted. <add> container.Lock() <add> err := containerfs.EnsureRemoveAll(container.Root) <add> container.Unlock() <add> if err != nil { <ide> err = errors.Wrapf(err, "unable to remove filesystem for %s", container.ID) <ide> container.SetRemovalError(err) <ide> return err
1
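The fix above serializes directory removal with the container lock so that no other goroutine can open files inside the root mid-delete (which on Windows would block the deletion). A minimal sketch of the same lock-around-removal pattern, with hypothetical names:

```python
import os
import shutil
import tempfile
import threading

class Container:
    """Hypothetical stand-in for the daemon's container object."""

    def __init__(self, root):
        self.root = root
        self._lock = threading.Lock()  # stand-in for container.Lock()

    def cleanup(self):
        # Hold the lock while deleting the root directory so no other
        # thread can concurrently open files within it.
        with self._lock:
            shutil.rmtree(self.root)

# Exercise the sketch against a throwaway directory.
root = tempfile.mkdtemp()
open(os.path.join(root, "config.json"), "w").close()
container = Container(root)
container.cleanup()
```

The key design point mirrored from the patch is the scope of the lock: it covers the whole recursive removal, not just the decision to remove, so concurrent readers see either the intact tree or none of it.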
Javascript
Javascript
remove overriding removeobserver on bind views
86f5f87b06d4786f90c6e3c5bb5725f1b38654c0
<ide><path>packages/sproutcore-handlebars/lib/helpers/binding.js <ide> var get = SC.get, getPath = SC.getPath, set = SC.set, fmt = SC.String.fmt; <ide> // is an empty string, we are printing the current context <ide> // object ({{this}}) so updating it is not our responsibility. <ide> if (property !== '') { <del> set(bindView, 'removeObserver', function() { <del> SC.removeObserver(ctx, property, invoker); <del> }); <del> <ide> SC.addObserver(ctx, property, invoker); <ide> } <ide> } else {
1
PHP
PHP
route
902ecb9bc84a8b4632ed87e0cf5952aaf848a069
<ide><path>src/Illuminate/Support/helpers.php <ide> function public_path($path = '') <ide> /** <ide> * Generate a URL to a named route. <ide> * <del> * @param string $route <add> * @param string $name <ide> * @param array $parameters <add> * @param bool $absolute <add> * @param \Illuminate\Routing\Route $route <ide> * @return string <ide> */ <del> function route($route, $parameters = array()) <add> function route($name, $parameters = array(), $absolute = true, $route = null) <ide> { <del> return app('url')->route($route, $parameters); <add> return app('url')->route($name, $parameters, $absolute, $route); <ide> } <ide> } <ide>
1
Go
Go
add env and labels to log context
656cdbb0e96a1f8531b118caedd8e9b3d281c201
<ide><path>daemon/container.go <ide> func (container *Container) getLogger() (logger.Logger, error) { <ide> ContainerImageID: container.ImageID, <ide> ContainerImageName: container.Config.Image, <ide> ContainerCreated: container.Created, <add> ContainerEnv: container.Config.Env, <add> ContainerLabels: container.Config.Labels, <ide> } <ide> <ide> // Set logging file for "json-logger" <ide><path>daemon/logger/context.go <ide> type Context struct { <ide> ContainerImageID string <ide> ContainerImageName string <ide> ContainerCreated time.Time <add> ContainerEnv []string <add> ContainerLabels map[string]string <ide> LogPath string <ide> } <ide>
2
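The patch above threads two extra fields from the container's config into the context handed to log drivers. A hedged sketch of the same plumbing in Python (the field and function names are hypothetical, not Docker's API):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LogContext:
    container_id: str
    # The two fields added by the patch: environment and labels.
    container_env: List[str] = field(default_factory=list)
    container_labels: Dict[str, str] = field(default_factory=dict)

def get_logger_context(config):
    # Mirror of the change: copy env and labels from the container
    # config into the context passed to the log driver.
    return LogContext(
        container_id=config["id"],
        container_env=config.get("env", []),
        container_labels=config.get("labels", {}),
    )

ctx = get_logger_context(
    {"id": "abc123", "env": ["FOO=1"], "labels": {"team": "infra"}}
)
```

Carrying env and labels in the context lets a log driver tag every emitted record with container metadata without reaching back into the container object.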
Java
Java
fix error handling in jackson2jsondecoder
aa43472f2ea1f8aaffa65be917bc24603dc4d56c
<ide><path>spring-web/src/main/java/org/springframework/http/codec/json/Jackson2JsonDecoder.java <ide> private Flux<Object> decodeInternal(JsonObjectDecoder objectDecoder, Publisher<D <ide> return value; <ide> } <ide> catch (IOException ex) { <del> return Flux.error(new CodecException("Error while reading the data", ex)); <add> throw new CodecException("Error while reading the data", ex); <ide> } <ide> }); <ide> } <ide><path>spring-web/src/test/java/org/springframework/http/codec/json/Jackson2JsonDecoderTests.java <ide> import reactor.test.StepVerifier; <ide> <ide> import org.springframework.core.ResolvableType; <add>import org.springframework.core.codec.CodecException; <ide> import org.springframework.core.io.buffer.AbstractDataBufferAllocatingTestCase; <ide> import org.springframework.core.io.buffer.DataBuffer; <ide> import org.springframework.http.MediaType; <ide> public void decodePojo() throws Exception { <ide> .verify(); <ide> } <ide> <add> @Test <add> public void decodePojoWithError() throws Exception { <add> Flux<DataBuffer> source = Flux.just(stringBuffer("{\"foo\":}")); <add> ResolvableType elementType = ResolvableType.forClass(Pojo.class); <add> Flux<Object> flux = new Jackson2JsonDecoder().decode(source, elementType, null, <add> Collections.emptyMap()); <add> <add> StepVerifier.create(flux).verifyError(CodecException.class); <add> } <add> <ide> @Test <ide> public void decodeToList() throws Exception { <ide> Flux<DataBuffer> source = Flux.just(stringBuffer(
2
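The fix above replaces `return Flux.error(...)` inside a `map` callback — where the returned publisher would just be emitted as an ordinary mapped element — with `throw`, which the reactive pipeline converts into a proper error signal. A loose Python analogue using a generator as the mapping stage (names hypothetical):

```python
def decode_stream(chunks):
    """Mapping stage over a stream of raw chunks.

    Raising here propagates as an error to the consumer, whereas
    *returning* an error object would simply be yielded as a value --
    the same distinction the Jackson2JsonDecoder fix is about.
    """
    for chunk in chunks:
        if not chunk.startswith("{"):
            raise ValueError("Error while reading the data")
        yield chunk

# Well-formed input passes through unchanged.
ok = list(decode_stream(['{"a":1}', '{"b":2}']))

# Malformed input surfaces as an exception at the consumer, not as a
# strange in-band value.
try:
    list(decode_stream(['{"a":1}', 'oops']))
    failed = False
except ValueError:
    failed = True
```

The accompanying test in the patch checks exactly this observable behavior: a malformed document terminates the stream with a `CodecException` rather than emitting a bogus element.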
Javascript
Javascript
drop separate findpendingwork phase
b63cda6e852b55d0f4b7d0419da2068c8540bf34
<ide><path>src/renderers/shared/fiber/ReactChildFiber.js <ide> function ChildReconciler(shouldClone) { <ide> exports.reconcileChildFibers = ChildReconciler(true); <ide> <ide> exports.reconcileChildFibersInPlace = ChildReconciler(false); <add> <add> <add>function cloneSiblings(current : Fiber, workInProgress : Fiber, returnFiber : Fiber) { <add> workInProgress.return = returnFiber; <add> while (current.sibling) { <add> current = current.sibling; <add> workInProgress = workInProgress.sibling = cloneFiber( <add> current, <add> current.pendingWorkPriority <add> ); <add> workInProgress.return = returnFiber; <add> } <add> workInProgress.sibling = null; <add>} <add> <add>exports.cloneChildFibers = function(workInProgress : Fiber) { <add> if (!workInProgress.child) { <add> return; <add> } <add> const current = workInProgress.alternate; <add> if (!current || workInProgress.child !== current.child) { <add> // If there is no alternate, then we don't need to clone the children. <add> // If the children of the alternate fiber is a different set, then we don't <add> // need to clone. We need to reset the return fiber though since we'll <add> // traverse down into them. <add> // TODO: I don't think it is actually possible for them to be anything but <add> // equal at this point because this fiber was just cloned. Can we skip this <add> // check? Similar question about the return fiber. <add> let child = workInProgress.child; <add> while (child) { <add> child.return = workInProgress; <add> child = child.sibling; <add> } <add> return; <add> } <add> // TODO: This used to reset the pending priority. Not sure if that is needed. <add> // workInProgress.pendingWorkPriority = current.pendingWorkPriority; <add> <add> // TODO: The below priority used to be set to NoWork which would've <add> // dropped work. This is currently unobservable but will become <add> // observable when the first sibling has lower priority work remaining <add> // than the next sibling. 
At that point we should add tests that catches <add> // this. <add> <add> const currentChild = current.child; <add> if (!currentChild) { <add> return; <add> } <add> workInProgress.child = cloneFiber( <add> currentChild, <add> currentChild.pendingWorkPriority <add> ); <add> cloneSiblings(currentChild, workInProgress.child, workInProgress); <add>} <ide><path>src/renderers/shared/fiber/ReactFiber.js <ide> exports.cloneFiber = function(fiber : Fiber, priorityLevel : PriorityLevel) : Fi <ide> alt.child = fiber.child; <ide> alt.childInProgress = fiber.childInProgress; <ide> alt.sibling = fiber.sibling; <del> alt.ref = alt.ref; <add> alt.ref = fiber.ref; <ide> alt.pendingProps = fiber.pendingProps; <ide> alt.pendingWorkPriority = priorityLevel; <ide> <ide> exports.cloneFiber = function(fiber : Fiber, priorityLevel : PriorityLevel) : Fi <ide> alt.child = fiber.child; <ide> alt.childInProgress = fiber.childInProgress; <ide> alt.sibling = fiber.sibling; <del> alt.ref = alt.ref; <add> alt.ref = fiber.ref; <ide> // pendingProps is here for symmetry but is unnecessary in practice for now. 
<ide> alt.pendingProps = fiber.pendingProps; <ide> alt.pendingWorkPriority = priorityLevel; <ide><path>src/renderers/shared/fiber/ReactFiberBeginWork.js <ide> import type { HostConfig } from 'ReactFiberReconciler'; <ide> var { <ide> reconcileChildFibers, <ide> reconcileChildFibersInPlace, <add> cloneChildFibers, <ide> } = require('ReactChildFiber'); <ide> var ReactTypeOfWork = require('ReactTypeOfWork'); <ide> var { <ide> var { <ide> NoWork, <ide> OffscreenPriority, <ide> } = require('ReactPriorityLevel'); <del>var { findNextUnitOfWorkAtPriority } = require('ReactFiberPendingWork'); <ide> <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> priorityLevel <ide> ); <ide> } else { <add> // TODO: <ide> workInProgress.childInProgress = reconcileChildFibers( <ide> workInProgress, <del> current ? current.child : null, <add> current ? current.child : workInProgress.child, <ide> nextChildren, <ide> priorityLevel <ide> ); <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> function updateFunctionalComponent(current, workInProgress) { <ide> var fn = workInProgress.type; <ide> var props = workInProgress.pendingProps; <add> <ide> var nextChildren = fn(props); <ide> reconcileChildren(current, workInProgress, nextChildren); <add> return workInProgress.childInProgress; <ide> } <ide> <ide> function updateClassComponent(current : ?Fiber, workInProgress : Fiber) { <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } <ide> } <ide> } <add> <ide> instance.props = props; <ide> var nextChildren = instance.render(); <ide> reconcileChildren(current, workInProgress, nextChildren); <add> <ide> return workInProgress.childInProgress; <ide> } <ide> <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> // becomes part of the render tree, even though it never completed. 
Its <ide> // `output` property is unpredictable because of it. <ide> reconcileChildrenAtPriority(current, workInProgress, nextChildren, OffscreenPriority); <add> workInProgress.child = current ? current.child : null; <add> let child = workInProgress.childInProgress; <add> while (child) { <add> const currentChild = child.alternate; <add> if (currentChild) { <add> child.child = currentChild.child; <add> child.childInProgress = currentChild.childInProgress; <add> child.memoizedProps = currentChild.memoizedProps; <add> child.output = currentChild.output; <add> } <add> child.nextEffect = null; <add> child.firstEffect = null; <add> child.lastEffect = null; <add> <add> child = child.sibling; <add> } <ide> return null; <ide> } else { <ide> reconcileChildren(current, workInProgress, nextChildren); <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } <ide> } <ide> reconcileChildren(current, workInProgress, value); <add> return workInProgress.childInProgress; <ide> } <ide> <ide> function updateCoroutineComponent(current, workInProgress) { <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> // Update the returnFiber of the child to the newest fiber. <ide> child.return = returnFiber; <ide> // Retain the priority if there's any work left to do in the children. 
<del> if (child.pendingWorkPriority !== NoWork && <add> /*if (child.pendingWorkPriority !== NoWork && <ide> (returnFiber.pendingWorkPriority === NoWork || <ide> returnFiber.pendingWorkPriority > child.pendingWorkPriority)) { <ide> returnFiber.pendingWorkPriority = child.pendingWorkPriority; <add> }*/ <add> if (!child.pendingProps && !child.memoizedProps) { <add> throw new Error('Should have memoized props by now'); <ide> } <ide> } while (child = child.sibling); <ide> } <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } while (child = child.sibling); <ide> } <ide> <add>/* <ide> function bailoutOnCurrent(current : Fiber, workInProgress : Fiber) : ?Fiber { <ide> // The most likely scenario is that the previous copy of the tree contains <ide> // the same props as the new one. In that case, we can just copy the output <ide> // and children from that node. <ide> workInProgress.memoizedProps = workInProgress.pendingProps; <ide> workInProgress.output = current.output; <ide> const priorityLevel = workInProgress.pendingWorkPriority; <del> workInProgress.pendingProps = null; <add> // workInProgress.pendingProps = null; <ide> workInProgress.stateNode = current.stateNode; <del> workInProgress.childInProgress = current.childInProgress; <del> if (current.child) { <del> // If we bail out but still has work with the current priority in this <del> // subtree, we need to go find it right now. If we don't, we won't flush <del> // it until the next tick. 
<del> workInProgress.child = current.child; <del> reuseChildren(workInProgress, workInProgress.child); <del> if (workInProgress.pendingWorkPriority !== NoWork && workInProgress.pendingWorkPriority <= priorityLevel) { <del> return findNextUnitOfWorkAtPriority( <del> workInProgress, <del> workInProgress.pendingWorkPriority <del> ); <del> } else { <del> return null; <del> } <del> } else { <del> workInProgress.child = null; <del> return null; <del> } <add> <add> workInProgress.nextEffect = null; <add> workInProgress.firstEffect = null; <add> workInProgress.lastEffect = null; <add> <add> workInProgress.childInProgress = null; // current.childInProgress; <add> workInProgress.child = current.child; <add> <add> cloneChildFibers(workInProgress); <add> <add> // TODO: Maybe bailout with null if the children priority flag indicate <add> // that there is no nested work. <add> return workInProgress.child; <ide> } <add>*/ <ide> <ide> function bailoutOnAlreadyFinishedWork(current, workInProgress : Fiber) : ?Fiber { <ide> // If we started this work before, and finished it, or if we're in a <ide> // ping-pong update scenario, this version could already be what we're <ide> // looking for. In that case, we should be able to just bail out. <ide> const priorityLevel = workInProgress.pendingWorkPriority; <del> workInProgress.pendingProps = null; <add> // workInProgress.pendingProps = null; <ide> <ide> workInProgress.firstEffect = null; <ide> workInProgress.nextEffect = null; <ide> workInProgress.lastEffect = null; <ide> <ide> const child = workInProgress.child; <add> if (workInProgress.childInProgress) { <add> throw new Error('Child in progress means we cannot bail here.'); <add> } <ide> if (child) { <ide> // Ensure that the effects of reused work are preserved. <ide> reuseChildrenEffects(workInProgress, child); <ide> // If we bail out but still has work with the current priority in this <ide> // subtree, we need to go find it right now. 
If we don't, we won't flush <ide> // it until the next tick. <ide> reuseChildren(workInProgress, child); <del> if (workInProgress.pendingWorkPriority !== NoWork && <del> workInProgress.pendingWorkPriority <= priorityLevel) { <del> // TODO: This passes the current node and reads the priority level and <del> // pending props from that. We want it to read our priority level and <del> // pending props from the work in progress. Needs restructuring. <del> return findNextUnitOfWorkAtPriority(workInProgress, priorityLevel); <del> } <add> // TODO: Maybe bailout with null if the children priority flag indicate <add> // that there is no nested work. <add> return workInProgress.child; <ide> } <ide> return null; <ide> } <ide> <del> function beginWork(current : ?Fiber, workInProgress : Fiber) : ?Fiber { <add> function beginWork(current : ?Fiber, workInProgress : Fiber, priorityLevel) : ?Fiber { <add> if (!workInProgress.pendingProps) { <add> throw new Error('should have pending props here'); <add> } <add> <add> if (workInProgress.pendingWorkPriority === NoWork || <add> workInProgress.pendingWorkPriority > priorityLevel) { <add> <add> if (current) { <add> workInProgress.child = current.child; <add> workInProgress.childInProgress = current.childInProgress; <add> workInProgress.memoizedProps = current.memoizedProps; <add> workInProgress.output = current.output; <add> } <add> <add> return null; <add> } <add> <ide> // The current, flushed, state of this fiber is the alternate. 
<ide> // Ideally nothing should rely on this, but relying on it here <ide> // means that we don't need an additional field on the work in <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> <ide> switch (workInProgress.tag) { <ide> case IndeterminateComponent: <del> mountIndeterminateComponent(current, workInProgress); <del> return workInProgress.childInProgress; <add> return mountIndeterminateComponent(current, workInProgress); <ide> case FunctionalComponent: <del> updateFunctionalComponent(current, workInProgress); <del> return workInProgress.childInProgress; <add> return updateFunctionalComponent(current, workInProgress); <ide> case ClassComponent: <ide> return updateClassComponent(current, workInProgress); <ide> case HostContainer: <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } <ide> return null; <ide> case HostComponent: <add> if (workInProgress.stateNode && config.beginUpdate) { <add> config.beginUpdate(workInProgress.stateNode); <add> } <ide> return updateHostComponent(current, workInProgress); <ide> case CoroutineHandlerPhase: <ide> // This is a restart. Reset the tag to the initial phase. <ide><path>src/renderers/shared/fiber/ReactFiberPendingWork.js <del>/** <del> * Copyright 2013-present, Facebook, Inc. <del> * All rights reserved. <del> * <del> * This source code is licensed under the BSD-style license found in the <del> * LICENSE file in the root directory of this source tree. An additional grant <del> * of patent rights can be found in the PATENTS file in the same directory. 
<del> * <del> * @providesModule ReactFiberPendingWork <del> * @flow <del> */ <del> <del>'use strict'; <del> <del>import type { Fiber } from 'ReactFiber'; <del>import type { PriorityLevel } from 'ReactPriorityLevel'; <del> <del>var { cloneFiber } = require('ReactFiber'); <del> <del>var { <del> NoWork, <del>} = require('ReactPriorityLevel'); <del> <del>function cloneSiblings(current : Fiber, workInProgress : Fiber, returnFiber : Fiber) { <del> workInProgress.return = returnFiber; <del> while (current.sibling) { <del> current = current.sibling; <del> workInProgress = workInProgress.sibling = cloneFiber( <del> current, <del> current.pendingWorkPriority <del> ); <del> workInProgress.return = returnFiber; <del> } <del> workInProgress.sibling = null; <del>} <del> <del>function cloneChildrenIfNeeded(workInProgress : Fiber) { <del> const current = workInProgress.alternate; <del> if (!current || workInProgress.child !== current.child) { <del> // If there is no alternate, then we don't need to clone the children. <del> // If the children of the alternate fiber is a different set, then we don't <del> // need to clone. We need to reset the return fiber though since we'll <del> // traverse down into them. <del> // TODO: I don't think it is actually possible for them to be anything but <del> // equal at this point because this fiber was just cloned. Can we skip this <del> // check? Similar question about the return fiber. <del> let child = workInProgress.child; <del> while (child) { <del> child.return = workInProgress; <del> child = child.sibling; <del> } <del> return; <del> } <del> // TODO: This used to reset the pending priority. Not sure if that is needed. <del> // workInProgress.pendingWorkPriority = current.pendingWorkPriority; <del> <del> // TODO: The below priority used to be set to NoWork which would've <del> // dropped work. 
This is currently unobservable but will become <del> // observable when the first sibling has lower priority work remaining <del> // than the next sibling. At that point we should add tests that catches <del> // this. <del> <del> const currentChild = current.child; <del> if (!currentChild) { <del> return; <del> } <del> workInProgress.child = cloneFiber( <del> currentChild, <del> currentChild.pendingWorkPriority <del> ); <del> cloneSiblings(currentChild, workInProgress.child, workInProgress); <del>} <del> <del>exports.findNextUnitOfWorkAtPriority = function(workRoot : Fiber, priorityLevel : PriorityLevel) : ?Fiber { <del> let workInProgress = workRoot; <del> while (workInProgress) { <del> if (workInProgress.pendingWorkPriority !== NoWork && <del> workInProgress.pendingWorkPriority <= priorityLevel) { <del> // This node has work to do that fits our priority level criteria. <del> if (workInProgress.pendingProps !== null) { <del> return workInProgress; <del> } <del> <del> // If we have a child let's see if any of our children has work to do. <del> // Only bother doing this at all if the current priority level matches <del> // because it is the highest priority for the whole subtree. <del> // TODO: Coroutines can have work in their stateNode which is another <del> // type of child that needs to be searched for work. <del> if (workInProgress.childInProgress) { <del> let child = workInProgress.childInProgress; <del> while (child) { <del> child.return = workInProgress; <del> child = child.sibling; <del> } <del> child = workInProgress.childInProgress; <del> while (child) { <del> // Don't bother drilling further down this tree if there is no child <del> // with more content. <del> // TODO: Shouldn't this still drill down even though the first <del> // shallow level doesn't have anything pending on it. 
<del> if (child.pendingWorkPriority !== NoWork && <del> child.pendingWorkPriority <= priorityLevel && <del> child.pendingProps !== null) { <del> return child; <del> } <del> child = child.sibling; <del> } <del> } else if (workInProgress.child) { <del> cloneChildrenIfNeeded(workInProgress); <del> workInProgress = workInProgress.child; <del> continue; <del> } <del> // If we match the priority but has no child and no work to do, <del> // then we can safely reset the flag. <del> workInProgress.pendingWorkPriority = NoWork; <del> } <del> if (workInProgress === workRoot) { <del> if (workInProgress.pendingWorkPriority <= priorityLevel) { <del> // If this subtree had work left to do, we would have returned it by <del> // now. This could happen if a child with pending work gets cleaned up <del> // but we don't clear the flag then. It is safe to reset it now. <del> workInProgress.pendingWorkPriority = NoWork; <del> } <del> return null; <del> } <del> while (!workInProgress.sibling) { <del> workInProgress = workInProgress.return; <del> if (!workInProgress || workInProgress === workRoot) { <del> return null; <del> } <del> if (workInProgress.pendingWorkPriority <= priorityLevel) { <del> // If this subtree had work left to do, we would have returned it by <del> // now. This could happen if a child with pending work gets cleaned up <del> // but we don't clear the flag then. It is safe to reset it now. 
<del> workInProgress.pendingWorkPriority = NoWork; <del> } <del> } <del> workInProgress.sibling.return = workInProgress.return; <del> workInProgress = workInProgress.sibling; <del> } <del> return null; <del>}; <ide><path>src/renderers/shared/fiber/ReactFiberScheduler.js <ide> var ReactFiberCompleteWork = require('ReactFiberCompleteWork'); <ide> var ReactFiberCommitWork = require('ReactFiberCommitWork'); <ide> <ide> var { cloneFiber } = require('ReactFiber'); <del>var { findNextUnitOfWorkAtPriority } = require('ReactFiberPendingWork'); <ide> <ide> var { <ide> NoWork, <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> // TODO: This is scanning one root at a time. It should be scanning all <ide> // roots for high priority work before moving on to lower priorities. <ide> let root = nextScheduledRoot; <add> let highestPriorityRoot = null; <add> let highestPriorityLevel = NoWork; <ide> while (root) { <del> let rootInProgress = cloneFiber( <del> root.current, <del> root.current.pendingWorkPriority <del> ); <del> // Find the highest possible priority work to do. <del> // This loop is unrolled just to satisfy Flow's enum constraint. <del> // We could make arbitrary many idle priority levels but having <del> // too many just means flushing changes too often. 
<del> let work = findNextUnitOfWorkAtPriority(rootInProgress, HighPriority); <del> if (work) { <del> nextPriorityLevel = HighPriority; <del> return work; <del> } <del> work = findNextUnitOfWorkAtPriority(rootInProgress, LowPriority); <del> if (work) { <del> nextPriorityLevel = LowPriority; <del> return work; <del> } <del> work = findNextUnitOfWorkAtPriority(rootInProgress, OffscreenPriority); <del> if (work) { <del> nextPriorityLevel = OffscreenPriority; <del> return work; <add> if (highestPriorityLevel === NoWork || <add> highestPriorityLevel > root.current.pendingWorkPriority) { <add> highestPriorityLevel = root.current.pendingWorkPriority; <add> highestPriorityRoot = root; <ide> } <ide> // We didn't find anything to do in this root, so let's try the next one. <ide> root = root.nextScheduledRoot; <ide> } <del> root = nextScheduledRoot; <del> while (root) { <del> root = root.nextScheduledRoot; <add> if (highestPriorityRoot) { <add> nextPriorityLevel = highestPriorityLevel; <add> return cloneFiber( <add> highestPriorityRoot.current, <add> highestPriorityLevel <add> ); <ide> } <ide> <ide> nextPriorityLevel = NoWork; <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> <ide> function resetWorkPriority(workInProgress : Fiber) { <ide> let newPriority = NoWork; <del> let child = workInProgress.childInProgress || workInProgress.child; <add> let child = workInProgress.child; <ide> while (child) { <ide> // Ensure that remaining work priority bubbles up. <ide> if (child.pendingWorkPriority !== NoWork && <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> // means that we don't need an additional field on the work in <ide> // progress. <ide> const current = workInProgress.alternate; <del> const next = completeWork(current, workInProgress); <add> let next = null; <ide> <del> resetWorkPriority(workInProgress); <add> // If this bailed at a lower priority. 
<add> // TODO: This branch is currently needed if a particular type of component <add> // ends up being a priority lowering. We should probably know that already <add> // before entering begin work. <add> if (workInProgress.pendingWorkPriority === NoWork || <add> workInProgress.pendingWorkPriority > nextPriorityLevel) { <add> // This fiber was ignored. We need to fall through to the next fiber <add> // and leave the pending props and work untouched on this fiber. <add> } else { <add> next = completeWork(current, workInProgress); <ide> <del> // The work is now done. We don't need this anymore. This flags <del> // to the system not to redo any work here. <del> workInProgress.pendingProps = null; <add> resetWorkPriority(workInProgress); <add> <add> // The work is now done. We don't need this anymore. This flags <add> // to the system not to redo any work here. <add> workInProgress.pendingProps = null; <add> } <ide> <ide> const returnFiber = workInProgress.return; <ide> <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } else if (returnFiber) { <ide> // If there's no more work in this returnFiber. Complete the returnFiber. <ide> workInProgress = returnFiber; <del> // If we're stepping up through the child, that means we can now commit <del> // this work. We should only do this when we're stepping upwards because <del> // completing a downprioritized item is not the same as completing its <del> // children. <del> if (workInProgress.childInProgress) { <del> workInProgress.child = workInProgress.childInProgress; <del> workInProgress.childInProgress = null; <del> } <ide> continue; <ide> } else { <ide> // If we're at the root, there's no more work to do. We can flush it. <ide> module.exports = function<T, P, I, C>(config : HostConfig<T, P, I, C>) { <ide> } <ide> <ide> function performUnitOfWork(workInProgress : Fiber) : ?Fiber { <del> // Ignore work if there is nothing to do. 
<del> if (workInProgress.pendingProps === null) { <del> return completeUnitOfWork(workInProgress); <del> } <ide> // The current, flushed, state of this fiber is the alternate. <ide> // Ideally nothing should rely on this, but relying on it here <ide> // means that we don't need an additional field on the work in <ide> // progress. <ide> const current = workInProgress.alternate; <del> const next = beginWork(current, workInProgress); <add> const next = beginWork(current, workInProgress, nextPriorityLevel); <add> <ide> if (next) { <ide> // If this spawns new work, do that next. <ide> return next; <ide><path>src/renderers/shared/fiber/__tests__/ReactIncremental-test.js <ide> describe('ReactIncremental', () => { <ide> <ide> // Init <ide> ReactNoop.render(<Foo text="foo" text2="foo" step={0} />); <del> ReactNoop.flushLowPri(55); <add> ReactNoop.flushLowPri(55 + 25); <ide> <ide> // We only finish the higher priority work. So the low pri content <ide> // has not yet finished mounting. <ide> describe('ReactIncremental', () => { <ide> // Make a quick update which will schedule low priority work to <ide> // update the middle content. <ide> ReactNoop.render(<Foo text="bar" text2="bar" step={1} />); <del> ReactNoop.flushLowPri(30); <add> ReactNoop.flushLowPri(30 + 25); <ide> <ide> expect(ops).toEqual(['Foo', 'Bar']); <ide> <ide> describe('ReactIncremental', () => { <ide> ops = []; <ide> <ide> // The middle content is now pending rendering... <del> ReactNoop.flushLowPri(30); <add> ReactNoop.flushLowPri(30 + 25); <ide> expect(ops).toEqual(['Content', 'Middle', 'Bar']); // One more Middle left. <ide> <ide> ops = []; <ide><path>src/renderers/shared/fiber/__tests__/ReactIncrementalSideEffects-test.js <ide> describe('ReactIncrementalSideEffects', () => { <ide> // render some higher priority work. The middle content will bailout so <ide> // it remains untouched which means that it should reuse it next time. 
<ide> ReactNoop.render(<Foo text="foo" step={1} />); <del> ReactNoop.flush(30); <add> ReactNoop.flush(); <ide> <ide> // Since we did nothing to the middle subtree during the interuption, <ide> // we should be able to reuse the reconciliation work that we already did <ide> describe('ReactIncrementalSideEffects', () => { <ide> ]); <ide> }); <ide> <add> <ide> // TODO: Test that side-effects are not cut off when a work in progress node <ide> // moves to "current" without flushing due to having lower priority. Does this <ide> // even happen? Maybe a child doesn't get processed because it is lower prio?
7
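The `cloneChildFibers`/`cloneSiblings` pair introduced in the patch above walks a sibling-linked child list, cloning each node and repointing its return pointer at the work-in-progress parent. A stripped-down sketch of that traversal on a hypothetical minimal fiber (only the fields the cloning logic touches):

```python
class Fiber:
    """Hypothetical minimal fiber: child/sibling links plus an alternate."""

    def __init__(self, name):
        self.name = name
        self.child = None
        self.sibling = None
        self.return_fiber = None
        self.alternate = None

def clone_fiber(fiber):
    # Create (or reuse, in the real code) the alternate and copy links.
    alt = Fiber(fiber.name)
    alt.alternate = fiber
    fiber.alternate = alt
    alt.child = fiber.child
    alt.sibling = fiber.sibling
    return alt

def clone_child_fibers(work_in_progress):
    # Mirrors cloneChildFibers: clone the first child, then walk the
    # sibling chain, cloning each node and repointing return_fiber at
    # the work-in-progress parent.
    current_child = work_in_progress.child
    if current_child is None:
        return
    wip_child = clone_fiber(current_child)
    work_in_progress.child = wip_child
    wip_child.return_fiber = work_in_progress
    while current_child.sibling is not None:
        current_child = current_child.sibling
        wip_child.sibling = clone_fiber(current_child)
        wip_child = wip_child.sibling
        wip_child.return_fiber = work_in_progress
    wip_child.sibling = None

# Build parent -> (a, b), clone the parent, then clone its children.
parent = Fiber("parent")
a, b = Fiber("a"), Fiber("b")
parent.child = a
a.sibling = b
wip = clone_fiber(parent)
clone_child_fibers(wip)
```

After cloning, the work-in-progress tree mirrors the current tree node-for-node via `alternate` links, while every cloned child's return pointer already targets the new parent — the invariant the rest of the scheduler relies on when it traverses downward.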
Javascript
Javascript
fix misuse of setproperties
37fbe405e2bb5fe7b10a96713148771edaff1919
<ide><path>packages/ember-views/lib/system/render_buffer.js <ide> Ember._RenderBuffer.prototype = <ide> var buffer = new Ember._RenderBuffer(tagName); <ide> buffer.parentBuffer = parent; <ide> <del> if (other) { buffer.setProperties(other); } <add> if (other) { Ember.$.extend(buffer, other); } <ide> if (fn) { fn.call(this, buffer); } <ide> <ide> return buffer;
1
Python
Python
add full exceptions with spaces
bced6309e5c9e8d4f0bc006f2a20b0230e2f289f
<ide><path>spacy/lang/es/tokenizer_exceptions.py <ide> "Dra.", <ide> "EE.UU.", <ide> "Ee.Uu.", <del> "UU.", # For "EE. UU." <del> "Uu.", # For "Ee. Uu." <add> "EE. UU.", <add> "Ee. Uu.", <ide> "etc.", <ide> "fig.", <ide> "Gob.",
1
Ruby
Ruby
add path check
a37d53aa890e7cf0e21fa1b71ebf74c009efce21
<ide><path>Library/Homebrew/brew_doctor.rb <ide> def check_homebrew_prefix <ide> end <ide> end <ide> <add>def check_user_path <add> seen_prefix_bin = false <add> seen_prefix_sbin = false <add> seen_usr_bin = false <add> <add> paths = ENV['PATH'].split(":") <add> <add> paths.each do |p| <add> if p == '/usr/bin' <add> seen_usr_bin = true <add> unless seen_prefix_bin <add> puts <<-EOS.undent <add> /usr/bin is in your PATH before Homebrew's bin. This means that system- <add> provided programs will be used before Homebrew-provided ones. This is an <add> issue if you install, for instance, Python. <add> Consider editing your .bashrc to put: <add> #{HOMEBREW_PREFIX}/bin <add> ahead of /usr/bin. <add> <add> EOS <add> end <add> end <add> <add> seen_prefix_bin = true if p == "#{HOMEBREW_PREFIX}/bin" <add> seen_prefix_sbin = true if p == "#{HOMEBREW_PREFIX}/sbin" <add> end <add> <add> unless seen_prefix_sbin <add> puts <<-EOS.undent <add> Some brews install binaries to sbin instead of bin, but Homebrew's <add> sbin was not found in your path. <add> Consider editing your .bashrc to add sbin to PATH: <add> #{HOMEBREW_PREFIX}/sbin <add> <add> EOS <add> end <add>end <add> <ide> def brew_doctor <ide> read, write = IO.pipe <ide> <ide> def brew_doctor <ide> check_for_other_package_managers <ide> check_for_x11 <ide> check_share_locale <add> check_user_path <ide> <ide> exit! 0 <ide> else
1
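The `check_user_path` routine added above walks `PATH` in order, warning when `/usr/bin` appears before Homebrew's `bin` and when the prefix's `sbin` is absent. A Python sketch of the same logic (a simplification, not Homebrew's code — it returns warning strings instead of printing):

```python
import os

def check_user_path(path=None, prefix="/usr/local"):
    """Return warnings if /usr/bin shadows the prefix's bin, or if the
    prefix's sbin is missing from PATH entirely."""
    warnings = []
    entries = (path if path is not None else os.environ.get("PATH", "")).split(":")
    seen_prefix_bin = False
    for p in entries:
        # /usr/bin seen before the prefix bin: system programs win.
        if p == "/usr/bin" and not seen_prefix_bin:
            warnings.append(f"/usr/bin comes before {prefix}/bin in PATH")
        if p == f"{prefix}/bin":
            seen_prefix_bin = True
    if f"{prefix}/sbin" not in entries:
        warnings.append(f"{prefix}/sbin is not in PATH")
    return warnings
```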
Text
Text
fix detaching from attached container
1d2a1598c54d9ca8d3d02ec5e7b0231a8cf153d3
<ide><path>docs/sources/reference/commandline/cli.md <ide> container at the same time - screen sharing style, or quickly view the <ide> progress of your daemonized process. <ide> <ide> You can detach from the container again (and leave it running) with <del>`CTRL-C` (for a quiet exit) or `CTRL-\` <del>to get a stacktrace of the Docker client when it quits. When <del>you detach from the container's process the exit code will be returned <del>to the client. <add>`CTRL-p CTRL-q` (for a quiet exit), or `CTRL-c` which will send a <add>SIGKILL to the container, or `CTRL-\` to get a stacktrace of the <add>Docker client when it quits. When you detach from the container's <add>process the exit code will be returned to the client. <ide> <ide> To stop a container, use `docker stop`. <ide>
1
Go
Go
use a const for ".docker" string
565712014f3dfe01b0b5012ab46a7e26217538f2
<ide><path>cliconfig/config.go <ide> import ( <ide> const ( <ide> // ConfigFileName is the name of config file <ide> ConfigFileName = "config.json" <add> configFileDir = ".docker" <ide> oldConfigfile = ".dockercfg" <ide> <ide> // This constant is only used for really old config files when the <ide> var ( <ide> <ide> func init() { <ide> if configDir == "" { <del> configDir = filepath.Join(homedir.Get(), ".docker") <add> configDir = filepath.Join(homedir.Get(), configFileDir) <ide> } <ide> } <ide> <ide><path>image/v1/imagev1.go <ide> func rawJSON(value interface{}) *json.RawMessage { <ide> // ValidateID checks whether an ID string is a valid image ID. <ide> func ValidateID(id string) error { <ide> if ok := validHex.MatchString(id); !ok { <del> return fmt.Errorf("image ID '%s' is invalid ", id) <add> return fmt.Errorf("image ID %q is invalid", id) <ide> } <ide> return nil <ide> }
2
Javascript
Javascript
fix jquery.queue leaks empty queues
80af46e8ffe8292e0af0537db6c7e89019e5edba
<ide><path>src/queue.js <ide> jQuery.extend({ <ide> jQuery.dequeue(elem, type); <ide> }); <ide> } <add> <add> if ( !queue.length ) { <add> jQuery.removeData( elem, type + "queue", true ); <add> } <ide> } <ide> }); <ide>
1
Mixed
Javascript
remove scope $destroy event
ac5151a469667b1cc1b5e2f96d330b71631efd0b
<ide><path>CHANGELOG.md <ide> behavior and migrate your controllers one at a time: <https://gist.github.com/16 <ide> - before: `scope.$watch('expression', function(scope, newVal, oldVal) {})` <ide> - after: `scope.$watch('expression', function(newVal, oldVal, scope) {}, true)` <ide> <add>- `scope.$destroy` doesn't cause the `$destroy` event to be emitted any more - this event was <add> primarily used by the old forms implementation and is not needed any more. We are considering <add> broadcasting this event in the future, which could then be used by directives and child scopes to <add> be notified of their scope destruction. <add> <ide> <ide> ## New directives: <ide> <ide><path>src/service/scope.js <ide> function $RootScopeProvider(){ <ide> * scope and its children. Removal also implies that the current scope is eligible for garbage <ide> * collection. <ide> * <del> * The destructing scope emits an `$destroy` {@link angular.module.ng.$rootScope.Scope#$emit event}. <del> * <ide> * The `$destroy()` is usually used by directives such as <ide> * {@link angular.module.ng.$compileProvider.directive.ng-repeat ng-repeat} for managing the unrolling of the loop. 
<ide> * <ide> */ <ide> $destroy: function() { <ide> if (this.$root == this) return; // we can't remove the root node; <del> this.$emit('$destroy'); <ide> var parent = this.$parent; <ide> <ide> if (parent.$$childHead == this) parent.$$childHead = this.$$nextSibling; <ide><path>test/directive/ngViewSpec.js <ide> describe('ng-view', function() { <ide> var createCtrl = function(name) { <ide> return function($scope) { <ide> log.push('init-' + name); <del> $scope.$on('$destroy', function() { <add> var destroy = $scope.$destroy; <add> $scope.$destroy = function() { <ide> log.push('destroy-' + name); <del> }); <add> destroy.call($scope); <add> } <ide> }; <ide> }; <ide> <ide> describe('ng-view', function() { <ide> function createController(name) { <ide> return function($scope) { <ide> log.push('init-' + name); <del> $scope.$on('$destroy', logger('destroy-' + name)); <add> var destroy = $scope.$destroy; <add> $scope.$destroy = function() { <add> log.push('destroy-' + name); <add> destroy.call($scope); <add> } <ide> $scope.$on('$routeUpdate', logger('route-update')); <ide> }; <ide> } <ide><path>test/service/compilerSpec.js <ide> describe('$compile', function() { <ide> expect(widgetScope.$parent).toEqual($rootScope); <ide> expect(transcludeScope.$parent).toEqual($rootScope); <ide> <del> var removed = 0; <del> $rootScope.$on('$destroy', function() { removed++; }); <ide> $rootScope.select = false; <ide> $rootScope.$apply(); <ide> expect(element.text()).toEqual('Hello: Misko!'); <del> expect(removed).toEqual(1); <ide> expect(widgetScope.$$nextSibling).toEqual(null); <ide> }); <ide> }); <ide><path>test/service/scopeSpec.js <ide> describe('Scope', function() { <ide> $rootScope.$digest(); <ide> expect(log).toEqual('12'); <ide> })); <del> <del> it('should fire a $destroy event', inject(function($rootScope) { <del> var destructedScopes = []; <del> middle.$on('$destroy', function(event) { <del> destructedScopes.push(event.currentScope); <del> }); <del> middle.$destroy(); <del> 
expect(destructedScopes).toEqual([middle]); <del> })); <del> <ide> }); <ide> <ide>
5
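A minimal Python sketch of the contract after this change (illustrative, not Angular's code): destroying a scope only detaches it from its parent, with no `$destroy` event emitted — which is why the updated tests wrap the destroy method itself to observe teardown.

```python
class Scope:
    """Toy scope tree; destroy() detaches silently, emitting nothing."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def destroy(self):
        if self.parent is None:  # the root scope can't be removed
            return
        self.parent.children.remove(self)
        self.parent = None
```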
PHP
PHP
avoid use of compact()
67d44451b690700aa525df9ac4512c47e151a090
<ide><path>src/ORM/Behavior/Translate/ShadowTableStrategy.php <ide> public function beforeSave(Event $event, EntityInterface $entity, ArrayObject $o <ide> return; <ide> } <ide> <del> $where = compact('id', 'locale'); <add> $where = ['id' => $id, 'locale' => $locale]; <ide> <ide> $translation = $this->translationTable->find() <ide> ->select(array_merge(['id', 'locale'], $fields))
1
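The change replaces PHP's `compact('id', 'locale')` with an explicit array literal. A rough Python analogue of the trade-off (`locals()` standing in for `compact()`): pulling values out of the local scope by name hides the data flow from static analysis, while the explicit mapping keeps it visible.

```python
def build_where_explicit(id, locale):
    # Explicit mapping: keys and source variables are visible to tooling.
    return {"id": id, "locale": locale}

def build_where_compact(id, locale):
    # Rough equivalent of PHP's compact('id', 'locale'): values are
    # looked up in the local scope by string name.
    scope = locals()
    return {name: scope[name] for name in ("id", "locale")}
```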
Javascript
Javascript
check ipv6 support before testing it
c7b42fe2e5342c903cf36351d1d13b9d43261b70
<ide><path>test/common.js <ide> var path = require('path'); <ide> var fs = require('fs'); <ide> var assert = require('assert'); <add>var os = require('os'); <ide> <ide> exports.testDir = path.dirname(__filename); <ide> exports.fixturesDir = path.join(exports.testDir, 'fixtures'); <ide> if (process.platform === 'win32') { <ide> "faketime"); <ide> } <ide> <add>var ifaces = os.networkInterfaces(); <add>exports.hasIPv6 = Object.keys(ifaces).some(function(name) { <add> return /lo/.test(name) && ifaces[name].some(function(info) { <add> return info.family === 'IPv6'; <add> }); <add>}); <add> <ide> var util = require('util'); <ide> for (var i in util) exports[i] = util[i]; <ide> //for (var i in exports) global[i] = exports[i]; <ide><path>test/simple/test-dgram-bind-default-address.js <ide> dgram.createSocket('udp4').bind(common.PORT + 0, common.mustCall(function() { <ide> this.close(); <ide> })); <ide> <add>if (!common.hasIPv6) { <add> console.error('Skipping udp6 part of test, no IPv6 support'); <add> return; <add>} <add> <ide> dgram.createSocket('udp6').bind(common.PORT + 1, common.mustCall(function() { <ide> assert.equal(this.address().port, common.PORT + 1); <ide> var address = this.address().address; <ide><path>test/simple/test-net-connect-options-ipv6.js <ide> var assert = require('assert'); <ide> var net = require('net'); <ide> var dns = require('dns'); <ide> <add>if (!common.hasIPv6) { <add> console.error('Skipping test, no IPv6 support'); <add> return; <add>} <add> <ide> var serverGotEnd = false; <ide> var clientGotEnd = false; <ide> <ide><path>test/simple/test-net-pingpong.js <ide> console.log(common.PIPE); <ide> pingPongTest(common.PIPE); <ide> pingPongTest(common.PORT); <ide> pingPongTest(common.PORT + 1, 'localhost'); <del>pingPongTest(common.PORT + 2, '::1'); <add>if (common.hasIPv6) <add> pingPongTest(common.PORT + 2, '::1'); <ide> <ide> process.on('exit', function() { <del> assert.equal(4, tests_run); <add> if (common.hasIPv6) <add> assert.equal(4, 
tests_run); <add> else <add> assert.equal(3, tests_run); <ide> console.log('done'); <ide> }); <ide><path>test/simple/test-net-server-address.js <ide> server_ipv6.listen(common.PORT, localhost_ipv6, function() { <ide> server_ipv6.close(); <ide> }); <ide> <add>if (!common.hasIPv6) { <add> console.error('Skipping ipv6 part of test, no IPv6 support'); <add> return; <add>} <add> <ide> // Test without hostname or ip <ide> var anycast_ipv6 = '::'; <ide> var server1 = net.createServer();
5
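The `common.hasIPv6` flag above inspects network interfaces once and lets each test skip its IPv6 half. A Python sketch of the same best-effort probe (one reasonable approach, not Node's implementation): rather than trusting a build-time flag alone, actually try to bind `::1`.

```python
import socket

def has_ipv6_loopback():
    """True only if an IPv6 socket can really bind the loopback address,
    not merely if the interpreter was built with IPv6 support."""
    if not socket.has_ipv6:
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.bind(("::1", 0))  # port 0: let the OS pick a free port
        return True
    except OSError:
        return False
```

Tests would then guard their IPv6 assertions on this flag, mirroring the `if (!common.hasIPv6) ... return;` pattern in the patch.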
Text
Text
fix wrong table
2a1afe6b372b9c8c59b875ebaa0e4f29e407afc2
<ide><path>guides/source/active_record_migrations.md <ide> add_foreign_key :articles, :authors <ide> ``` <ide> <ide> This adds a new foreign key to the `author_id` column of the `articles` <del>table. The key references the `id` column of the `articles` table. If the <add>table. The key references the `id` column of the `authors` table. If the <ide> column names can not be derived from the table names, you can use the <ide> `:column` and `:primary_key` options. <ide>
1
PHP
PHP
throw decryptexception on error, for consistency
8d0e7b1281fc6cfc005484547ca95a15fa1eb4e4
<ide><path>src/Illuminate/Encryption/Encrypter.php <ide> public function decrypt($payload) <ide> */ <ide> protected function mcryptDecrypt($value, $iv) <ide> { <del> return mcrypt_decrypt($this->cipher, $this->key, $value, $this->mode, $iv); <add> try { <add> return mcrypt_decrypt($this->cipher, $this->key, $value, $this->mode, $iv); <add> } <add> catch (\Exception $e) { <add> throw new DecryptException($e->getMessage()); <add> } <ide> } <ide> <ide> /**
1
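The commit wraps low-level `mcrypt` failures in the encrypter's own `DecryptException`, so callers handle one exception type regardless of backend. A Python sketch of the pattern (names and the stand-in cipher are illustrative):

```python
class DecryptError(Exception):
    """Domain-level decryption failure, whatever the backend raised."""

def _raw_decrypt(payload, key):
    # Stand-in for the real cipher primitive.
    if key is None:
        raise ValueError("missing key")
    return payload

def decrypt(payload, key):
    try:
        return _raw_decrypt(payload, key)
    except Exception as exc:
        # Re-raise as the domain exception, preserving the cause chain.
        raise DecryptError(str(exc)) from exc
```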
Ruby
Ruby
allow revert of whole migration []
65e154f33b54acf40b51082fc5b681ba605015d9
<ide><path>activerecord/lib/active_record/migration.rb <ide> def initialize(name = self.class.name, version = nil) <ide> self.verbose = true <ide> self.delegate = new <ide> <del> # Reverses the migration commands for the given block. <add> # Reverses the migration commands for the given block and <add> # the given migrations. <ide> # <ide> # The following migration will remove the table 'horses' <ide> # and create the table 'apples' on the way up, and the reverse <ide> def initialize(name = self.class.name, version = nil) <ide> # end <ide> # end <ide> # <del> # This command can be nested. <add> # Or equivalently, if +TenderloveMigration+ is defined as in the <add> # documentation for Migration: <add> # <add> # require_relative '2012121212_tenderlove_migration' <add> # <add> # class FixupTLMigration < ActiveRecord::Migration <add> # def change <add> # revert TenderloveMigration <add> # <add> # create_table(:apples) do |t| <add> # t.string :variety <add> # end <add> # end <add> # end <ide> # <del> def revert <add> # This command can be nested. <add> def revert(*migration_classes) <add> run(*migration_classes.reverse, revert: true) unless migration_classes.empty? <add> if block_given? <ide> if @connection.respond_to? :revert <ide> @connection.revert { yield } <ide> else <ide> def revert <ide> send(cmd, *args, &block) <ide> end <ide> end <add> end <ide> end <ide> <ide> def reverting? <ide> @connection.respond_to?(:reverting) && @connection.reverting <ide> end <ide> <add> # Runs the given migration classes. <add> # Last argument can specify options: <add> # - :direction (default is :up) <add> # - :revert (default is false) <add> def run(*migration_classes) <add> opts = migration_classes.extract_options! <add> dir = opts[:direction] || :up <add> dir = (dir == :down ? :up : :down) if opts[:revert] <add> if reverting? 
<add> # If in revert and going :up, say, we want to execute :down without reverting, so <add> revert { run(*migration_classes, direction: dir, revert: true) } <add> else <add> migration_classes.each do |migration_class| <add> migration_class.new.exec_migration(@connection, dir) <add> end <add> end <add> end <add> <ide> def up <ide> self.class.delegate = self <ide> return unless self.class.respond_to?(:up) <ide><path>activerecord/test/cases/invertible_migration_test.rb <ide> def self.down <ide> end <ide> end <ide> <add> class RevertWholeMigration < SilentMigration <add> def initialize(name = self.class.name, version = nil, migration) <add> @migration = migration <add> super(name, version) <add> end <add> <add> def change <add> revert @migration <add> end <add> end <add> <add> class NestedRevertWholeMigration < RevertWholeMigration <add> def change <add> revert { super } <add> end <add> end <add> <ide> def teardown <ide> if ActiveRecord::Base.connection.table_exists?("horses") <ide> ActiveRecord::Base.connection.drop_table("horses") <ide> def test_migrate_revert <ide> assert !migration.connection.table_exists?("horses") <ide> end <ide> <add> def test_migrate_revert_whole_migration <add> migration = InvertibleMigration.new <add> [LegacyMigration, InvertibleMigration].each do |klass| <add> revert = RevertWholeMigration.new(klass) <add> migration.migrate :up <add> revert.migrate :up <add> assert !migration.connection.table_exists?("horses") <add> revert.migrate :down <add> assert migration.connection.table_exists?("horses") <add> migration.migrate :down <add> assert !migration.connection.table_exists?("horses") <add> end <add> end <add> <add> def test_migrate_nested_revert_whole_migration <add> revert = NestedRevertWholeMigration.new(InvertibleRevertMigration) <add> revert.migrate :down <add> assert revert.connection.table_exists?("horses") <add> revert.migrate :up <add> assert !revert.connection.table_exists?("horses") <add> end <add> <add> def test_revert_order <add> 
block = Proc.new{|t| t.string :name } <add> recorder = ActiveRecord::Migration::CommandRecorder.new(ActiveRecord::Base.connection) <add> recorder.instance_eval do <add> create_table("apples", &block) <add> revert do <add> create_table("bananas", &block) <add> revert do <add> create_table("clementines") <add> create_table("dates") <add> end <add> create_table("elderberries") <add> end <add> revert do <add> create_table("figs") <add> create_table("grapes") <add> end <add> end <add> assert_equal [[:create_table, ["apples"], block], [:drop_table, ["elderberries"]], <add> [:create_table, ["clementines"], nil], [:create_table, ["dates"], nil], <add> [:drop_table, ["bananas"]], [:drop_table, ["grapes"]], <add> [:drop_table, ["figs"]]], recorder.commands <add> end <add> <ide> def test_legacy_up <ide> LegacyMigration.migrate :up <ide> assert ActiveRecord::Base.connection.table_exists?("horses"), "horses should exist"
2
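The core idea behind `revert` (and the `test_revert_order` expectations above) can be sketched compactly in Python: reverting a recorded migration means inverting each command and replaying them in reverse order. This is a simplification of `CommandRecorder`, covering only a couple of command pairs.

```python
INVERSES = {
    "create_table": "drop_table",
    "drop_table": "create_table",
    "add_column": "remove_column",
    "remove_column": "add_column",
}

def revert(commands):
    """Invert each (op, args) command and reverse the sequence."""
    return [(INVERSES[op], args) for op, args in reversed(commands)]
```

Note that reverting twice is the identity, which is what makes nested `revert` blocks (as in `NestedRevertWholeMigration`) well defined.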
PHP
PHP
add @method annotation to connectioninterface
32833086aa17c1036e8638df5c98d0d9c9699b1a
<ide><path>src/Datasource/ConnectionInterface.php <ide> * @method \Cake\Database\Query newQuery() <ide> * @method \Cake\Database\StatementInterface prepare($sql) <ide> * @method \Cake\Database\StatementInterface execute($query, $params = [], array $types = []) <add> * @method \Cake\Database\StatementInterface query(string $sql) <ide> * @method $this enableQueryLogging($value) <ide> * @method $this disableQueryLogging() <ide> * @method $this disableSavePoints()
1
Ruby
Ruby
fix keyword arguments warnings
352560308bc13b881efd6d062134a9a67102b204
<ide><path>actionmailer/lib/action_mailer/base.rb <ide> def set_content_type(m, user_content_type, class_default) # :doc: <ide> # If the subject has interpolations, you can pass them through the +interpolations+ parameter. <ide> def default_i18n_subject(interpolations = {}) # :doc: <ide> mailer_scope = self.class.mailer_name.tr("/", ".") <del> I18n.t(:subject, interpolations.merge(scope: [mailer_scope, action_name], default: action_name.humanize)) <add> I18n.t(:subject, **interpolations.merge(scope: [mailer_scope, action_name], default: action_name.humanize)) <ide> end <ide> <ide> # Emails do not support relative path links. <ide><path>actionpack/test/dispatch/ssl_test.rb <ide> class SSLTest < ActionDispatch::IntegrationTest <ide> <ide> def build_app(headers: {}, ssl_options: {}) <ide> headers = HEADERS.merge(headers) <del> ActionDispatch::SSL.new lambda { |env| [200, headers, []] }, ssl_options.reverse_merge(hsts: { subdomains: true }) <add> ActionDispatch::SSL.new lambda { |env| [200, headers, []] }, **ssl_options.reverse_merge(hsts: { subdomains: true }) <ide> end <ide> end <ide> <ide><path>activerecord/lib/active_record/schema_migration.rb <ide> def create_table <ide> version_options = connection.internal_string_options_for_primary_key <ide> <ide> connection.create_table(table_name, id: false) do |t| <del> t.string :version, version_options <add> t.string :version, **version_options <ide> end <ide> end <ide> end <ide><path>railties/test/application/middleware/cache_test.rb <ide> def keeps_if_modified_since <ide> end <ide> private <ide> def render_conditionally(headers) <del> if stale?(headers.merge(public: !params[:private])) <add> if stale?(**headers.merge(public: !params[:private])) <ide> render plain: SecureRandom.hex(16) <ide> end <ide> end
4
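Each hunk above adds a `**` splat so a hash is passed as keyword arguments rather than as one positional argument, silencing Ruby 2.7's keyword-argument separation warnings. Python enforces the same distinction outright, which makes for a direct sketch:

```python
def stale(*, etag=None, last_modified=None, public=False):
    """Keyword-only parameters, akin to a Ruby method taking kwargs."""
    return {"etag": etag, "last_modified": last_modified, "public": public}

opts = {"etag": "abc", "public": True}
# stale(opts) would raise TypeError: the dict is a single positional arg.
result = stale(**opts)  # explicit splat -- the Ruby fix's equivalent
```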
Text
Text
remove some sfc references
ac74f15f0d155e2487ca8bfb80650c2335f1812a
<ide><path>README.md <ide> Good luck! <ide> <ide> Homebrew is a non-profit project run entirely by unpaid volunteers. We need your funds to pay for software, hardware and hosting around continuous integration and future improvements to the project. Every donation will be spent on making Homebrew better for our users. <ide> <del>Please consider a regular donation through [GitHub Sponsors](https://github.com/sponsors/Homebrew) or [Patreon](https://www.patreon.com/homebrew). <del> <del>Alternatively, if you'd rather make a one-off payment: <del> <del>- [Donate with PayPal](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=V6ZE57MJRYC8L) <del>- Donate by USA $ check from a USA bank: <del> - Make check payable to "Software Freedom Conservancy, Inc." and place "Directed donation: Homebrew" in the memo field. Checks should then be mailed to: <del> - Software Freedom Conservancy, Inc. <del> 137 Montague ST STE 380 <del> BROOKLYN, NY 11201 USA <del>- Donate by wire transfer: contact accounting@sfconservancy.org for wire transfer details. <del> <del>Homebrew is a member of the [Software Freedom Conservancy](https://sfconservancy.org) which provides us with an ability to receive tax-deductible, Homebrew earmarked donations (and [many other services](https://sfconservancy.org/members/services/)). Software Freedom Conservancy, Inc. is a 501(c)(3) organization incorporated in New York, and donations made to it are fully tax-deductible to the extent permitted by law. <add>Please consider a regular donation through [GitHub Sponsors](https://github.com/sponsors/Homebrew), [Open Collective](https://opencollective.com/homebrew) or [Patreon](https://www.patreon.com/homebrew). Homebrew is fiscally hosted by the [Open Source Collective](https://opencollective.com/opensource). 
<ide> <ide> ## Security <ide> <ide> Flaky test detection and tracking is provided by [BuildPulse](https://buildpulse <ide> <ide> [![BuildPulse](https://user-images.githubusercontent.com/2988/130445500-96f44c87-e7dd-4da0-9877-7e5b1618e144.png)](https://buildpulse.io) <ide> <del>Homebrew is a member of the [Software Freedom Conservancy](https://sfconservancy.org). <del> <del>[![Software Freedom Conservancy](https://sfconservancy.org/img/conservancy_64x64.png)](https://sfconservancy.org) <del> <ide> Homebrew is generously supported by [Substack](https://github.com/substackinc), [Randy Reddig](https://github.com/ydnar), [embark-studios](https://github.com/embark-studios), [CodeCrafters](https://github.com/codecrafters-io) and many other users and organisations via [GitHub Sponsors](https://github.com/sponsors/Homebrew). <ide> <ide> [![Substack](https://github.com/substackinc.png?size=64)](https://github.com/substackinc) <ide><path>docs/New-Maintainer-Checklist.md <ide> If they are interested in doing system administration work: <ide> <ide> If they are elected to the Homebrew's [Project Leadership Committee](https://docs.brew.sh/Homebrew-Governance#4-project-leadership-committee): <ide> <del>- Email their name, email and employer to the [Software Freedom Conservancy](https://sfconservancy.org) at homebrew@sfconservancy.org <ide> - Make them [owners on the Homebrew GitHub organisation](https://github.com/orgs/Homebrew/people) <ide> - Invite them to the [**@Homebrew/plc** team](https://github.com/orgs/Homebrew/teams/plc/members) <ide> - Invite them to [Google Analytics](https://analytics.google.com/analytics/web/#management/Settings/a76679469w115400090p120682403/%3Fm.page%3DAccountUsers/).
2
Java
Java
extract idgenerator into a top-level class
ce3e55743f23100f0e4044320cdb1f168ca76ea3
<ide><path>spring-core/src/main/java/org/springframework/util/AlternativeJdkIdGenerator.java <add>/* <add> * Copyright 2002-2013 the original author or authors. <add> * <add> * Licensed under the Apache License, Version 2.0 (the "License"); <add> * you may not use this file except in compliance with the License. <add> * You may obtain a copy of the License at <add> * <add> * http://www.apache.org/licenses/LICENSE-2.0 <add> * <add> * Unless required by applicable law or agreed to in writing, software <add> * distributed under the License is distributed on an "AS IS" BASIS, <add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <add> * See the License for the specific language governing permissions and <add> * limitations under the License. <add> */ <add> <add>package org.springframework.util; <add> <add>import java.math.BigInteger; <add>import java.security.SecureRandom; <add>import java.util.Random; <add>import java.util.UUID; <add> <add>/** <add> * A variation of {@link UUID#randomUUID()} that uses {@link SecureRandom} only for <add> * the initial seed and {@link Random} thereafter. This provides better performance <add> * in exchange for less securely random id's. 
<add> * <add> * @author Rossen Stoyanchev <add> * @author Rob Winch <add> * @since 4.0 <add> */ <add>public class AlternativeJdkIdGenerator implements IdGenerator { <add> <add> private final Random random; <add> <add> <add> public AlternativeJdkIdGenerator() { <add> byte[] seed = new SecureRandom().generateSeed(8); <add> this.random = new Random(new BigInteger(seed).longValue()); <add> } <add> <add> <add> public UUID generateId() { <add> <add> byte[] randomBytes = new byte[16]; <add> this.random.nextBytes(randomBytes); <add> <add> long mostSigBits = 0; <add> for (int i = 0; i < 8; i++) { <add> mostSigBits = (mostSigBits << 8) | (randomBytes[i] & 0xff); <add> } <add> <add> long leastSigBits = 0; <add> for (int i = 8; i < 16; i++) { <add> leastSigBits = (leastSigBits << 8) | (randomBytes[i] & 0xff); <add> } <add> <add> return new UUID(mostSigBits, leastSigBits); <add> } <add> <add>} <ide><path>spring-core/src/main/java/org/springframework/util/IdGenerator.java <add>/* <add> * Copyright 2002-2013 the original author or authors. <add> * <add> * Licensed under the Apache License, Version 2.0 (the "License"); <add> * you may not use this file except in compliance with the License. <add> * You may obtain a copy of the License at <add> * <add> * http://www.apache.org/licenses/LICENSE-2.0 <add> * <add> * Unless required by applicable law or agreed to in writing, software <add> * distributed under the License is distributed on an "AS IS" BASIS, <add> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <add> * See the License for the specific language governing permissions and <add> * limitations under the License. <add> */ <add> <add>package org.springframework.util; <add> <add>import java.util.UUID; <add> <add>/** <add> * Contract for generating {@link UUID} identifiers. <add> * <add> * @author Rossen Stoyanchev <add> * @since 4.0 <add> */ <add>public interface IdGenerator { <add> <add> /** <add> * Generate a new identifier. 
<add> * @return the generated identifier <add> */ <add> UUID generateId(); <add> <add>} <ide><path>spring-messaging/src/main/java/org/springframework/messaging/MessageHeaders.java <ide> import java.io.ObjectInputStream; <ide> import java.io.ObjectOutputStream; <ide> import java.io.Serializable; <del>import java.math.BigInteger; <del>import java.security.SecureRandom; <ide> import java.util.ArrayList; <ide> import java.util.Arrays; <ide> import java.util.Collection; <ide> import java.util.LinkedHashMap; <ide> import java.util.List; <ide> import java.util.Map; <del>import java.util.Random; <ide> import java.util.Set; <ide> import java.util.UUID; <ide> <ide> import org.apache.commons.logging.Log; <ide> import org.apache.commons.logging.LogFactory; <add>import org.springframework.util.AlternativeJdkIdGenerator; <add>import org.springframework.util.IdGenerator; <ide> <ide> /** <ide> * The headers for a {@link Message} <ide> private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundE <ide> in.defaultReadObject(); <ide> } <ide> <del> public static interface IdGenerator { <del> UUID generateId(); <del> } <del> <del> /** <del> * A variation of {@link UUID#randomUUID()} that uses {@link SecureRandom} only for <del> * the initial seed and {@link Random} thereafter, which provides better performance <del> * in exchange for less securely random id's. 
<del> */ <del> public static class AlternativeJdkIdGenerator implements IdGenerator { <del> <del> private final Random random; <del> <del> public AlternativeJdkIdGenerator() { <del> byte[] seed = new SecureRandom().generateSeed(8); <del> this.random = new Random(new BigInteger(seed).longValue()); <del> } <del> <del> public UUID generateId() { <del> <del> byte[] randomBytes = new byte[16]; <del> this.random.nextBytes(randomBytes); <del> <del> long mostSigBits = 0; <del> for (int i = 0; i < 8; i++) { <del> mostSigBits = (mostSigBits << 8) | (randomBytes[i] & 0xff); <del> } <del> long leastSigBits = 0; <del> for (int i = 8; i < 16; i++) { <del> leastSigBits = (leastSigBits << 8) | (randomBytes[i] & 0xff); <del> } <del> <del> return new UUID(mostSigBits, leastSigBits); <del> } <del> } <del> <ide> }
3
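The trade-off `AlternativeJdkIdGenerator` makes — a secure source for the initial seed only, a fast PRNG for every id after that — translates directly to Python. This sketch is an analogue, not the Java code; like the original, it fills all 128 bits at random without setting UUID version bits, and it is unsuitable where unguessable ids are required.

```python
import os
import random
import uuid

# Seed a fast PRNG once from the OS entropy pool...
_rng = random.Random(int.from_bytes(os.urandom(8), "big"))

def generate_id():
    # ...then derive each id from the cheap generator thereafter.
    return uuid.UUID(bytes=_rng.getrandbits(128).to_bytes(16, "big"))
```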
Text
Text
improve documentation [ci skip]
db8b06099f1f7a0e2f109c3574bbd88c99d43bce
<ide><path>guides/source/active_record_postgresql.md <ide> article.save! <ide> * [type definition](http://www.postgresql.org/docs/9.3/static/datatype-uuid.html) <ide> * [generator functions](http://www.postgresql.org/docs/9.3/static/uuid-ossp.html) <ide> <add>NOTE: you need to enable the `uuid-ossp` extension to use uuid. <ide> <ide> ```ruby <ide> # db/migrate/20131220144913_create_revisions.rb <ide> revision = Revision.first <ide> revision.identifier # => "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11" <ide> ``` <ide> <add>You can use `uuid` type to define references in migrations <add> <add>```ruby <add># db/migrate/20150418012400_create_blog.rb <add>def change <add> create_table :posts, id: :uuid <add>end <add> <add>create_table :comments, id: :uuid do |t| <add> # t.belongs_to :post, type: :uuid <add> t.references :post, type: :uuid <add>end <add> <add># app/models/post.rb <add>class Post < ActiveRecord::Base <add> has_many :comments <add>end <add> <add># app/models/comment.rb <add>class Comment < ActiveRecord::Base <add> belongs_to :post <add>end <add>``` <add> <ide> ### Bit String Types <ide> <ide> * [type definition](http://www.postgresql.org/docs/9.3/static/datatype-bit.html)
1
Javascript
Javascript
set null reference properties to `undefined`
d19504a179355d7801d59a8db0285a1322e04601
<ide><path>src/ng/parse.js <ide> ASTCompiler.prototype = { <ide> } <ide> } <ide> recursionFn(intoId); <add> }, function() { <add> self.assign(intoId, 'undefined'); <ide> }); <ide> }, !!create); <ide> break; <ide><path>test/ng/parseSpec.js <ide> describe('parser', function() { <ide> expect(scope.$eval("0&&2")).toEqual(0 && 2); <ide> expect(scope.$eval("0||2")).toEqual(0 || 2); <ide> expect(scope.$eval("0||1&&2")).toEqual(0 || 1 && 2); <add> expect(scope.$eval("true&&a")).toEqual(true && undefined); <add> expect(scope.$eval("true&&a.b")).toEqual(true && undefined); <add> expect(scope.$eval("false||a")).toEqual(false || undefined); <add> expect(scope.$eval("false||a.b")).toEqual(false || undefined); <ide> }); <ide> <ide> it('should parse ternary', function() {
2
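The fix assigns `undefined` when a property chain can't be resolved, so expressions like `true && a.b` evaluate cleanly. A Python sketch of that null-safe chained lookup (illustrative, not Angular's compiled-expression code), with `None` playing the role of `undefined`:

```python
def safe_get(obj, path):
    """Evaluate a dotted path like 'a.b', yielding None instead of
    raising when any intermediate value is missing."""
    for key in path.split("."):
        if obj is None:
            return None  # short-circuit on a missing intermediate
        obj = obj.get(key) if isinstance(obj, dict) else getattr(obj, key, None)
    return obj
```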
Go
Go
add pubsub package to handle robust publisher
2f46b7601a3f5e11359b79624d73075b69778fbb
<ide><path>api/stats/stats.go <add>// This package is used for API stability in the types and response to the <add>// consumers of the API stats endpoint. <ide> package stats <ide> <del>import ( <del> "time" <del> <del> "github.com/docker/libcontainer" <del> "github.com/docker/libcontainer/cgroups" <del>) <add>import "time" <ide> <ide> type ThrottlingData struct { <ide> // Number of periods with throttling active <ide> type Stats struct { <ide> MemoryStats MemoryStats `json:"memory_stats,omitempty"` <ide> BlkioStats BlkioStats `json:"blkio_stats,omitempty"` <ide> } <del> <del>// ToStats converts the libcontainer.ContainerStats to the api specific <del>// structs. This is done to preserve API compatibility and versioning. <del>func ToStats(ls *libcontainer.ContainerStats) *Stats { <del> s := &Stats{} <del> if ls.NetworkStats != nil { <del> s.Network = Network{ <del> RxBytes: ls.NetworkStats.RxBytes, <del> RxPackets: ls.NetworkStats.RxPackets, <del> RxErrors: ls.NetworkStats.RxErrors, <del> RxDropped: ls.NetworkStats.RxDropped, <del> TxBytes: ls.NetworkStats.TxBytes, <del> TxPackets: ls.NetworkStats.TxPackets, <del> TxErrors: ls.NetworkStats.TxErrors, <del> TxDropped: ls.NetworkStats.TxDropped, <del> } <del> } <del> cs := ls.CgroupStats <del> if cs != nil { <del> s.BlkioStats = BlkioStats{ <del> IoServiceBytesRecursive: copyBlkioEntry(cs.BlkioStats.IoServiceBytesRecursive), <del> IoServicedRecursive: copyBlkioEntry(cs.BlkioStats.IoServicedRecursive), <del> IoQueuedRecursive: copyBlkioEntry(cs.BlkioStats.IoQueuedRecursive), <del> IoServiceTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoServiceTimeRecursive), <del> IoWaitTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoWaitTimeRecursive), <del> IoMergedRecursive: copyBlkioEntry(cs.BlkioStats.IoMergedRecursive), <del> IoTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoTimeRecursive), <del> SectorsRecursive: copyBlkioEntry(cs.BlkioStats.SectorsRecursive), <del> } <del> cpu := cs.CpuStats <del> s.CpuStats = CpuStats{ <del> 
CpuUsage: CpuUsage{ <del> TotalUsage: cpu.CpuUsage.TotalUsage, <del> PercpuUsage: cpu.CpuUsage.PercpuUsage, <del> UsageInKernelmode: cpu.CpuUsage.UsageInKernelmode, <del> UsageInUsermode: cpu.CpuUsage.UsageInUsermode, <del> }, <del> ThrottlingData: ThrottlingData{ <del> Periods: cpu.ThrottlingData.Periods, <del> ThrottledPeriods: cpu.ThrottlingData.ThrottledPeriods, <del> ThrottledTime: cpu.ThrottlingData.ThrottledTime, <del> }, <del> } <del> mem := cs.MemoryStats <del> s.MemoryStats = MemoryStats{ <del> Usage: mem.Usage, <del> MaxUsage: mem.MaxUsage, <del> Stats: mem.Stats, <del> Failcnt: mem.Failcnt, <del> } <del> } <del> return s <del>} <del> <del>func copyBlkioEntry(entries []cgroups.BlkioStatEntry) []BlkioStatEntry { <del> out := make([]BlkioStatEntry, len(entries)) <del> for i, re := range entries { <del> out[i] = BlkioStatEntry{ <del> Major: re.Major, <del> Minor: re.Minor, <del> Op: re.Op, <del> Value: re.Value, <del> } <del> } <del> return out <del>} <ide><path>daemon/daemon.go <ide> func (daemon *Daemon) Stats(c *Container) (*execdriver.ResourceStats, error) { <ide> return daemon.execDriver.Stats(c.ID) <ide> } <ide> <del>func (daemon *Daemon) SubscribeToContainerStats(name string) (chan *execdriver.ResourceStats, error) { <add>func (daemon *Daemon) SubscribeToContainerStats(name string) (chan interface{}, error) { <ide> c := daemon.Get(name) <ide> if c == nil { <ide> return nil, fmt.Errorf("no such container") <ide> func (daemon *Daemon) SubscribeToContainerStats(name string) (chan *execdriver.R <ide> return ch, nil <ide> } <ide> <del>func (daemon *Daemon) UnsubscribeToContainerStats(name string, ch chan *execdriver.ResourceStats) error { <add>func (daemon *Daemon) UnsubscribeToContainerStats(name string, ch chan interface{}) error { <ide> c := daemon.Get(name) <ide> if c == nil { <ide> return fmt.Errorf("no such container") <ide><path>daemon/stats.go <ide> import ( <ide> "encoding/json" <ide> <ide> "github.com/docker/docker/api/stats" <add> 
"github.com/docker/docker/daemon/execdriver" <ide> "github.com/docker/docker/engine" <add> "github.com/docker/libcontainer" <add> "github.com/docker/libcontainer/cgroups" <ide> ) <ide> <ide> func (daemon *Daemon) ContainerStats(job *engine.Job) engine.Status { <del> s, err := daemon.SubscribeToContainerStats(job.Args[0]) <add> updates, err := daemon.SubscribeToContainerStats(job.Args[0]) <ide> if err != nil { <ide> return job.Error(err) <ide> } <ide> enc := json.NewEncoder(job.Stdout) <del> for update := range s { <del> ss := stats.ToStats(update.ContainerStats) <add> for v := range updates { <add> update := v.(*execdriver.ResourceStats) <add> ss := convertToAPITypes(update.ContainerStats) <ide> ss.MemoryStats.Limit = uint64(update.MemoryLimit) <ide> ss.Read = update.Read <ide> ss.CpuStats.SystemUsage = update.SystemUsage <ide> if err := enc.Encode(ss); err != nil { <ide> // TODO: handle the specific broken pipe <del> daemon.UnsubscribeToContainerStats(job.Args[0], s) <add> daemon.UnsubscribeToContainerStats(job.Args[0], updates) <ide> return job.Error(err) <ide> } <ide> } <ide> return engine.StatusOK <ide> } <add> <add>// convertToAPITypes converts the libcontainer.ContainerStats to the api specific <add>// structs. This is done to preserve API compatibility and versioning. 
<add>func convertToAPITypes(ls *libcontainer.ContainerStats) *stats.Stats { <add> s := &stats.Stats{} <add> if ls.NetworkStats != nil { <add> s.Network = stats.Network{ <add> RxBytes: ls.NetworkStats.RxBytes, <add> RxPackets: ls.NetworkStats.RxPackets, <add> RxErrors: ls.NetworkStats.RxErrors, <add> RxDropped: ls.NetworkStats.RxDropped, <add> TxBytes: ls.NetworkStats.TxBytes, <add> TxPackets: ls.NetworkStats.TxPackets, <add> TxErrors: ls.NetworkStats.TxErrors, <add> TxDropped: ls.NetworkStats.TxDropped, <add> } <add> } <add> cs := ls.CgroupStats <add> if cs != nil { <add> s.BlkioStats = stats.BlkioStats{ <add> IoServiceBytesRecursive: copyBlkioEntry(cs.BlkioStats.IoServiceBytesRecursive), <add> IoServicedRecursive: copyBlkioEntry(cs.BlkioStats.IoServicedRecursive), <add> IoQueuedRecursive: copyBlkioEntry(cs.BlkioStats.IoQueuedRecursive), <add> IoServiceTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoServiceTimeRecursive), <add> IoWaitTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoWaitTimeRecursive), <add> IoMergedRecursive: copyBlkioEntry(cs.BlkioStats.IoMergedRecursive), <add> IoTimeRecursive: copyBlkioEntry(cs.BlkioStats.IoTimeRecursive), <add> SectorsRecursive: copyBlkioEntry(cs.BlkioStats.SectorsRecursive), <add> } <add> cpu := cs.CpuStats <add> s.CpuStats = stats.CpuStats{ <add> CpuUsage: stats.CpuUsage{ <add> TotalUsage: cpu.CpuUsage.TotalUsage, <add> PercpuUsage: cpu.CpuUsage.PercpuUsage, <add> UsageInKernelmode: cpu.CpuUsage.UsageInKernelmode, <add> UsageInUsermode: cpu.CpuUsage.UsageInUsermode, <add> }, <add> ThrottlingData: stats.ThrottlingData{ <add> Periods: cpu.ThrottlingData.Periods, <add> ThrottledPeriods: cpu.ThrottlingData.ThrottledPeriods, <add> ThrottledTime: cpu.ThrottlingData.ThrottledTime, <add> }, <add> } <add> mem := cs.MemoryStats <add> s.MemoryStats = stats.MemoryStats{ <add> Usage: mem.Usage, <add> MaxUsage: mem.MaxUsage, <add> Stats: mem.Stats, <add> Failcnt: mem.Failcnt, <add> } <add> } <add> return s <add>} <add> <add>func 
copyBlkioEntry(entries []cgroups.BlkioStatEntry) []stats.BlkioStatEntry { <add> out := make([]stats.BlkioStatEntry, len(entries)) <add> for i, re := range entries { <add> out[i] = stats.BlkioStatEntry{ <add> Major: re.Major, <add> Minor: re.Minor, <add> Op: re.Op, <add> Value: re.Value, <add> } <add> } <add> return out <add>} <ide><path>daemon/stats_collector.go <ide> import ( <ide> <ide> log "github.com/Sirupsen/logrus" <ide> "github.com/docker/docker/daemon/execdriver" <add> "github.com/docker/docker/pkg/pubsub" <ide> "github.com/docker/libcontainer/system" <ide> ) <ide> <ide> import ( <ide> func newStatsCollector(interval time.Duration) *statsCollector { <ide> s := &statsCollector{ <ide> interval: interval, <del> containers: make(map[string]*statsData), <add> publishers: make(map[*Container]*pubsub.Publisher), <ide> clockTicks: uint64(system.GetClockTicks()), <ide> } <del> s.start() <add> go s.run() <ide> return s <ide> } <ide> <del>type statsData struct { <del> c *Container <del> lastStats *execdriver.ResourceStats <del> subs []chan *execdriver.ResourceStats <del>} <del> <ide> // statsCollector manages and provides container resource stats <ide> type statsCollector struct { <ide> m sync.Mutex <ide> interval time.Duration <ide> clockTicks uint64 <del> containers map[string]*statsData <add> publishers map[*Container]*pubsub.Publisher <ide> } <ide> <ide> // collect registers the container with the collector and adds it to <ide> // the event loop for collection on the specified interval returning <ide> // a channel for the subscriber to receive on. 
<del>func (s *statsCollector) collect(c *Container) chan *execdriver.ResourceStats { <add>func (s *statsCollector) collect(c *Container) chan interface{} { <ide> s.m.Lock() <ide> defer s.m.Unlock() <del> ch := make(chan *execdriver.ResourceStats, 1024) <del> if _, exists := s.containers[c.ID]; exists { <del> s.containers[c.ID].subs = append(s.containers[c.ID].subs, ch) <del> return ch <add> publisher, exists := s.publishers[c] <add> if !exists { <add> publisher = pubsub.NewPublisher(100*time.Millisecond, 1024) <add> s.publishers[c] = publisher <ide> } <del> s.containers[c.ID] = &statsData{ <del> c: c, <del> subs: []chan *execdriver.ResourceStats{ <del> ch, <del> }, <del> } <del> return ch <add> return publisher.Subscribe() <ide> } <ide> <ide> // stopCollection closes the channels for all subscribers and removes <ide> // the container from metrics collection. <ide> func (s *statsCollector) stopCollection(c *Container) { <ide> s.m.Lock() <del> defer s.m.Unlock() <del> d := s.containers[c.ID] <del> if d == nil { <del> return <add> if publisher, exists := s.publishers[c]; exists { <add> publisher.Close() <add> delete(s.publishers, c) <ide> } <del> for _, sub := range d.subs { <del> close(sub) <del> } <del> delete(s.containers, c.ID) <add> s.m.Unlock() <ide> } <ide> <del>// unsubscribe removes a specific subscriber from receiving updates for a <del>// container's stats. <del>func (s *statsCollector) unsubscribe(c *Container, ch chan *execdriver.ResourceStats) { <add>// unsubscribe removes a specific subscriber from receiving updates for a container's stats. <add>func (s *statsCollector) unsubscribe(c *Container, ch chan interface{}) { <ide> s.m.Lock() <del> cd := s.containers[c.ID] <del> for i, sub := range cd.subs { <del> if ch == sub { <del> cd.subs = append(cd.subs[:i], cd.subs[i+1:]...) <del> close(ch) <del> } <del> } <del> // if there are no more subscribers then remove the entire container <del> // from collection. 
<del> if len(cd.subs) == 0 { <del> delete(s.containers, c.ID) <add> publisher := s.publishers[c] <add> if publisher != nil { <add> publisher.Evict(ch) <ide> } <ide> s.m.Unlock() <ide> } <ide> <del>func (s *statsCollector) start() { <del> go func() { <del> for _ = range time.Tick(s.interval) { <del> s.m.Lock() <del> for id, d := range s.containers { <del> systemUsage, err := s.getSystemCpuUsage() <del> if err != nil { <del> log.Errorf("collecting system cpu usage for %s: %v", id, err) <del> continue <del> } <del> stats, err := d.c.Stats() <del> if err != nil { <del> if err == execdriver.ErrNotRunning { <del> continue <del> } <del> // if the error is not because the container is currently running then <del> // evict the container from the collector and close the channel for <del> // any subscribers currently waiting on changes. <del> log.Errorf("collecting stats for %s: %v", id, err) <del> for _, sub := range s.containers[id].subs { <del> close(sub) <del> } <del> delete(s.containers, id) <del> continue <del> } <del> stats.SystemUsage = systemUsage <del> for _, sub := range s.containers[id].subs { <del> sub <- stats <add>func (s *statsCollector) run() { <add> for _ = range time.Tick(s.interval) { <add> for container, publisher := range s.publishers { <add> systemUsage, err := s.getSystemCpuUsage() <add> if err != nil { <add> log.Errorf("collecting system cpu usage for %s: %v", container.ID, err) <add> continue <add> } <add> stats, err := container.Stats() <add> if err != nil { <add> if err != execdriver.ErrNotRunning { <add> log.Errorf("collecting stats for %s: %v", container.ID, err) <ide> } <add> continue <ide> } <del> s.m.Unlock() <add> stats.SystemUsage = systemUsage <add> publisher.Publish(stats) <ide> } <del> }() <add> } <ide> } <ide> <ide> const nanoSeconds = 1e9 <ide><path>pkg/pubsub/publisher.go <add>package pubsub <add> <add>import ( <add> "sync" <add> "time" <add>) <add> <add>// NewPublisher creates a new pub/sub publisher to broadcast messages. 
<add>// The duration is used as the send timeout as to not block the publisher publishing <add>// messages to other clients if one client is slow or unresponsive. <add>// The buffer is used when creating new channels for subscribers. <add>func NewPublisher(publishTimeout time.Duration, buffer int) *Publisher { <add> return &Publisher{ <add> buffer: buffer, <add> timeout: publishTimeout, <add> subscribers: make(map[subscriber]struct{}), <add> } <add>} <add> <add>type subscriber chan interface{} <add> <add>type Publisher struct { <add> m sync.RWMutex <add> buffer int <add> timeout time.Duration <add> subscribers map[subscriber]struct{} <add>} <add> <add>// Subscribe adds a new subscriber to the publisher returning the channel. <add>func (p *Publisher) Subscribe() chan interface{} { <add> ch := make(chan interface{}, p.buffer) <add> p.m.Lock() <add> p.subscribers[ch] = struct{}{} <add> p.m.Unlock() <add> return ch <add>} <add> <add>// Evict removes the specified subscriber from receiving any more messages. <add>func (p *Publisher) Evict(sub chan interface{}) { <add> p.m.Lock() <add> delete(p.subscribers, sub) <add> close(sub) <add> p.m.Unlock() <add>} <add> <add>// Publish sends the data in v to all subscribers currently registered with the publisher. <add>func (p *Publisher) Publish(v interface{}) { <add> p.m.RLock() <add> for sub := range p.subscribers { <add> // send under a select as to not block if the receiver is unavailable <add> select { <add> case sub <- v: <add> case <-time.After(p.timeout): <add> } <add> } <add> p.m.RUnlock() <add>} <add> <add>// Close closes the channels to all subscribers registered with the publisher. 
<add>func (p *Publisher) Close() { <add> p.m.Lock() <add> for sub := range p.subscribers { <add> close(sub) <add> } <add> p.m.Unlock() <add>} <ide><path>pkg/pubsub/publisher_test.go <add>package pubsub <add> <add>import ( <add> "testing" <add> "time" <add>) <add> <add>func TestSendToOneSub(t *testing.T) { <add> p := NewPublisher(100*time.Millisecond, 10) <add> c := p.Subscribe() <add> <add> p.Publish("hi") <add> <add> msg := <-c <add> if msg.(string) != "hi" { <add> t.Fatalf("expected message hi but received %v", msg) <add> } <add>} <add> <add>func TestSendToMultipleSubs(t *testing.T) { <add> p := NewPublisher(100*time.Millisecond, 10) <add> subs := []chan interface{}{} <add> subs = append(subs, p.Subscribe(), p.Subscribe(), p.Subscribe()) <add> <add> p.Publish("hi") <add> <add> for _, c := range subs { <add> msg := <-c <add> if msg.(string) != "hi" { <add> t.Fatalf("expected message hi but received %v", msg) <add> } <add> } <add>} <add> <add>func TestEvictOneSub(t *testing.T) { <add> p := NewPublisher(100*time.Millisecond, 10) <add> s1 := p.Subscribe() <add> s2 := p.Subscribe() <add> <add> p.Evict(s1) <add> p.Publish("hi") <add> if _, ok := <-s1; ok { <add> t.Fatal("expected s1 to not receive the published message") <add> } <add> <add> msg := <-s2 <add> if msg.(string) != "hi" { <add> t.Fatalf("expected message hi but received %v", msg) <add> } <add>} <add> <add>func TestClosePublisher(t *testing.T) { <add> p := NewPublisher(100*time.Millisecond, 10) <add> subs := []chan interface{}{} <add> subs = append(subs, p.Subscribe(), p.Subscribe(), p.Subscribe()) <add> p.Close() <add> <add> for _, c := range subs { <add> if _, ok := <-c; ok { <add> t.Fatal("expected all subscriber channels to be closed") <add> } <add> } <add>}
6
Ruby
Ruby
simplify tab test setup
d443089270addfc9588c9efba399763523b88de4
<ide><path>Library/Homebrew/test/test_tab.rb <ide> <ide> class TabTests < Homebrew::TestCase <ide> def setup <del> @used, @unused = Options.new, Options.new <del> @used << Option.new("with-foo") << Option.new("without-bar") <del> @unused << Option.new("with-baz") << Option.new("without-qux") <add> @used = Options.create(%w(--with-foo --without-bar)) <add> @unused = Options.create(%w(--with-baz --without-qux)) <ide> <ide> @tab = Tab.new({ <ide> :used_options => @used.map(&:to_s),
1
Ruby
Ruby
fix an example of using inflector's #parameterize
010cce6ad1d134786eaa3f814319ebbe2e63123b
<ide><path>activesupport/lib/active_support/inflector.rb <ide> def demodulize(class_name_in_module) <ide> # @person = Person.find(1) <ide> # # => #<Person id: 1, name: "Donald E. Knuth"> <ide> # <del> # <%= link_to(@person.name, person_path %> <add> # <%= link_to(@person.name, person_path(@person)) %> <ide> # # => <a href="/person/1-donald-e-knuth">Donald E. Knuth</a> <ide> def parameterize(string, sep = '-') <ide> re_sep = Regexp.escape(sep)
1
PHP
PHP
add test methods for path() in paginator objects
774ac42fdbddd4e208511a6786452d5232d8ae51
<ide><path>tests/Pagination/LengthAwarePaginatorTest.php <ide> public function testLengthAwarePaginatorCanGenerateUrls() <ide> $this->p->setPath('http://website.com'); <ide> $this->p->setPageName('foo'); <ide> <add> $this->assertEquals('http://website.com', <add> $this->p->path()); <add> <ide> $this->assertEquals('http://website.com?foo=2', <ide> $this->p->url($this->p->currentPage())); <ide> <ide><path>tests/Pagination/PaginatorTest.php <ide> public function testItRetrievesThePaginatorOptions() <ide> <ide> $this->assertSame($p->getOptions(), $options); <ide> } <add> <add> public function testPaginatorReturnsPath() <add> { <add> $p = new Paginator($array = ['item1', 'item2', 'item3'], 2, 2, <add> ['path' => 'http://website.com/test']); <add> <add> $this->assertSame($p->path(), 'http://website.com/test'); <add> } <ide> }
2
Mixed
Go
support multi-dir wildcards in .dockerignore
eddb14a44eb3ca6ba0b5e6906e21d767eba1af86
<ide><path>docs/reference/builder.md <ide> eliminates `.` and `..` elements using Go's <ide> [filepath.Clean](http://golang.org/pkg/path/filepath/#Clean). Lines <ide> that are blank after preprocessing are ignored. <ide> <add>Beyond Go's filepath.Match rules, Docker also supports a special <add>wildcard string `**` that matches any number of directories (including <add>zero). For example, `**/*.go` will exclude all files that end with `.go` <add>that are found in all directories, including the root of the build context. <add> <ide> Lines starting with `!` (exclamation mark) can be used to make exceptions <ide> to exclusions. The following is an example `.dockerignore` file that <ide> uses this mechanism: <ide><path>integration-cli/docker_cli_build_test.go <ide> func (s *DockerSuite) TestBuildDockerignore(c *check.C) { <ide> RUN [[ ! -e /bla/README.md ]] <ide> RUN [[ ! -e /bla/dir/foo ]] <ide> RUN [[ ! -e /bla/foo ]] <del> RUN [[ ! -e /bla/.git ]]` <add> RUN [[ ! -e /bla/.git ]] <add> RUN [[ ! -e v.cc ]] <add> RUN [[ ! -e src/v.cc ]] <add> RUN [[ ! -e src/_vendor/v.cc ]]` <ide> ctx, err := fakeContext(dockerfile, map[string]string{ <ide> "Makefile": "all:", <ide> ".git/HEAD": "ref: foo", <ide> "src/x.go": "package main", <ide> "src/_vendor/v.go": "package main", <add> "src/_vendor/v.cc": "package main", <add> "src/v.cc": "package main", <add> "v.cc": "package main", <ide> "dir/foo": "", <ide> ".gitignore": "", <ide> "README.md": "readme", <ide> pkg <ide> .gitignore <ide> src/_vendor <ide> *.md <add>**/*.cc <ide> dir`, <ide> }) <ide> if err != nil { <ide> func (s *DockerSuite) TestBuildDockerignoreExceptions(c *check.C) { <ide> RUN [[ -f /bla/dir/e ]] <ide> RUN [[ -f /bla/dir/e-dir/foo ]] <ide> RUN [[ ! -e /bla/foo ]] <del> RUN [[ ! -e /bla/.git ]]` <add> RUN [[ ! 
-e /bla/.git ]] <add> RUN [[ -e /bla/dir/a.cc ]]` <ide> ctx, err := fakeContext(dockerfile, map[string]string{ <ide> "Makefile": "all:", <ide> ".git/HEAD": "ref: foo", <ide> func (s *DockerSuite) TestBuildDockerignoreExceptions(c *check.C) { <ide> "dir/e-dir/foo": "", <ide> ".gitignore": "", <ide> "README.md": "readme", <add> "dir/a.cc": "hello", <ide> ".dockerignore": ` <ide> .git <ide> pkg <ide> src/_vendor <ide> *.md <ide> dir <ide> !dir/e* <del>!dir/dir/foo`, <add>!dir/dir/foo <add>**/*.cc <add>!**/*.cc`, <ide> }) <ide> if err != nil { <ide> c.Fatal(err) <ide> func (s *DockerSuite) TestBuildDockerignoringWholeDir(c *check.C) { <ide> <ide> func (s *DockerSuite) TestBuildDockerignoringBadExclusion(c *check.C) { <ide> testRequires(c, DaemonIsLinux) <del> name := "testbuilddockerignorewholedir" <add> name := "testbuilddockerignorebadexclusion" <ide> dockerfile := ` <ide> FROM busybox <ide> COPY . / <ide> func (s *DockerSuite) TestBuildDockerignoringBadExclusion(c *check.C) { <ide> } <ide> } <ide> <add>func (s *DockerSuite) TestBuildDockerignoringWildTopDir(c *check.C) { <add> testRequires(c, DaemonIsLinux) <add> <add> dockerfile := ` <add> FROM busybox <add> COPY . / <add> RUN [[ ! -e /.dockerignore ]] <add> RUN [[ ! -e /Dockerfile ]] <add> RUN [[ ! -e /file1 ]] <add> RUN [[ ! 
-e /dir ]]` <add> <add> ctx, err := fakeContext(dockerfile, map[string]string{ <add> "Dockerfile": "FROM scratch", <add> "file1": "", <add> "dir/dfile1": "", <add> }) <add> c.Assert(err, check.IsNil) <add> defer ctx.Close() <add> <add> // All of these should result in ignoring all files <add> for _, variant := range []string{"**", "**/", "**/**", "*"} { <add> ctx.Add(".dockerignore", variant) <add> _, err = buildImageFromContext("noname", ctx, true) <add> c.Assert(err, check.IsNil, check.Commentf("variant: %s", variant)) <add> } <add>} <add> <add>func (s *DockerSuite) TestBuildDockerignoringWildDirs(c *check.C) { <add> testRequires(c, DaemonIsLinux) <add> <add> dockerfile := ` <add> FROM busybox <add> COPY . / <add> RUN [[ -e /.dockerignore ]] <add> RUN [[ -e /Dockerfile ]] <add> <add> RUN [[ ! -e /file0 ]] <add> RUN [[ ! -e /dir1/file0 ]] <add> RUN [[ ! -e /dir2/file0 ]] <add> <add> RUN [[ ! -e /file1 ]] <add> RUN [[ ! -e /dir1/file1 ]] <add> RUN [[ ! -e /dir1/dir2/file1 ]] <add> <add> RUN [[ ! -e /dir1/file2 ]] <add> RUN [[ -e /dir1/dir2/file2 ]] <add> <add> RUN [[ ! -e /dir1/dir2/file4 ]] <add> RUN [[ ! -e /dir1/dir2/file5 ]] <add> RUN [[ ! -e /dir1/dir2/file6 ]] <add> RUN [[ ! -e /dir1/dir3/file7 ]] <add> RUN [[ ! -e /dir1/dir3/file8 ]] <add> RUN [[ -e /dir1/dir3 ]] <add> RUN [[ -e /dir1/dir4 ]] <add> <add> RUN [[ ! -e 'dir1/dir5/fileAA' ]] <add> RUN [[ -e 'dir1/dir5/fileAB' ]] <add> RUN [[ -e 'dir1/dir5/fileB' ]] # "." 
in pattern means nothing <add> <add> RUN echo all done!` <add> <add> ctx, err := fakeContext(dockerfile, map[string]string{ <add> "Dockerfile": "FROM scratch", <add> "file0": "", <add> "dir1/file0": "", <add> "dir1/dir2/file0": "", <add> <add> "file1": "", <add> "dir1/file1": "", <add> "dir1/dir2/file1": "", <add> <add> "dir1/file2": "", <add> "dir1/dir2/file2": "", // remains <add> <add> "dir1/dir2/file4": "", <add> "dir1/dir2/file5": "", <add> "dir1/dir2/file6": "", <add> "dir1/dir3/file7": "", <add> "dir1/dir3/file8": "", <add> "dir1/dir4/file9": "", <add> <add> "dir1/dir5/fileAA": "", <add> "dir1/dir5/fileAB": "", <add> "dir1/dir5/fileB": "", <add> <add> ".dockerignore": ` <add>**/file0 <add>**/*file1 <add>**/dir1/file2 <add>dir1/**/file4 <add>**/dir2/file5 <add>**/dir1/dir2/file6 <add>dir1/dir3/** <add>**/dir4/** <add>**/file?A <add>**/file\?B <add>**/dir5/file. <add>`, <add> }) <add> c.Assert(err, check.IsNil) <add> defer ctx.Close() <add> <add> _, err = buildImageFromContext("noname", ctx, true) <add> c.Assert(err, check.IsNil) <add>} <add> <ide> func (s *DockerSuite) TestBuildLineBreak(c *check.C) { <ide> testRequires(c, DaemonIsLinux) <ide> name := "testbuildlinebreak" <ide><path>pkg/fileutils/fileutils.go <ide> import ( <ide> "io" <ide> "os" <ide> "path/filepath" <add> "regexp" <ide> "strings" <add> "text/scanner" <ide> <ide> "github.com/Sirupsen/logrus" <ide> ) <ide> func OptimizedMatches(file string, patterns []string, patDirs [][]string) (bool, <ide> pattern = pattern[1:] <ide> } <ide> <del> match, err := filepath.Match(pattern, file) <add> match, err := regexpMatch(pattern, file) <ide> if err != nil { <del> return false, err <add> return false, fmt.Errorf("Error in pattern (%s): %s", pattern, err) <ide> } <ide> <ide> if !match && parentPath != "." { <ide> // Check to see if the pattern matches one of our parent dirs. 
<ide> if len(patDirs[i]) <= len(parentPathDirs) { <del> match, _ = filepath.Match(strings.Join(patDirs[i], "/"), <add> match, _ = regexpMatch(strings.Join(patDirs[i], "/"), <ide> strings.Join(parentPathDirs[:len(patDirs[i])], "/")) <ide> } <ide> } <ide> func OptimizedMatches(file string, patterns []string, patDirs [][]string) (bool, <ide> return matched, nil <ide> } <ide> <add>// regexpMatch tries to match the logic of filepath.Match but <add>// does so using regexp logic. We do this so that we can expand the <add>// wildcard set to include other things, like "**" to mean any number <add>// of directories. This means that we should be backwards compatible <add>// with filepath.Match(). We'll end up supporting more stuff, due to <add>// the fact that we're using regexp, but that's ok - it does no harm. <add>func regexpMatch(pattern, path string) (bool, error) { <add> regStr := "^" <add> <add> // Do some syntax checking on the pattern. <add> // filepath's Match() has some really weird rules that are inconsistent <add> // so instead of trying to dup their logic, just call Match() for its <add> // error state and if there is an error in the pattern return it. <add> // If this becomes an issue we can remove this since its really only <add> // needed in the error (syntax) case - which isn't really critical. <add> if _, err := filepath.Match(pattern, path); err != nil { <add> return false, err <add> } <add> <add> // Go through the pattern and convert it to a regexp. <add> // We use a scanner so we can support utf-8 chars. 
<add> var scan scanner.Scanner <add> scan.Init(strings.NewReader(pattern)) <add> <add> sl := string(os.PathSeparator) <add> escSL := sl <add> if sl == `\` { <add> escSL += `\` <add> } <add> <add> for scan.Peek() != scanner.EOF { <add> ch := scan.Next() <add> <add> if ch == '*' { <add> if scan.Peek() == '*' { <add> // is some flavor of "**" <add> scan.Next() <add> <add> if scan.Peek() == scanner.EOF { <add> // is "**EOF" - to align with .gitignore just accept all <add> regStr += ".*" <add> } else { <add> // is "**" <add> regStr += "((.*" + escSL + ")|([^" + escSL + "]*))" <add> } <add> <add> // Treat **/ as ** so eat the "/" <add> if string(scan.Peek()) == sl { <add> scan.Next() <add> } <add> } else { <add> // is "*" so map it to anything but "/" <add> regStr += "[^" + escSL + "]*" <add> } <add> } else if ch == '?' { <add> // "?" is any char except "/" <add> regStr += "[^" + escSL + "]" <add> } else if strings.Index(".$", string(ch)) != -1 { <add> // Escape some regexp special chars that have no meaning <add> // in golang's filepath.Match <add> regStr += `\` + string(ch) <add> } else if ch == '\\' { <add> // escape next char. 
Note that a trailing \ in the pattern <add> // will be left alone (but need to escape it) <add> if sl == `\` { <add> // On windows map "\" to "\\", meaning an escaped backslash, <add> // and then just continue because filepath.Match on <add> // Windows doesn't allow escaping at all <add> regStr += escSL <add> continue <add> } <add> if scan.Peek() != scanner.EOF { <add> regStr += `\` + string(scan.Next()) <add> } else { <add> regStr += `\` <add> } <add> } else { <add> regStr += string(ch) <add> } <add> } <add> <add> regStr += "$" <add> <add> res, err := regexp.MatchString(regStr, path) <add> <add> // Map regexp's error to filepath's so no one knows we're not using filepath <add> if err != nil { <add> err = filepath.ErrBadPattern <add> } <add> <add> return res, err <add>} <add> <ide> // CopyFile copies from src to dst until either EOF is reached <ide> // on src or an error occurs. It verifies src exists and remove <ide> // the dst if it exists. <ide><path>pkg/fileutils/fileutils_test.go <ide> import ( <ide> "os" <ide> "path" <ide> "path/filepath" <add> "runtime" <add> "strings" <ide> "testing" <ide> ) <ide> <ide> func TestMatchesWithMalformedPatterns(t *testing.T) { <ide> } <ide> } <ide> <add>// Test lots of variants of patterns & strings <add>func TestMatches(t *testing.T) { <add> tests := []struct { <add> pattern string <add> text string <add> pass bool <add> }{ <add> {"**", "file", true}, <add> {"**", "file/", true}, <add> {"**/", "file", true}, // weird one <add> {"**/", "file/", true}, <add> {"**", "/", true}, <add> {"**/", "/", true}, <add> {"**", "dir/file", true}, <add> {"**/", "dir/file", false}, <add> {"**", "dir/file/", true}, <add> {"**/", "dir/file/", true}, <add> {"**/**", "dir/file", true}, <add> {"**/**", "dir/file/", true}, <add> {"dir/**", "dir/file", true}, <add> {"dir/**", "dir/file/", true}, <add> {"dir/**", "dir/dir2/file", true}, <add> {"dir/**", "dir/dir2/file/", true}, <add> {"**/dir2/*", "dir/dir2/file", true}, <add> {"**/dir2/*", 
"dir/dir2/file/", false}, <add> {"**/dir2/**", "dir/dir2/dir3/file", true}, <add> {"**/dir2/**", "dir/dir2/dir3/file/", true}, <add> {"**file", "file", true}, <add> {"**file", "dir/file", true}, <add> {"**/file", "dir/file", true}, <add> {"**file", "dir/dir/file", true}, <add> {"**/file", "dir/dir/file", true}, <add> {"**/file*", "dir/dir/file", true}, <add> {"**/file*", "dir/dir/file.txt", true}, <add> {"**/file*txt", "dir/dir/file.txt", true}, <add> {"**/file*.txt", "dir/dir/file.txt", true}, <add> {"**/file*.txt*", "dir/dir/file.txt", true}, <add> {"**/**/*.txt", "dir/dir/file.txt", true}, <add> {"**/**/*.txt2", "dir/dir/file.txt", false}, <add> {"**/*.txt", "file.txt", true}, <add> {"**/**/*.txt", "file.txt", true}, <add> {"a**/*.txt", "a/file.txt", true}, <add> {"a**/*.txt", "a/dir/file.txt", true}, <add> {"a**/*.txt", "a/dir/dir/file.txt", true}, <add> {"a/*.txt", "a/dir/file.txt", false}, <add> {"a/*.txt", "a/file.txt", true}, <add> {"a/*.txt**", "a/file.txt", true}, <add> {"a[b-d]e", "ae", false}, <add> {"a[b-d]e", "ace", true}, <add> {"a[b-d]e", "aae", false}, <add> {"a[^b-d]e", "aze", true}, <add> {".*", ".foo", true}, <add> {".*", "foo", false}, <add> {"abc.def", "abcdef", false}, <add> {"abc.def", "abc.def", true}, <add> {"abc.def", "abcZdef", false}, <add> {"abc?def", "abcZdef", true}, <add> {"abc?def", "abcdef", false}, <add> {"a\\*b", "a*b", true}, <add> {"a\\", "a", false}, <add> {"a\\", "a\\", false}, <add> {"a\\\\", "a\\", true}, <add> {"**/foo/bar", "foo/bar", true}, <add> {"**/foo/bar", "dir/foo/bar", true}, <add> {"**/foo/bar", "dir/dir2/foo/bar", true}, <add> {"abc/**", "abc", false}, <add> {"abc/**", "abc/def", true}, <add> {"abc/**", "abc/def/ghi", true}, <add> } <add> <add> for _, test := range tests { <add> res, _ := regexpMatch(test.pattern, test.text) <add> if res != test.pass { <add> t.Fatalf("Failed: %v - res:%v", test, res) <add> } <add> } <add>} <add> <ide> // An empty string should return true from Empty. 
<ide> func TestEmpty(t *testing.T) { <ide> empty := empty("") <ide> func TestCreateIfNotExistsFile(t *testing.T) { <ide> t.Fatalf("Should have been a file, seems it's not") <ide> } <ide> } <add> <add>// These matchTests are stolen from go's filepath Match tests. <add>type matchTest struct { <add> pattern, s string <add> match bool <add> err error <add>} <add> <add>var matchTests = []matchTest{ <add> {"abc", "abc", true, nil}, <add> {"*", "abc", true, nil}, <add> {"*c", "abc", true, nil}, <add> {"a*", "a", true, nil}, <add> {"a*", "abc", true, nil}, <add> {"a*", "ab/c", false, nil}, <add> {"a*/b", "abc/b", true, nil}, <add> {"a*/b", "a/c/b", false, nil}, <add> {"a*b*c*d*e*/f", "axbxcxdxe/f", true, nil}, <add> {"a*b*c*d*e*/f", "axbxcxdxexxx/f", true, nil}, <add> {"a*b*c*d*e*/f", "axbxcxdxe/xxx/f", false, nil}, <add> {"a*b*c*d*e*/f", "axbxcxdxexxx/fff", false, nil}, <add> {"a*b?c*x", "abxbbxdbxebxczzx", true, nil}, <add> {"a*b?c*x", "abxbbxdbxebxczzy", false, nil}, <add> {"ab[c]", "abc", true, nil}, <add> {"ab[b-d]", "abc", true, nil}, <add> {"ab[e-g]", "abc", false, nil}, <add> {"ab[^c]", "abc", false, nil}, <add> {"ab[^b-d]", "abc", false, nil}, <add> {"ab[^e-g]", "abc", true, nil}, <add> {"a\\*b", "a*b", true, nil}, <add> {"a\\*b", "ab", false, nil}, <add> {"a?b", "a☺b", true, nil}, <add> {"a[^a]b", "a☺b", true, nil}, <add> {"a???b", "a☺b", false, nil}, <add> {"a[^a][^a][^a]b", "a☺b", false, nil}, <add> {"[a-ζ]*", "α", true, nil}, <add> {"*[a-ζ]", "A", false, nil}, <add> {"a?b", "a/b", false, nil}, <add> {"a*b", "a/b", false, nil}, <add> {"[\\]a]", "]", true, nil}, <add> {"[\\-]", "-", true, nil}, <add> {"[x\\-]", "x", true, nil}, <add> {"[x\\-]", "-", true, nil}, <add> {"[x\\-]", "z", false, nil}, <add> {"[\\-x]", "x", true, nil}, <add> {"[\\-x]", "-", true, nil}, <add> {"[\\-x]", "a", false, nil}, <add> {"[]a]", "]", false, filepath.ErrBadPattern}, <add> {"[-]", "-", false, filepath.ErrBadPattern}, <add> {"[x-]", "x", false, filepath.ErrBadPattern}, <add> 
{"[x-]", "-", false, filepath.ErrBadPattern}, <add> {"[x-]", "z", false, filepath.ErrBadPattern}, <add> {"[-x]", "x", false, filepath.ErrBadPattern}, <add> {"[-x]", "-", false, filepath.ErrBadPattern}, <add> {"[-x]", "a", false, filepath.ErrBadPattern}, <add> {"\\", "a", false, filepath.ErrBadPattern}, <add> {"[a-b-c]", "a", false, filepath.ErrBadPattern}, <add> {"[", "a", false, filepath.ErrBadPattern}, <add> {"[^", "a", false, filepath.ErrBadPattern}, <add> {"[^bc", "a", false, filepath.ErrBadPattern}, <add> {"a[", "a", false, filepath.ErrBadPattern}, // was nil but IMO its wrong <add> {"a[", "ab", false, filepath.ErrBadPattern}, <add> {"*x", "xxx", true, nil}, <add>} <add> <add>func errp(e error) string { <add> if e == nil { <add> return "<nil>" <add> } <add> return e.Error() <add>} <add> <add>// TestMatch test's our version of filepath.Match, called regexpMatch. <add>func TestMatch(t *testing.T) { <add> for _, tt := range matchTests { <add> pattern := tt.pattern <add> s := tt.s <add> if runtime.GOOS == "windows" { <add> if strings.Index(pattern, "\\") >= 0 { <add> // no escape allowed on windows. <add> continue <add> } <add> pattern = filepath.Clean(pattern) <add> s = filepath.Clean(s) <add> } <add> ok, err := regexpMatch(pattern, s) <add> if ok != tt.match || err != tt.err { <add> t.Fatalf("Match(%#q, %#q) = %v, %q want %v, %q", pattern, s, ok, errp(err), tt.match, errp(tt.err)) <add> } <add> } <add>}
4
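The core of the record above is `regexpMatch`, which extends `filepath.Match` semantics by translating the pattern into a regexp so that `**` can span any number of directories while `*` and `?` still stop at a path separator. The following is a simplified sketch of that translation (the helper name `globToRegexp` is illustrative; escaping, character classes, and Windows separators from the real patch are deliberately left out):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// globToRegexp converts a dockerignore-style glob into an anchored
// regexp: "**" (or "**/") matches zero or more directories, "*" and
// "?" match within a single path component, and "." is escaped.
func globToRegexp(pattern string) *regexp.Regexp {
	var b strings.Builder
	b.WriteString("^")
	for i := 0; i < len(pattern); i++ {
		switch c := pattern[i]; c {
		case '*':
			if i+1 < len(pattern) && pattern[i+1] == '*' {
				i++ // consume second '*'
				if i+1 < len(pattern) && pattern[i+1] == '/' {
					i++ // treat "**/" the same as "**"
				}
				b.WriteString("(.*/)?") // any number of directories, including none
			} else {
				b.WriteString("[^/]*") // '*' never crosses a separator
			}
		case '?':
			b.WriteString("[^/]") // any single char except the separator
		case '.':
			b.WriteString(`\.`) // '.' is literal in globs
		default:
			b.WriteByte(c)
		}
	}
	b.WriteString("$")
	return regexp.MustCompile(b.String())
}

func main() {
	re := globToRegexp("**/*.go")
	fmt.Println(re.MatchString("src/x.go"))         // true
	fmt.Println(re.MatchString("src/_vendor/v.go")) // true
	fmt.Println(re.MatchString("v.cc"))             // false
}
```

This matches the documented behavior in the patch: `**/*.go` excludes `.go` files in every directory, including the root of the build context.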
Javascript
Javascript
expose a core version of `$animateCss`
39b634e50a9ed140649d4be119a291debe527d55
<ide><path>angularFiles.js <ide> var angularFiles = { <ide> <ide> 'src/ng/anchorScroll.js', <ide> 'src/ng/animate.js', <add> 'src/ng/animateCss.js', <ide> 'src/ng/browser.js', <ide> 'src/ng/cacheFactory.js', <ide> 'src/ng/compile.js', <ide><path>src/AngularPublic.js <ide> <ide> $AnchorScrollProvider, <ide> $AnimateProvider, <add> $CoreAnimateCssProvider, <ide> $$CoreAnimateQueueProvider, <ide> $$CoreAnimateRunnerProvider, <ide> $BrowserProvider, <ide> function publishExternalAPI(angular) { <ide> $provide.provider({ <ide> $anchorScroll: $AnchorScrollProvider, <ide> $animate: $AnimateProvider, <add> $animateCss: $CoreAnimateCssProvider, <ide> $$animateQueue: $$CoreAnimateQueueProvider, <ide> $$AnimateRunner: $$CoreAnimateRunnerProvider, <ide> $browser: $BrowserProvider, <ide><path>src/ng/animateCss.js <add>'use strict'; <add> <add>/** <add> * @ngdoc service <add> * @name $animateCss <add> * @kind object <add> * <add> * @description <add> * This is the core version of `$animateCss`. By default, only when the `ngAnimate` is included, <add> * then the `$animateCss` service will actually perform animations. <add> * <add> * Click here {@link ngAnimate.$animateCss to read the documentation for $animateCss}. <add> */ <add>var $CoreAnimateCssProvider = function() { <add> this.$get = ['$$rAF', '$q', function($$rAF, $q) { <add> <add> var RAFPromise = function() {}; <add> RAFPromise.prototype = { <add> done: function(cancel) { <add> this.defer && this.defer[cancel === true ? 
'reject' : 'resolve'](); <add> }, <add> end: function() { <add> this.done(); <add> }, <add> cancel: function() { <add> this.done(true); <add> }, <add> getPromise: function() { <add> if (!this.defer) { <add> this.defer = $q.defer(); <add> } <add> return this.defer.promise; <add> }, <add> then: function(f1,f2) { <add> return this.getPromise().then(f1,f2); <add> }, <add> 'catch': function(f1) { <add> return this.getPromise().catch(f1); <add> }, <add> 'finally': function(f1) { <add> return this.getPromise().finally(f1); <add> } <add> }; <add> <add> return function(element, options) { <add> if (options.from) { <add> element.css(options.from); <add> options.from = null; <add> } <add> <add> var closed, runner = new RAFPromise(); <add> return { <add> start: run, <add> end: run <add> }; <add> <add> function run() { <add> $$rAF(function() { <add> close(); <add> if (!closed) { <add> runner.done(); <add> } <add> closed = true; <add> }); <add> return runner; <add> } <add> <add> function close() { <add> if (options.addClass) { <add> element.addClass(options.addClass); <add> options.addClass = null; <add> } <add> if (options.removeClass) { <add> element.removeClass(options.removeClass); <add> options.removeClass = null; <add> } <add> if (options.to) { <add> element.css(options.to); <add> options.to = null; <add> } <add> } <add> }; <add> }]; <add>}; <ide><path>test/ng/animateCssSpec.js <add>'use strict'; <add> <add>describe("$animateCss", function() { <add> <add> var triggerRAF, element; <add> beforeEach(inject(function($$rAF, $rootElement, $document) { <add> triggerRAF = function() { <add> $$rAF.flush(); <add> }; <add> <add> var body = jqLite($document[0].body); <add> element = jqLite('<div></div>'); <add> $rootElement.append(element); <add> body.append($rootElement); <add> })); <add> <add> describe("without animation", function() { <add> <add> it("should apply the provided [from] CSS to the element", inject(function($animateCss) { <add> $animateCss(element, { from: { height: 
'50px' }}).start(); <add> expect(element.css('height')).toBe('50px'); <add> })); <add> <add> it("should apply the provided [to] CSS to the element after the first frame", inject(function($animateCss) { <add> $animateCss(element, { to: { width: '50px' }}).start(); <add> expect(element.css('width')).not.toBe('50px'); <add> triggerRAF(); <add> expect(element.css('width')).toBe('50px'); <add> })); <add> <add> it("should apply the provided [addClass] CSS classes to the element after the first frame", inject(function($animateCss) { <add> $animateCss(element, { addClass: 'golden man' }).start(); <add> expect(element).not.toHaveClass('golden man'); <add> triggerRAF(); <add> expect(element).toHaveClass('golden man'); <add> })); <add> <add> it("should apply the provided [removeClass] CSS classes to the element after the first frame", inject(function($animateCss) { <add> element.addClass('silver'); <add> $animateCss(element, { removeClass: 'silver dude' }).start(); <add> expect(element).toHaveClass('silver'); <add> triggerRAF(); <add> expect(element).not.toHaveClass('silver'); <add> })); <add> <add> it("should return an animator with a start method which returns a promise", inject(function($animateCss) { <add> var promise = $animateCss(element, { addClass: 'cool' }).start(); <add> expect(isPromiseLike(promise)).toBe(true); <add> })); <add> <add> it("should return an animator with an end method which returns a promise", inject(function($animateCss) { <add> var promise = $animateCss(element, { addClass: 'cool' }).end(); <add> expect(isPromiseLike(promise)).toBe(true); <add> })); <add> <add> it("should only resolve the promise once both a digest and RAF have passed after start", <add> inject(function($animateCss, $rootScope) { <add> <add> var doneSpy = jasmine.createSpy(); <add> var runner = $animateCss(element, { addClass: 'cool' }).start(); <add> <add> runner.then(doneSpy); <add> expect(doneSpy).not.toHaveBeenCalled(); <add> <add> triggerRAF(); <add> 
expect(doneSpy).not.toHaveBeenCalled(); <add> <add> $rootScope.$digest(); <add> expect(doneSpy).toHaveBeenCalled(); <add> })); <add> <add> it("should resolve immediately if runner.end() is called", <add> inject(function($animateCss, $rootScope) { <add> <add> var doneSpy = jasmine.createSpy(); <add> var runner = $animateCss(element, { addClass: 'cool' }).start(); <add> <add> runner.then(doneSpy); <add> runner.end(); <add> expect(doneSpy).not.toHaveBeenCalled(); <add> <add> $rootScope.$digest(); <add> expect(doneSpy).toHaveBeenCalled(); <add> })); <add> <add> it("should reject immediately if runner.end() is called", <add> inject(function($animateCss, $rootScope) { <add> <add> var cancelSpy = jasmine.createSpy(); <add> var runner = $animateCss(element, { addClass: 'cool' }).start(); <add> <add> runner.catch(cancelSpy); <add> runner.cancel(); <add> expect(cancelSpy).not.toHaveBeenCalled(); <add> <add> $rootScope.$digest(); <add> expect(cancelSpy).toHaveBeenCalled(); <add> })); <add> <add> it("should not resolve after the next frame if the runner has already been cancelled", <add> inject(function($animateCss, $rootScope) { <add> <add> var doneSpy = jasmine.createSpy(); <add> var cancelSpy = jasmine.createSpy(); <add> var runner = $animateCss(element, { addClass: 'cool' }).start(); <add> <add> runner.then(doneSpy, cancelSpy); <add> runner.cancel(); <add> <add> $rootScope.$digest(); <add> expect(cancelSpy).toHaveBeenCalled(); <add> expect(doneSpy).not.toHaveBeenCalled(); <add> <add> triggerRAF(); <add> expect(cancelSpy).toHaveBeenCalled(); <add> expect(doneSpy).not.toHaveBeenCalled(); <add> })); <add> }); <add> <add>});
4
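The `RAFPromise` in the patch above illustrates a lazy deferred pattern: the underlying `$q` deferred is only created when a caller actually asks for the promise, and `done(true)` rejects while `done()` resolves. A minimal plain-Node sketch of that pattern (assumption: native `Promise` stands in for `$q`, and no rAF is involved):

```javascript
// Lazy promise "runner": the deferred is only allocated on first access,
// so a runner that nobody awaits settles without ever creating a Promise.
class Runner {
  done(cancel) {
    // Mirrors the `this.defer && ...` guard in the patch: settle only if
    // someone has asked for the promise.
    if (this._defer) {
      cancel === true ? this._defer.reject() : this._defer.resolve();
    }
    this._settled = true;
  }
  end() { this.done(); }
  cancel() { this.done(true); }
  getPromise() {
    if (!this._defer) {
      let resolve, reject;
      const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
      this._defer = { promise, resolve, reject };
    }
    return this._defer.promise;
  }
  then(f1, f2) { return this.getPromise().then(f1, f2); }
}
```

Repeated `getPromise()` calls return the same promise, and cancelling a runner that was never awaited allocates nothing.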
Javascript
Javascript
keep error codes in alphabetical order
fa73087fcf4bd14db7791120e688a0a508885b64
<ide><path>lib/internal/errors.js <ide> E('ERR_BUFFER_OUT_OF_BOUNDS', bufferOutOfBounds); <ide> E('ERR_CONSOLE_WRITABLE_STREAM', <ide> 'Console expects a writable stream instance for %s'); <ide> E('ERR_CPU_USAGE', 'Unable to obtain cpu usage %s'); <del>E('ERR_NO_LONGER_SUPPORTED', '%s is no longer supported'); <ide> E('ERR_FALSY_VALUE_REJECTION', 'Promise was rejected with falsy value'); <ide> E('ERR_HTTP_HEADERS_SENT', <ide> 'Cannot render headers after they are sent to the client'); <ide> E('ERR_MULTIPLE_CALLBACK', 'Callback called multiple times'); <ide> E('ERR_NAPI_CONS_FUNCTION', 'Constructor must be a function'); <ide> E('ERR_NAPI_CONS_PROTOTYPE_OBJECT', 'Constructor.prototype must be an object'); <ide> E('ERR_NO_CRYPTO', 'Node.js is not compiled with OpenSSL crypto support'); <add>E('ERR_NO_LONGER_SUPPORTED', '%s is no longer supported'); <ide> E('ERR_PARSE_HISTORY_DATA', 'Could not parse history data in %s'); <ide> E('ERR_SOCKET_ALREADY_BOUND', 'Socket is already bound'); <ide> E('ERR_SOCKET_BAD_TYPE',
1
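The patch above restores alphabetical order to the `E(...)` registrations in `lib/internal/errors.js` by moving `ERR_NO_LONGER_SUPPORTED` down past `ERR_NO_CRYPTO`. A quick hypothetical helper for checking that invariant over a list of codes:

```javascript
// Returns true when the error codes appear in non-descending
// lexicographic order, i.e. the order the file is expected to keep.
function isAlphabetical(codes) {
  return codes.every((code, i) => i === 0 || codes[i - 1] <= code);
}
```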
Javascript
Javascript
add space to blockdonationtext
92ffe117f1fe3469aef23a92d2f7956975f30b85
<ide><path>client/src/components/Donation/DonationModal.js <ide> function DonateModal({ show, block, isBlockDonation, closeDonationModal }) { <ide> <Row> <ide> {!closeLabel && ( <ide> <Col sm={10} smOffset={1} xs={12}> <del> <b>Nicely done. You just completed {blockNameify(block)}.</b> <add> <b>Nicely done. You just completed {blockNameify(block)}. </b> <ide> {donationText} <ide> </Col> <ide> )}
1
Python
Python
retrieve cpu info and append it to extra key
1ee6ea3f62e01010035310322618bb8128e62dcc
<ide><path>libcloud/compute/drivers/linode.py <ide> def list_sizes(self, location=None): <ide> for obj in data: <ide> n = NodeSize(id=obj["PLANID"], name=obj["LABEL"], ram=obj["RAM"], <ide> disk=(obj["DISK"] * 1024), bandwidth=obj["XFER"], <del> price=obj["PRICE"], driver=self.connection.driver) <add> price=obj["PRICE"], driver=self.connection.driver, <add> extra={'cpus':obj["CORES"]}) <ide> sizes.append(n) <ide> return sizes <ide> <ide><path>libcloud/compute/drivers/nephoscale.py <ide> def list_sizes(self, baremetal=False): <ide> name=name, <ide> ram=value.get('ram'), <ide> disk=value.get('storage'), <add> extra={'cpus':value.get('vcpus')}, <ide> bandwidth=None, <ide> price=self._get_size_price(size_id=str(value_id)), <ide> driver=self) <ide><path>libcloud/compute/drivers/softlayer.py <ide> def _to_bare_metal_size(self, size): <ide> price=size['totalMinimumHourlyFee'], <ide> ram=None, <ide> disk=None, <add> extra={'cpus': size['cpus']}, <ide> bandwidth=None, <ide> driver=self.connection.driver, <ide> )
3
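The pattern in the patch above is worth noting: rather than widening the `NodeSize` constructor with a driver-specific `cpus` field, each driver tucks provider-specific data under a generic `extra` key. A plain-JS stand-in for the Linode case (field names taken from the patch; the GB-to-MB conversion mirrors the `DISK * 1024` there):

```javascript
// Map a raw provider response object to a size record, surfacing the
// CPU count under `extra` instead of as a first-class field.
function toSize(apiObj) {
  return {
    id: apiObj.PLANID,
    ram: apiObj.RAM,
    disk: apiObj.DISK * 1024,        // provider reports GB, sizes store MB
    extra: { cpus: apiObj.CORES },   // provider-specific data goes here
  };
}
```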
Javascript
Javascript
add more truetype rewriting magic ('post' table)
c345a4c75e8383247c85bc5d75bba45a8c775a27
<ide><path>fonts.js <ide> var kMaxWaitForFontFace = 1000; <ide> * many fonts are loaded. <ide> */ <ide> var fontCount = 0; <add>var fontName = ""; <ide> <ide> /** <ide> * Hold a map of decoded fonts and of the standard fourteen Type1 fonts and <ide> var Fonts = { <ide> }, <ide> <ide> set active(aName) { <add> fontName = aName; <ide> this._active = this[aName]; <ide> }, <ide> <ide> var TrueType = function(aName, aFile, aProperties) { <ide> // If any tables are still in the array this means some required tables are <ide> // missing, which means that we need to rebuild the font in order to pass <ide> // the sanitizer. <del> if (requiredTables.length == 1 && requiredTables[0] == "OS/2") { <add> if (requiredTables.length && requiredTables[0] == "OS/2") { <ide> var OS2 = [ <ide> 0x00, 0x03, // version <ide> 0x02, 0x24, // xAvgCharWidth <ide> var TrueType = function(aName, aFile, aProperties) { <ide> <ide> // Replace the old CMAP table <ide> var rewrittedCMAP = this._createCMAPTable(glyphs); <del> var cmapDelta = rewrittedCMAP.length - originalCMAP.data.length; <add> var offsetDelta = rewrittedCMAP.length - originalCMAP.data.length; <ide> originalCMAP.data = rewrittedCMAP; <ide> <add> // Rewrite the 'post' table if needed <add> var postTable = null; <add> for (var i = 0; i < tables.length; i++) { <add> var table = tables[i]; <add> if (table.tag == "post") { <add> postTable = table; <add> break; <add> } <add> } <add> <add> if (!postTable) { <add> var post = [ <add> 0x00, 0x03, 0x00, 0x00, // Version number <add> 0x00, 0x00, 0x01, 0x00, // italicAngle <add> 0x00, 0x00, // underlinePosition <add> 0x00, 0x00, // underlineThickness <add> 0x00, 0x00, 0x00, 0x00, // isFixedPitch <add> 0x00, 0x00, 0x00, 0x00, // minMemType42 <add> 0x00, 0x00, 0x00, 0x00, // maxMemType42 <add> 0x00, 0x00, 0x00, 0x00, // minMemType1 <add> 0x00, 0x00, 0x00, 0x00 // maxMemType1 <add> ]; <add> <add> offsetDelta += post.length; <add> tables.unshift({ <add> tag: "post", <add> data: post <add> }); <add> 
} <add> <ide> // Create a new file to hold the new version of our truetype with a new <ide> // header and new offsets <ide> var stream = aFile.stream || aFile; <del> var ttf = new Uint8Array(stream.length + 16 + OS2.length + cmapDelta); <add> var ttf = new Uint8Array(stream.length + 1024); <ide> <ide> // The new numbers of tables will be the last one plus the num of missing <ide> // tables <ide><path>pdf.js <ide> var CanvasExtraState = (function() { <ide> <ide> const Encodings = { <ide> get ExpertEncoding() { <del> return shadow(this, "ExpertEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "ExpertEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","exclamsmall","Hungarumlautsmall",,"dollaroldstyle","dollarsuperior", <ide> "ampersandsmall","Acutesmall","parenleftsuperior","parenrightsuperior", <ide> "twodotenleader","onedotenleader","comma","hyphen","period","fraction", <ide> const Encodings = { <ide> ]); <ide> }, <ide> get MacExpertEncoding() { <del> return shadow(this, "MacExpertEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "MacExpertEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","exclamsmall","Hungarumlautsmall","centoldstyle","dollaroldstyle", <ide> "dollarsuperior","ampersandsmall","Acutesmall","parenleftsuperior", <ide> "parenrightsuperior","twodotenleader","onedotenleader","comma","hyphen","period", <ide> const Encodings = { <ide> ]); <ide> }, <ide> get MacRomanEncoding() { <del> return shadow(this, "MacRomanEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, 
null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "MacRomanEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","exclam","quotedbl","numbersign","dollar","percent","ampersand", <ide> "quotesingle","parenleft","parenright","asterisk","plus","comma","hyphen", <ide> "period","slash","zero","one","two","three","four","five","six","seven","eight", <ide> const Encodings = { <ide> ]); <ide> }, <ide> get StandardEncoding() { <del> return shadow(this, "StandardEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "StandardEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","exclam","quotedbl","numbersign","dollar","percent","ampersand", <ide> "quoteright","parenleft","parenright","asterisk","plus","comma","hyphen","period", <ide> "slash","zero","one","two","three","four","five","six","seven","eight","nine", <ide> const Encodings = { <ide> ]); <ide> }, <ide> get WinAnsiEncoding() { <del> return shadow(this, "WinAnsiEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "WinAnsiEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","exclam","quotedbl","numbersign","dollar","percent","ampersand", <ide> "quotesingle","parenleft","parenright","asterisk","plus","comma","hyphen", <ide> "period","slash","zero","one","two","three","four","five","six","seven","eight", <ide> const Encodings = { <ide> ]); <ide> }, <ide> get zapfDingbatsEncoding() { <del> return shadow(this, "zapfDingbatsEncoding", [ <del> null, null, null, null, null, null, null, null, null, null, null, <del> null, null, null, null, null, null, null, 
null, null, null, null, <del> null, null, null, null, null, null, null, null, null, null, <add> return shadow(this, "zapfDingbatsEncoding", [ ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, <ide> "space","a1","a2","a202","a3","a4","a5","a119","a118","a117","a11","a12","a13", <ide> "a14","a15","a16","a105","a17","a18","a19","a20","a21","a22","a23","a24","a25", <ide> "a26","a27","a28","a6","a7","a8","a9","a10","a29","a30","a31","a32","a33","a34", <ide> var CanvasGraphics = (function() { <ide> error("FontFile not found for font: " + fontName); <ide> fontFile = xref.fetchIfRef(fontFile); <ide> <del> // Generate the custom cmap of the font if needed <ide> var encodingMap = {}; <ide> var charset = []; <ide> if (fontDict.has("Encoding")) { <ide> var encoding = xref.fetchIfRef(fontDict.get("Encoding")); <ide> if (IsDict(encoding)) { <del> // Build an map between codes and glyphs <add> // Build a map between codes and glyphs <ide> var differences = encoding.get("Differences"); <ide> var index = 0; <ide> for (var j = 0; j < differences.length; j++) { <ide> var CanvasGraphics = (function() { <ide> } else if (fontDict.has("ToUnicode")) { <ide> var cmapObj = xref.fetchIfRef(fontDict.get("ToUnicode")); <ide> if (IsName(cmapObj)) { <del> error("ToUnicode basic cmap translation not implemented"); <del> encodingMap = {}; <add> error("ToUnicode file cmap translation not implemented"); <ide> } else if (IsStream(cmapObj)) { <add> var encoding = Encodings["WinAnsiEncoding"]; <add> var firstChar = xref.fetchIfRef(fontDict.get("FirstChar")); <add> for (var i = firstChar; i < encoding.length; i++) <add> encodingMap[i] = new Name(encoding[i]); <add> <ide> var tokens = []; <ide> var token = ""; <ide> <ide> var CanvasGraphics = (function() { <ide> var code = parseInt("0x" + tokens[j+2]); <ide> <ide> for (var k = startRange; k <= endRange; k++) { <del> encodingMap[k] = code; <del> charset.push(code++); <add> encodingMap[k] = GlyphsUnicode[encoding[code]]; <add> charset.push(encoding[code++]); <ide> } <ide> 
} <ide> break;
2
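The fallback `post` table the patch above synthesizes is a fixed 32-byte record: format 3.0 carries no glyph names, so zeroed metrics after the version number are enough to satisfy a font sanitizer. A sketch of building that byte array (same layout as the literal in the patch):

```javascript
// Build a minimal TrueType 'post' table, format 3.0: a 4-byte version
// followed by italicAngle, underlinePosition, underlineThickness,
// isFixedPitch and four memory-usage fields, all left at zero.
function makePostTable() {
  const post = new Uint8Array(32);   // zero-initialized
  post[0] = 0x00;
  post[1] = 0x03;                    // version 3.0 in 16.16 fixed-point
  return post;
}
```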
Text
Text
accept suggestions of rylan
17c25980f97e6f8176a5fa05ff7d4a207a7a09de
<ide><path>docs/Homebrew-Governance.md <ide> <ide> ## 2. Members <ide> <del>1. New members will be admitted by majority vote of the Project Leadership Committee (PLC) and added to the Homebrew organisation on GitHub. <add>1. New members will be admitted by an ordinary resolution of the PLC and added to the Homebrew organisation on GitHub. <ide> <ide> 2. Members may vote in all general elections and resolutions, hold office for Homebrew, and participate in all other membership functions. <ide> <ide> 3. Members are expected to remain active within Homebrew, and are required to affirm their continued interest in Homebrew membership annually. <ide> <del>4. Members may be dismissed by majority vote of the Project Leadership Committee and removed from the Homebrew organisation on GitHub. Removed members may be reinstated by the usual admission process. <add>4. A member may be removed from Homebrew by an ordinary resolution of the PLC. A removed member may be reinstated by the usual admission process. <ide> <ide> 5. All members will follow the [Homebrew Code of Conduct](https://github.com/Homebrew/.github/blob/HEAD/CODE_OF_CONDUCT.md#code-of-conduct). Changes to the code of conduct must be approved by the PLC. <ide> <ide> <ide> 4. The PLC will meet annually to review the status of all members and remove inactive members and those who have not affirmed a commitment to Homebrew in the past year. Voting in the AGM confirms that a member wishes to remain active with the project. After the AGM, the PLC will ask the members who did not vote whether they wish to remain active with the project. The PLC removes any members who don't respond to this second request after three weeks. <ide> <del>5. The PLC will appoint the members of the Technical Steering Committee (TSC). <add>5. The PLC will appoint the members of the TSC. <ide> <del>6. Any member may refer any question or dispute to the PLC. All technical matters should first be referred to the TSC. 
Non-technical matters may be referred directly to the PLC. Members will make a good faith effort to resolve any disputes prior to referral to the PLC. <add>6. Any member may refer any question or dispute to the PLC. All technical matters should first be referred to the TSC. Non-technical matters may be referred directly to the PLC. Members will make a good faith effort to resolve any disputes with compromise prior to referral to the PLC. <ide> <ide> 7. The PLC may meet by any mutually agreeable means, such as text chat, voice or video call, and in person. Members of the PLC must meet at least once per quarter. Members of the PLC must meet by video call or in person at least once per year. <ide> <ide> <ide> 4. No more than two employees of the same employer may serve on the TSC. <ide> <del>5. A member of the TSC, except the Project Leader, may be removed by an ordinary resolution of the PLC. <add>5. A member of the TSC, except the Project Leader, may be removed from the TSC by an ordinary resolution of the PLC.
1
Javascript
Javascript
use promise for queue()
bac576c433ac3b85dc50df567fe008589af99091
<ide><path>packager/src/lib/BatchProcessor.js <ide> type BatchProcessorOptions = { <ide> concurrency: number, <ide> }; <ide> <add>type QueueItem<TItem, TResult> = { <add> item: TItem, <add> reject: (error: mixed) => mixed, <add> resolve: (result: TResult) => mixed, <add>}; <add> <ide> /** <ide> * We batch items together trying to minimize their processing, for example as <ide> * network queries. For that we wait a small moment before processing a batch. <ide> type BatchProcessorOptions = { <ide> */ <ide> class BatchProcessor<TItem, TResult> { <ide> <add> _currentProcessCount: number; <ide> _options: BatchProcessorOptions; <ide> _processBatch: ProcessBatch<TItem, TResult>; <del> _queue: Array<{ <del> item: TItem, <del> callback: (error?: Error, result?: TResult) => mixed, <del> }>; <add> _queue: Array<QueueItem<TItem, TResult>>; <ide> _timeoutHandle: ?number; <del> _currentProcessCount: number; <ide> <ide> constructor( <ide> options: BatchProcessorOptions, <ide> class BatchProcessor<TItem, TResult> { <ide> const jobs = this._queue.splice(0, this._options.maximumItems); <ide> const items = jobs.map(job => job.item); <ide> this._processBatch(items, (error, results) => { <del> invariant( <del> results == null || results.length === items.length, <del> 'Not enough results returned.', <del> ); <del> for (let i = 0; i < items.length; ++i) { <del> jobs[i].callback(error, results && results[i]); <add> if (error != null) { <add> for (let i = 0; i < jobs.length; ++i) { <add> jobs[i].reject(error); <add> } <add> } else { <add> invariant(results != null, 'Neither results or error were returned.'); <add> invariant(results.length === items.length, 'Not enough results returned.'); <add> for (let i = 0; i < jobs.length; ++i) { <add> jobs[i].resolve(results[i]); <add> } <ide> } <ide> this._currentProcessCount--; <ide> this._processQueueOnceReady(); <ide> class BatchProcessor<TItem, TResult> { <ide> } <ide> } <ide> <del> queue( <del> item: TItem, <del> callback: (error?: Error, 
result?: TResult) => mixed, <del> ) { <del> this._queue.push({item, callback}); <del> this._processQueueOnceReady(); <add> queue(item: TItem): Promise<TResult> { <add> return new Promise((resolve, reject) => { <add> this._queue.push({item, resolve, reject}); <add> this._processQueueOnceReady(); <add> }); <ide> } <ide> <ide> } <ide><path>packager/src/lib/GlobalTransformCache.js <ide> class KeyURIFetcher { <ide> } <ide> <ide> fetch(key: string, callback: FetchURICallback) { <del> this._batchProcessor.queue(key, callback); <add> this._batchProcessor.queue(key).then( <add> res => process.nextTick(callback.bind(undefined, undefined, res)), <add> err => process.nextTick(callback.bind(undefined, err)), <add> ); <ide> } <ide> <ide> constructor(fetchResultURIs: FetchResultURIs, processError: (error: Error) => mixed) { <ide> class KeyResultStore { <ide> } <ide> <ide> store(key: string, result: CachedResult) { <del> this._batchProcessor.queue({key, result}, () => {}); <add> this._batchProcessor.queue({key, result}); <ide> } <ide> <ide> constructor(storeResults: StoreResults) { <ide><path>packager/src/lib/__mocks__/BatchProcessor.js <del>/** <del> * Copyright (c) 2015-present, Facebook, Inc. <del> * All rights reserved. <del> * <del> * This source code is licensed under the BSD-style license found in the <del> * LICENSE file in the root directory of this source tree. An additional grant <del> * of patent rights can be found in the PATENTS file in the same directory. 
<del> */ <del> <del>'use strict'; <del> <del>const {EventEmitter} = require('events'); <del> <del>class BatchProcessorMock { <del> <del> constructor(_, processBatch) { <del> this._processBatch = processBatch; <del> this._queue = []; <del> BatchProcessorMock.mocks.emit('new', this); <del> } <del> <del> queue(item, callback) { <del> this._queue.push([item, callback]); <del> } <del> <del> flushMock() { <del> const {_queue} = this; <del> this._queue = []; <del> process.nextTick(() => { <del> this._processBatch(_queue.map(pair => pair[0]), (error, res) => { <del> _queue.forEach((pair, i) => pair[1](error, res && res[i])); <del> }); <del> }); <del> } <del> <del>} <del> <del>BatchProcessorMock.mocks = new EventEmitter(); <del> <del>module.exports = BatchProcessorMock; <ide><path>packager/src/lib/__tests__/BatchProcessor-test.js <ide> describe('BatchProcessor', () => { <ide> }, 0); <ide> }); <ide> const results = []; <del> const callback = (error, res) => { <del> expect(error).toBe(null); <del> results.push(res); <del> }; <del> input.forEach(e => bp.queue(e, callback)); <add> input.forEach(e => bp.queue(e).then( <add> res => results.push(res), <add> error => process.nextTick(() => { throw error; }), <add> )); <ide> jest.runAllTimers(); <add> jest.runAllTicks(); <ide> expect(batches).toEqual([ <ide> [1, 2, 3], <ide> [4, 5, 6], <ide> describe('BatchProcessor', () => { <ide> it('report errors', () => { <ide> const error = new Error('oh noes'); <ide> const bp = new BatchProcessor(options, (items, callback) => { <del> process.nextTick(callback.bind(null, error)); <add> setTimeout(callback.bind(null, error), 0); <ide> }); <ide> let receivedError; <del> bp.queue('foo', err => { receivedError = err; }); <add> bp.queue('foo').catch( <add> err => { receivedError = err; }, <add> ); <ide> jest.runAllTimers(); <ide> jest.runAllTicks(); <ide> expect(receivedError).toBe(error);
4
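The core of the patch above is the callback-to-promise conversion: `queue()` stores each promise's `resolve`/`reject` alongside the item, and a processed batch either resolves every job with its result or rejects them all with the single batch error. A stripped-down sketch of that shape (assumption: `processBatch` keeps the original `(items, callback)` signature; batching delays and concurrency limits are omitted):

```javascript
// Promise-returning batch queue: jobs carry {item, resolve, reject}
// instead of a node-style callback.
class BatchQueue {
  constructor(processBatch) {
    this._processBatch = processBatch;
    this._queue = [];
  }
  queue(item) {
    return new Promise((resolve, reject) => {
      this._queue.push({ item, resolve, reject });
    });
  }
  flush() {
    const jobs = this._queue.splice(0);   // drain the pending jobs
    this._processBatch(jobs.map(j => j.item), (error, results) => {
      jobs.forEach((job, i) => {
        error != null ? job.reject(error) : job.resolve(results[i]);
      });
    });
  }
}
```

Fire-and-forget callers (like the `KeyResultStore.store` in the patch) can simply ignore the returned promise.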
Javascript
Javascript
change default iphone version
6d09df5b726ac951417b87a49bc345ebc9142951
<ide><path>local-cli/runIOS/runIOS.js <ide> module.exports = { <ide> { <ide> command: '--simulator [string]', <ide> description: 'Explicitly set simulator to use', <del> default: 'iPhone 6', <add> default: 'iPhone X', <ide> }, <ide> { <ide> command: '--configuration [string]',
1
PHP
PHP
keep default=false for boolean columns
e2e5dfb91ed83403d0715583ea1f0ada79d37c25
<ide><path>lib/Cake/Model/CakeSchema.php <ide> protected function _columns(&$Obj) { <ide> unset($value['limit']); <ide> } <ide> <del> if (isset($value['default']) && ($value['default'] === '' || $value['default'] === false)) { <add> if (isset($value['default']) && ($value['default'] === '' || ($value['default'] === false && $value['type'] != 'boolean'))) { <ide> unset($value['default']); <ide> } <ide> if (empty($value['length'])) {
1
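The guard the patch above adds to `CakeSchema::_columns()` is subtle: an empty-string or `false` default is normally noise and gets dropped, but `false` is a meaningful default for a boolean column and must survive. A JS sketch of that predicate (hypothetical function name; the column shape mirrors the PHP `$value` array):

```javascript
// Decide whether a column's `default` should be stripped from the
// generated schema: drop '' always, drop `false` only for non-booleans.
function shouldDropDefault(column) {
  if (!('default' in column)) return false;
  return column.default === '' ||
    (column.default === false && column.type !== 'boolean');
}
```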
PHP
PHP
fix bad merge
8b0227d5a2d22c67254ca22a43475f854f3c141f
<ide><path>tests/TestCase/TestSuite/IntegrationTestTraitTest.php <ide> use Cake\Event\EventManager; <ide> use Cake\Http\Middleware\EncryptedCookieMiddleware; <ide> use Cake\Http\Response; <del><<<<<<< HEAD <del>======= <del>use Cake\Routing\DispatcherFactory; <del>>>>>>>> 3.next <ide> use Cake\Routing\Router; <ide> use Cake\Routing\Route\InflectedRoute; <ide> use Cake\TestSuite\IntegrationTestCase; <ide> use Cake\Test\Fixture\AssertIntegrationTestCase; <ide> use Cake\Utility\Security; <ide> use PHPUnit\Framework\AssertionFailedError; <del><<<<<<< HEAD <ide> use PHPUnit\Framework\Error\Deprecated; <del>======= <ide> use Zend\Diactoros\UploadedFile; <del>>>>>>>> 3.next <ide> <ide> /** <ide> * Self test of the IntegrationTestCase
1
PHP
PHP
fix typo that removed '$' from '$key'
83f37e48a9e6ee9885196f210e8bcaeab970a0b8
<ide><path>lib/Cake/Routing/Route/CakeRoute.php <ide> protected function _parseArgs($args, $context) { <ide> $separatorIsPresent = strpos($param, $namedConfig['separator']) !== false; <ide> if ((!isset($this->options['named']) || !empty($this->options['named'])) && $separatorIsPresent) { <ide> list($key, $val) = explode($namedConfig['separator'], $param, 2); <del> $key = key; <add> $key = $key; <ide> $val = $val; <ide> $hasRule = isset($rules[$key]); <ide> $passIt = (!$hasRule && !$greedy) || ($hasRule && !$this->_matchNamed($val, $rules[$key], $context));
1
Go
Go
add debug for error in the server
b7937e268fcbc529a168164fc242edc56d51094c
<ide><path>api.go <ide> func parseMultipartForm(r *http.Request) error { <ide> } <ide> <ide> func httpError(w http.ResponseWriter, err error) { <add> statusCode := http.StatusInternalServerError <ide> if strings.HasPrefix(err.Error(), "No such") { <del> http.Error(w, err.Error(), http.StatusNotFound) <add> statusCode = http.StatusNotFound <ide> } else if strings.HasPrefix(err.Error(), "Bad parameter") { <del> http.Error(w, err.Error(), http.StatusBadRequest) <add> statusCode = http.StatusBadRequest <ide> } else if strings.HasPrefix(err.Error(), "Conflict") { <del> http.Error(w, err.Error(), http.StatusConflict) <add> statusCode = http.StatusConflict <ide> } else if strings.HasPrefix(err.Error(), "Impossible") { <del> http.Error(w, err.Error(), http.StatusNotAcceptable) <add> statusCode = http.StatusNotAcceptable <ide> } else if strings.HasPrefix(err.Error(), "Wrong login/password") { <del> http.Error(w, err.Error(), http.StatusUnauthorized) <add> statusCode = http.StatusUnauthorized <ide> } else if strings.Contains(err.Error(), "hasn't been activated") { <del> http.Error(w, err.Error(), http.StatusForbidden) <del> } else { <del> http.Error(w, err.Error(), http.StatusInternalServerError) <add> statusCode = http.StatusForbidden <ide> } <add> utils.Debugf("[error %d] %s", statusCode, err) <add> http.Error(w, err.Error(), statusCode) <ide> } <ide> <ide> func writeJSON(w http.ResponseWriter, b []byte) {
1
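The refactor in the Go patch above computes the status code first (defaulting to 500) so a single debug line can log it before the error is written to the client. A plain-JS sketch of the same prefix-to-status mapping (the prefixes and codes are taken from the patch; the table-driven form is a restructuring, not what the Go code does):

```javascript
// Map an error message to an HTTP status code by prefix, falling back
// to 500 Internal Server Error.
const PREFIX_STATUS = [
  ['No such', 404],
  ['Bad parameter', 400],
  ['Conflict', 409],
  ['Impossible', 406],
  ['Wrong login/password', 401],
];

function statusFor(message) {
  for (const [prefix, code] of PREFIX_STATUS) {
    if (message.startsWith(prefix)) return code;
  }
  // This one matches anywhere in the message, not just the prefix.
  if (message.includes("hasn't been activated")) return 403;
  return 500;
}
```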
Javascript
Javascript
switch textinput to uselayouteffect
208ebed074cfbc25125e709c1645844247601f2f
<ide><path>Libraries/Components/TextInput/TextInput.js <ide> import type {PressEvent} from '../../Types/CoreEventTypes'; <ide> import type {HostComponent} from '../../Renderer/shims/ReactNativeTypes'; <ide> import type {TextInputNativeCommands} from './TextInputNativeCommands'; <ide> <del>const {useEffect, useRef, useState} = React; <add>const {useLayoutEffect, useRef, useState} = React; <ide> <ide> type ReactRefSetter<T> = {current: null | T, ...} | ((ref: null | T) => mixed); <ide> <ide> function InternalTextInput(props: Props): React.Node { <ide> // This is necessary in case native updates the text and JS decides <ide> // that the update should be ignored and we should stick with the value <ide> // that we have in JS. <del> useEffect(() => { <add> useLayoutEffect(() => { <ide> const nativeUpdate = {}; <ide> <ide> if (lastNativeText !== props.value && typeof props.value === 'string') { <ide> function InternalTextInput(props: Props): React.Node { <ide> viewCommands, <ide> ]); <ide> <del> useEffect(() => { <add> useLayoutEffect(() => { <ide> const inputRefValue = inputRef.current; <ide> <ide> if (inputRefValue != null) { <ide> TextInputState.registerInput(inputRefValue); <ide> <ide> return () => { <ide> TextInputState.unregisterInput(inputRefValue); <add> <add> if (TextInputState.currentlyFocusedInput() === inputRefValue) { <add> nullthrows(inputRefValue).blur(); <add> } <ide> }; <ide> } <ide> }, [inputRef]); <ide> <del> useEffect(() => { <del> // When unmounting we need to blur the input <del> return () => { <del> if (isFocused()) { <del> nullthrows(inputRef.current).blur(); <del> } <del> }; <del> }, [inputRef]); <del> <ide> function clear(): void { <ide> if (inputRef.current != null) { <ide> viewCommands.setTextAndSelection(
1
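Besides switching to `useLayoutEffect`, the patch above folds two cleanups into one: the same teardown that unregisters the input also blurs it, but only if it is still the currently focused input. A framework-free sketch of that consolidated cleanup (assumption: `registry` is a hypothetical stand-in for `TextInputState`, and `blurred` is a test flag, not a real ref method):

```javascript
// Register an input and return a single cleanup that both unregisters
// it and blurs it when it is still the focused one — mirroring the
// merged effect cleanup in the patch.
function makeCleanup(registry, input) {
  registry.inputs.add(input);
  return function cleanup() {
    registry.inputs.delete(input);
    if (registry.focused === input) {
      input.blurred = true;    // stand-in for inputRefValue.blur()
      registry.focused = null;
    }
  };
}
```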
Text
Text
add missing link
4f0220b7805a320e33ebee4bea21bfffbc12c2ee
<ide><path>docs/topics/third-party-resources.md <ide> To submit new content, [open an issue][drf-create-issue] or [create a pull reque <ide> [drf-create-issue]: https://github.com/tomchristie/django-rest-framework/issues/new <ide> [authentication]: ../api-guide/authentication.md <ide> [permissions]: ../api-guide/permissions.md <add>[third-party-resources]: ../topics/third-party-resources/#existing-third-party-packages <ide> [discussion-group]: https://groups.google.com/forum/#!forum/django-rest-framework <ide> [djangorestframework-digestauth]: https://github.com/juanriaza/django-rest-framework-digestauth <ide> [django-oauth-toolkit]: https://github.com/evonove/django-oauth-toolkit
1
Python
Python
add amp for albert
31b0560ab4e5d5d3652dd931c11e630dbfbb3900
<ide><path>src/transformers/models/albert/modeling_tf_albert.py
<ide> # limitations under the License.
<ide> """ TF 2.0 ALBERT model. """
<ide>
<del>
<add>import math
<ide> from dataclasses import dataclass
<del>from typing import Dict, Optional, Tuple
<add>from typing import Dict, Optional, Tuple, Union
<ide>
<add>import numpy as np
<ide> import tensorflow as tf
<ide>
<ide> from ...activations_tf import get_tf_activation
<ide> )
<ide> from ...modeling_tf_utils import (
<ide>     TFMaskedLanguageModelingLoss,
<add>    TFModelInputType,
<ide>     TFMultipleChoiceLoss,
<ide>     TFPreTrainedModel,
<ide>     TFQuestionAnsweringLoss,
<ide> ]
<ide>
<ide>
<add>class TFAlbertPreTrainingLoss:
<add>    """
<add>    Loss function suitable for ALBERT pretraining, that is, the task of pretraining a language model by combining SOP +
<add>    MLM. .. note:: Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.
<add>    """
<add>
<add>    def compute_loss(self, labels: tf.Tensor, logits: tf.Tensor) -> tf.Tensor:
<add>        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
<add>            from_logits=True, reduction=tf.keras.losses.Reduction.NONE
<add>        )
<add>        # make sure only labels that are not equal to -100
<add>        # are taken into account as loss
<add>        masked_lm_active_loss = tf.not_equal(tf.reshape(tensor=labels["labels"], shape=(-1,)), -100)
<add>        masked_lm_reduced_logits = tf.boolean_mask(
<add>            tensor=tf.reshape(tensor=logits[0], shape=(-1, shape_list(logits[0])[2])),
<add>            mask=masked_lm_active_loss,
<add>        )
<add>        masked_lm_labels = tf.boolean_mask(
<add>            tensor=tf.reshape(tensor=labels["labels"], shape=(-1,)), mask=masked_lm_active_loss
<add>        )
<add>        sentence_order_active_loss = tf.not_equal(tf.reshape(tensor=labels["sentence_order_label"], shape=(-1,)), -100)
<add>        sentence_order_reduced_logits = tf.boolean_mask(
<add>            tensor=tf.reshape(tensor=logits[1], shape=(-1, 2)), mask=sentence_order_active_loss
<add>        )
<add>        sentence_order_label = tf.boolean_mask(
<add>            tensor=tf.reshape(tensor=labels["sentence_order_label"], shape=(-1,)), mask=sentence_order_active_loss
<add>        )
<add>        masked_lm_loss = loss_fn(y_true=masked_lm_labels, y_pred=masked_lm_reduced_logits)
<add>        sentence_order_loss = loss_fn(y_true=sentence_order_label, y_pred=sentence_order_reduced_logits)
<add>        masked_lm_loss = tf.reshape(tensor=masked_lm_loss, shape=(-1, shape_list(sentence_order_loss)[0]))
<add>        masked_lm_loss = tf.reduce_mean(input_tensor=masked_lm_loss, axis=0)
<add>
<add>        return masked_lm_loss + sentence_order_loss
<add>
<add>
<ide> class TFAlbertEmbeddings(tf.keras.layers.Layer):
<ide>     """Construct the embeddings from word, position and token_type embeddings."""
<ide>
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<ide>         self.vocab_size = config.vocab_size
<ide> def build(self, input_shape: tf.TensorShape):
<ide>         self.weight = self.add_weight(
<ide>             name="weight",
<ide>             shape=[self.vocab_size, self.embedding_size],
<del>            initializer=get_initializer(initializer_range=self.initializer_range),
<add>            initializer=get_initializer(self.initializer_range),
<ide>         )
<ide>
<ide>         with tf.name_scope("token_type_embeddings"):
<ide>             self.token_type_embeddings = self.add_weight(
<ide>                 name="embeddings",
<ide>                 shape=[self.type_vocab_size, self.embedding_size],
<del>                initializer=get_initializer(initializer_range=self.initializer_range),
<add>                initializer=get_initializer(self.initializer_range),
<ide>             )
<ide>
<ide>         with tf.name_scope("position_embeddings"):
<ide>             self.position_embeddings = self.add_weight(
<ide>                 name="embeddings",
<ide>                 shape=[self.max_position_embeddings, self.embedding_size],
<del>                initializer=get_initializer(initializer_range=self.initializer_range),
<add>                initializer=get_initializer(self.initializer_range),
<ide>             )
<ide>
<ide>         super().build(input_shape)
<ide> def call(
<ide>         return final_embeddings
<ide>
<ide>
<del>class TFAlbertSelfOutput(tf.keras.layers.Layer):
<del>    def __init__(self, config, **kwargs):
<del>        super().__init__(**kwargs)
<del>        self.dense = tf.keras.layers.Dense(
<del>            config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense"
<del>        )
<del>        self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm")
<del>        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
<del>
<del>    def call(self, hidden_states, input_tensor, training=False):
<del>        hidden_states = self.dense(hidden_states)
<del>        hidden_states = self.dropout(hidden_states, training=training)
<del>        hidden_states = self.LayerNorm(hidden_states + input_tensor)
<del>        return hidden_states
<del>
<del>
<ide> class TFAlbertAttention(tf.keras.layers.Layer):
<ide>     """ Contains the complete attention sublayer, including both dropouts and layer norm. """
<ide>
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<del>        self.hidden_size = config.hidden_size
<del>        self.output_attentions = config.output_attentions
<add>        if config.hidden_size % config.num_attention_heads != 0:
<add>            raise ValueError(
<add>                f"The hidden size ({config.hidden_size}) is not a multiple of the number "
<add>                f"of attention heads ({config.num_attention_heads})"
<add>            )
<add>
<ide>         self.num_attention_heads = config.num_attention_heads
<del>        assert config.hidden_size % config.num_attention_heads == 0
<ide>         self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
<ide>         self.all_head_size = self.num_attention_heads * self.attention_head_size
<add>        self.sqrt_att_head_size = math.sqrt(self.attention_head_size)
<add>        self.output_attentions = config.output_attentions
<add>
<ide>         self.query = tf.keras.layers.Dense(
<del>            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query"
<add>            units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query"
<ide>         )
<ide>         self.key = tf.keras.layers.Dense(
<del>            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key"
<add>            units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key"
<ide>         )
<ide>         self.value = tf.keras.layers.Dense(
<del>            self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value"
<add>            units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value"
<ide>         )
<ide>         self.dense = tf.keras.layers.Dense(
<del>            config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense"
<add>            units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense"
<ide>         )
<ide>         self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm")
<del>        self.pruned_heads = set()
<ide>         # Two different dropout probabilities; see https://github.com/google-research/albert/blob/master/modeling.py#L971-L993
<del>        self.attention_dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)
<del>        self.output_dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
<add>        self.attention_dropout = tf.keras.layers.Dropout(rate=config.attention_probs_dropout_prob)
<add>        self.output_dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob)
<ide>
<del>    def transpose_for_scores(self, x, batch_size):
<add>    def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor:
<ide>         # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size]
<del>        x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size))
<add>        tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size))
<ide>
<del>        return tf.transpose(x, perm=[0, 2, 1, 3])
<add>        # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size]
<add>        return tf.transpose(tensor, perm=[0, 2, 1, 3])
<ide>
<del>    def prune_heads(self, heads):
<del>        raise NotImplementedError
<del>
<del>    def call(self, input_tensor, attention_mask, head_mask, output_attentions, training=False):
<add>    def call(
<add>        self,
<add>        input_tensor: tf.Tensor,
<add>        attention_mask: tf.Tensor,
<add>        head_mask: tf.Tensor,
<add>        output_attentions: bool,
<add>        training: bool = False,
<add>    ) -> Tuple[tf.Tensor]:
<ide>         batch_size = shape_list(input_tensor)[0]
<del>        mixed_query_layer = self.query(input_tensor)
<del>        mixed_key_layer = self.key(input_tensor)
<del>        mixed_value_layer = self.value(input_tensor)
<del>
<add>        mixed_query_layer = self.query(inputs=input_tensor)
<add>        mixed_key_layer = self.key(inputs=input_tensor)
<add>        mixed_value_layer = self.value(inputs=input_tensor)
<ide>         query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)
<ide>         key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)
<ide>         value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)
<ide>
<ide>         # Take the dot product between "query" and "key" to get the raw attention scores.
<ide>         # (batch size, num_heads, seq_len_q, seq_len_k)
<ide>         attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
<del>        # scale attention_scores
<del>        dk = tf.cast(shape_list(key_layer)[-1], tf.float32)
<del>        attention_scores = attention_scores / tf.math.sqrt(dk)
<add>        dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype)
<add>        attention_scores = tf.divide(attention_scores, dk)
<ide>
<ide>         if attention_mask is not None:
<ide>             # Apply the attention mask is (precomputed for all layers in TFAlbertModel call() function)
<del>            attention_scores = attention_scores + attention_mask
<add>            attention_scores = tf.add(attention_scores, attention_mask)
<ide>
<ide>         # Normalize the attention scores to probabilities.
<del>        attention_probs = tf.nn.softmax(attention_scores, axis=-1)
<add>        attention_probs = tf.nn.softmax(logits=attention_scores, axis=-1)
<ide>
<ide>         # This is actually dropping out entire tokens to attend to, which might
<ide>         # seem a bit unusual, but is taken from the original Transformer paper.
<del>        attention_probs = self.attention_dropout(attention_probs, training=training)
<add>        attention_probs = self.attention_dropout(inputs=attention_probs, training=training)
<ide>
<ide>         # Mask heads if we want to
<ide>         if head_mask is not None:
<del>            attention_probs = attention_probs * head_mask
<add>            attention_probs = tf.multiply(attention_probs, head_mask)
<ide>
<ide>         context_layer = tf.matmul(attention_probs, value_layer)
<del>
<ide>         context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])
<del>        context_layer = tf.reshape(
<del>            context_layer, (batch_size, -1, self.all_head_size)
<del>        )  # (batch_size, seq_len_q, all_head_size)
<ide>
<add>        # (batch_size, seq_len_q, all_head_size)
<add>        context_layer = tf.reshape(tensor=context_layer, shape=(batch_size, -1, self.all_head_size))
<ide>         self_outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
<del>
<ide>         hidden_states = self_outputs[0]
<del>
<del>        hidden_states = self.dense(hidden_states)
<del>        hidden_states = self.output_dropout(hidden_states, training=training)
<del>        attention_output = self.LayerNorm(hidden_states + input_tensor)
<add>        hidden_states = self.dense(inputs=hidden_states)
<add>        hidden_states = self.output_dropout(inputs=hidden_states, training=training)
<add>        attention_output = self.LayerNorm(inputs=hidden_states + input_tensor)
<ide>
<ide>         # add attentions if we output them
<ide>         outputs = (attention_output,) + self_outputs[1:]
<ide> def call(self, input_tensor, attention_mask, head_mask, output_attentions, train
<ide>
<ide>
<ide> class TFAlbertLayer(tf.keras.layers.Layer):
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<del>        self.attention = TFAlbertAttention(config, name="attention")
<ide>
<add>        self.attention = TFAlbertAttention(config, name="attention")
<ide>         self.ffn = tf.keras.layers.Dense(
<del>            config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn"
<add>            units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn"
<ide>         )
<ide>
<ide>         if isinstance(config.hidden_act, str):
<ide> def __init__(self, config, **kwargs):
<ide>             self.activation = config.hidden_act
<ide>
<ide>         self.ffn_output = tf.keras.layers.Dense(
<del>            config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn_output"
<add>            units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn_output"
<ide>         )
<ide>         self.full_layer_layer_norm = tf.keras.layers.LayerNormalization(
<ide>             epsilon=config.layer_norm_eps, name="full_layer_layer_norm"
<ide>         )
<del>        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
<add>        self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob)
<ide>
<del>    def call(self, hidden_states, attention_mask, head_mask, output_attentions, training=False):
<add>    def call(
<add>        self,
<add>        hidden_states: tf.Tensor,
<add>        attention_mask: tf.Tensor,
<add>        head_mask: tf.Tensor,
<add>        output_attentions: bool,
<add>        training: bool = False,
<add>    ) -> Tuple[tf.Tensor]:
<ide>         attention_outputs = self.attention(
<del>            hidden_states, attention_mask, head_mask, output_attentions, training=training
<add>            input_tensor=hidden_states,
<add>            attention_mask=attention_mask,
<add>            head_mask=head_mask,
<add>            output_attentions=output_attentions,
<add>            training=training,
<ide>         )
<del>        ffn_output = self.ffn(attention_outputs[0])
<add>        ffn_output = self.ffn(inputs=attention_outputs[0])
<ide>         ffn_output = self.activation(ffn_output)
<del>        ffn_output = self.ffn_output(ffn_output)
<del>        ffn_output = self.dropout(ffn_output, training=training)
<del>
<del>        hidden_states = self.full_layer_layer_norm(ffn_output + attention_outputs[0])
<add>        ffn_output = self.ffn_output(inputs=ffn_output)
<add>        ffn_output = self.dropout(inputs=ffn_output, training=training)
<add>        hidden_states = self.full_layer_layer_norm(inputs=ffn_output + attention_outputs[0])
<ide>
<ide>         # add attentions if we output them
<ide>         outputs = (hidden_states,) + attention_outputs[1:]
<add>
<ide>         return outputs
<ide>
<ide>
<ide> class TFAlbertLayerGroup(tf.keras.layers.Layer):
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<del>        self.output_attentions = config.output_attentions
<del>        self.output_hidden_states = config.output_hidden_states
<ide>         self.albert_layers = [
<ide>             TFAlbertLayer(config, name="albert_layers_._{}".format(i)) for i in range(config.inner_group_num)
<ide>         ]
<ide>
<del>    def call(self, hidden_states, attention_mask, head_mask, output_attentions, output_hidden_states, training=False):
<del>        layer_hidden_states = ()
<del>        layer_attentions = ()
<add>    def call(
<add>        self,
<add>        hidden_states: tf.Tensor,
<add>        attention_mask: tf.Tensor,
<add>        head_mask: tf.Tensor,
<add>        output_attentions: bool,
<add>        output_hidden_states: bool,
<add>        training: bool = False,
<add>    ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]:
<add>        layer_hidden_states = () if output_hidden_states else None
<add>        layer_attentions = () if output_attentions else None
<ide>
<ide>         for layer_index, albert_layer in enumerate(self.albert_layers):
<add>            if output_hidden_states:
<add>                layer_hidden_states = layer_hidden_states + (hidden_states,)
<add>
<ide>             layer_output = albert_layer(
<del>                hidden_states, attention_mask, head_mask[layer_index], output_attentions, training=training
<add>                hidden_states=hidden_states,
<add>                attention_mask=attention_mask,
<add>                head_mask=head_mask[layer_index],
<add>                output_attentions=output_attentions,
<add>                training=training,
<ide>             )
<ide>             hidden_states = layer_output[0]
<ide>
<ide>             if output_attentions:
<ide>                 layer_attentions = layer_attentions + (layer_output[1],)
<ide>
<del>            if output_hidden_states:
<del>                layer_hidden_states = layer_hidden_states + (hidden_states,)
<del>
<del>        outputs = (hidden_states,)
<add>        # Add last layer
<ide>         if output_hidden_states:
<del>            outputs = outputs + (layer_hidden_states,)
<del>        if output_attentions:
<del>            outputs = outputs + (layer_attentions,)
<del>        # last-layer hidden state, (layer hidden states), (layer attentions)
<del>        return outputs
<add>            layer_hidden_states = layer_hidden_states + (hidden_states,)
<add>
<add>        return tuple(v for v in [hidden_states, layer_hidden_states, layer_attentions] if v is not None)
<ide>
<ide>
<ide> class TFAlbertTransformer(tf.keras.layers.Layer):
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<ide>         self.num_hidden_layers = config.num_hidden_layers
<ide>         self.num_hidden_groups = config.num_hidden_groups
<add>        # Number of layers in a hidden group
<add>        self.layers_per_group = int(config.num_hidden_layers / config.num_hidden_groups)
<ide>         self.embedding_hidden_mapping_in = tf.keras.layers.Dense(
<del>            config.hidden_size,
<add>            units=config.hidden_size,
<ide>             kernel_initializer=get_initializer(config.initializer_range),
<ide>             name="embedding_hidden_mapping_in",
<ide>         )
<ide> def __init__(self, config, **kwargs):
<ide>
<ide>     def call(
<ide>         self,
<del>        hidden_states,
<del>        attention_mask,
<del>        head_mask,
<del>        output_attentions,
<del>        output_hidden_states,
<del>        return_dict,
<del>        training=False,
<del>    ):
<del>        hidden_states = self.embedding_hidden_mapping_in(hidden_states)
<add>        hidden_states: tf.Tensor,
<add>        attention_mask: tf.Tensor,
<add>        head_mask: tf.Tensor,
<add>        output_attentions: bool,
<add>        output_hidden_states: bool,
<add>        return_dict: bool,
<add>        training: bool = False,
<add>    ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]:
<add>        hidden_states = self.embedding_hidden_mapping_in(inputs=hidden_states)
<ide>         all_attentions = () if output_attentions else None
<ide>         all_hidden_states = (hidden_states,) if output_hidden_states else None
<ide>
<ide>         for i in range(self.num_hidden_layers):
<del>            # Number of layers in a hidden group
<del>            layers_per_group = int(self.num_hidden_layers / self.num_hidden_groups)
<del>
<ide>             # Index of the hidden group
<ide>             group_idx = int(i / (self.num_hidden_layers / self.num_hidden_groups))
<del>
<ide>             layer_group_output = self.albert_layer_groups[group_idx](
<del>                hidden_states,
<del>                attention_mask,
<del>                head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group],
<del>                output_attentions,
<del>                output_hidden_states,
<add>                hidden_states=hidden_states,
<add>                attention_mask=attention_mask,
<add>                head_mask=head_mask[group_idx * self.layers_per_group : (group_idx + 1) * self.layers_per_group],
<add>                output_attentions=output_attentions,
<add>                output_hidden_states=output_hidden_states,
<ide>                 training=training,
<ide>             )
<ide>             hidden_states = layer_group_output[0]
<ide> def call(
<ide>
<ide>         if not return_dict:
<ide>             return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None)
<add>
<ide>         return TFBaseModelOutput(
<ide>             last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions
<ide>         )
<ide> class TFAlbertPreTrainedModel(TFPreTrainedModel):
<ide>
<ide>
<ide> class TFAlbertMLMHead(tf.keras.layers.Layer):
<del>    def __init__(self, config, input_embeddings, **kwargs):
<add>    def __init__(self, config: AlbertConfig, input_embeddings: tf.keras.layers.Layer, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<ide>         self.vocab_size = config.vocab_size
<ide> def __init__(self, config, input_embeddings, **kwargs):
<ide>         # an output-only bias for each token.
<ide>         self.decoder = input_embeddings
<ide>
<del>    def build(self, input_shape):
<add>    def build(self, input_shape: tf.TensorShape):
<ide>         self.bias = self.add_weight(shape=(self.vocab_size,), initializer="zeros", trainable=True, name="bias")
<ide>         self.decoder_bias = self.add_weight(
<ide>             shape=(self.vocab_size,), initializer="zeros", trainable=True, name="decoder/bias"
<ide>         )
<ide>
<ide>         super().build(input_shape)
<ide>
<del>    def get_output_embeddings(self):
<add>    def get_output_embeddings(self) -> tf.keras.layers.Layer:
<ide>         return self.decoder
<ide>
<del>    def set_output_embeddings(self, value):
<add>    def set_output_embeddings(self, value: tf.Variable):
<ide>         self.decoder.weight = value
<ide>         self.decoder.vocab_size = shape_list(value)[0]
<ide>
<del>    def get_bias(self):
<add>    def get_bias(self) -> Dict[str, tf.Variable]:
<ide>         return {"bias": self.bias, "decoder_bias": self.decoder_bias}
<ide>
<del>    def set_bias(self, value):
<add>    def set_bias(self, value: tf.Variable):
<ide>         self.bias = value["bias"]
<ide>         self.decoder_bias = value["decoder_bias"]
<ide>         self.vocab_size = shape_list(value["bias"])[0]
<ide>
<del>    def call(self, hidden_states):
<add>    def call(self, hidden_states: tf.Tensor) -> tf.Tensor:
<ide>         hidden_states = self.dense(inputs=hidden_states)
<ide>         hidden_states = self.activation(hidden_states)
<ide>         hidden_states = self.LayerNorm(inputs=hidden_states)
<ide> def call(self, hidden_states):
<ide> class TFAlbertMainLayer(tf.keras.layers.Layer):
<ide>     config_class = AlbertConfig
<ide>
<del>    def __init__(self, config, add_pooling_layer=True, **kwargs):
<add>    def __init__(self, config: AlbertConfig, add_pooling_layer: bool = True, **kwargs):
<ide>         super().__init__(**kwargs)
<del>        self.num_hidden_layers = config.num_hidden_layers
<add>
<ide>         self.config = config
<ide>
<ide>         self.embeddings = TFAlbertEmbeddings(config, name="embeddings")
<ide>         self.encoder = TFAlbertTransformer(config, name="encoder")
<ide>         self.pooler = (
<ide>             tf.keras.layers.Dense(
<del>                config.hidden_size,
<add>                units=config.hidden_size,
<ide>                 kernel_initializer=get_initializer(config.initializer_range),
<ide>                 activation="tanh",
<ide>                 name="pooler",
<ide> def __init__(self, config, add_pooling_layer=True, **kwargs):
<ide>             else None
<ide>         )
<ide>
<del>    def get_input_embeddings(self):
<add>    def get_input_embeddings(self) -> tf.keras.layers.Layer:
<ide>         return self.embeddings
<ide>
<del>    def set_input_embeddings(self, value):
<add>    def set_input_embeddings(self, value: tf.Variable):
<ide>         self.embeddings.weight = value
<ide>         self.embeddings.vocab_size = shape_list(value)[0]
<ide>
<ide> class PreTrainedModel
<ide>
<ide>     def call(
<ide>         self,
<del>        input_ids=None,
<del>        attention_mask=None,
<del>        token_type_ids=None,
<del>        position_ids=None,
<del>        head_mask=None,
<del>        inputs_embeds=None,
<del>        output_attentions=None,
<del>        output_hidden_states=None,
<del>        return_dict=None,
<del>        training=False,
<add>        input_ids: Optional[TFModelInputType] = None,
<add>        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        output_attentions: Optional[bool] = None,
<add>        output_hidden_states: Optional[bool] = None,
<add>        return_dict: Optional[bool] = None,
<add>        training: bool = False,
<ide>         **kwargs,
<del>    ):
<add>    ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]:
<ide>         inputs = input_processing(
<ide>             func=self.call,
<ide>             config=self.config,
<ide> def call(
<ide>             raise ValueError("You have to specify either input_ids or inputs_embeds")
<ide>
<ide>         if inputs["attention_mask"] is None:
<del>            inputs["attention_mask"] = tf.fill(input_shape, 1)
<add>            inputs["attention_mask"] = tf.fill(dims=input_shape, value=1)
<ide>
<ide>         if inputs["token_type_ids"] is None:
<del>            inputs["token_type_ids"] = tf.fill(input_shape, 0)
<add>            inputs["token_type_ids"] = tf.fill(dims=input_shape, value=0)
<add>
<add>        embedding_output = self.embeddings(
<add>            input_ids=inputs["input_ids"],
<add>            position_ids=inputs["position_ids"],
<add>            token_type_ids=inputs["token_type_ids"],
<add>            inputs_embeds=inputs["inputs_embeds"],
<add>            training=inputs["training"],
<add>        )
<ide>
<ide>         # We create a 3D attention mask from a 2D tensor mask.
<ide>         # Sizes are [batch_size, 1, 1, to_seq_length]
<ide> def call(
<ide>         # positions we want to attend and -10000.0 for masked positions.
<ide>         # Since we are adding it to the raw scores before the softmax, this is
<ide>         # effectively the same as removing these entirely.
<del>
<del>        extended_attention_mask = tf.cast(extended_attention_mask, tf.float32)
<del>        extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
<add>        extended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype)
<add>        one_cst = tf.constant(1.0, dtype=embedding_output.dtype)
<add>        ten_thousand_cst = tf.constant(-10000.0, dtype=embedding_output.dtype)
<add>        extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst)
<ide>
<ide>         # Prepare head mask if needed
<ide>         # 1.0 in head_mask indicate we keep the head
<ide> def call(
<ide>         if inputs["head_mask"] is not None:
<ide>             raise NotImplementedError
<ide>         else:
<del>            inputs["head_mask"] = [None] * self.num_hidden_layers
<add>            inputs["head_mask"] = [None] * self.config.num_hidden_layers
<ide>
<del>        embedding_output = self.embeddings(
<del>            inputs["input_ids"],
<del>            inputs["position_ids"],
<del>            inputs["token_type_ids"],
<del>            inputs["inputs_embeds"],
<del>            training=inputs["training"],
<del>        )
<ide>         encoder_outputs = self.encoder(
<del>            embedding_output,
<del>            extended_attention_mask,
<del>            inputs["head_mask"],
<del>            inputs["output_attentions"],
<del>            inputs["output_hidden_states"],
<del>            inputs["return_dict"],
<add>            hidden_states=embedding_output,
<add>            attention_mask=extended_attention_mask,
<add>            head_mask=inputs["head_mask"],
<add>            output_attentions=inputs["output_attentions"],
<add>            output_hidden_states=inputs["output_hidden_states"],
<add>            return_dict=inputs["return_dict"],
<ide>             training=inputs["training"],
<ide>         )
<ide>
<ide>         sequence_output = encoder_outputs[0]
<del>        pooled_output = self.pooler(sequence_output[:, 0]) if self.pooler is not None else None
<add>        pooled_output = self.pooler(inputs=sequence_output[:, 0]) if self.pooler is not None else None
<ide>
<ide>         if not inputs["return_dict"]:
<ide>             return (
<ide> class TFAlbertForPreTrainingOutput(ModelOutput):
<ide>         heads.
<ide>     """
<ide>
<add>    loss: tf.Tensor = None
<ide>     prediction_logits: tf.Tensor = None
<ide>     sop_logits: tf.Tensor = None
<ide>     hidden_states: Optional[Tuple[tf.Tensor]] = None
<ide> class TFAlbertForPreTrainingOutput(ModelOutput):
<ide>     ALBERT_START_DOCSTRING,
<ide> )
<ide> class TFAlbertModel(TFAlbertPreTrainedModel):
<del>    def __init__(self, config, *inputs, **kwargs):
<add>    def __init__(self, config: AlbertConfig, *inputs, **kwargs):
<ide>         super().__init__(config, *inputs, **kwargs)
<add>
<ide>         self.albert = TFAlbertMainLayer(config, name="albert")
<ide>
<ide>     @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
<ide> def __init__(self, config, *inputs, **kwargs):
<ide>     )
<ide>     def call(
<ide>         self,
<del>        input_ids=None,
<del>        attention_mask=None,
<del>        token_type_ids=None,
<del>        position_ids=None,
<del>        head_mask=None,
<del>        inputs_embeds=None,
<del>        output_attentions=None,
<del>        output_hidden_states=None,
<del>        return_dict=None,
<del>        training=False,
<add>        input_ids: Optional[TFModelInputType] = None,
<add>        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        output_attentions: Optional[bool] = None,
<add>        output_hidden_states: Optional[bool] = None,
<add>        return_dict: Optional[bool] = None,
<add>        training: Optional[bool] = False,
<ide>         **kwargs,
<del>    ):
<add>    ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]:
<ide>         inputs = input_processing(
<ide>             func=self.call,
<ide>             config=self.config,
<ide> def call(
<ide>             training=training,
<ide>             kwargs_call=kwargs,
<ide>         )
<del>
<ide>         outputs = self.albert(
<del>            inputs["input_ids"],
<add>            input_ids=inputs["input_ids"],
<ide>             attention_mask=inputs["attention_mask"],
<ide>             token_type_ids=inputs["token_type_ids"],
<ide>             position_ids=inputs["position_ids"],
<ide> def serving_output(self, output: TFBaseModelOutputWithPooling) -> TFBaseModelOut
<ide>     """,
<ide>     ALBERT_START_DOCSTRING,
<ide> )
<del>class TFAlbertForPreTraining(TFAlbertPreTrainedModel):
<add>class TFAlbertForPreTraining(TFAlbertPreTrainedModel, TFAlbertPreTrainingLoss):
<ide>     # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model
<ide>     _keys_to_ignore_on_load_unexpected = [r"predictions.decoder.weight"]
<ide>
<del>    def __init__(self, config, *inputs, **kwargs):
<add>    def __init__(self, config: AlbertConfig, *inputs, **kwargs):
<ide>         super().__init__(config, *inputs, **kwargs)
<add>
<ide>         self.num_labels = config.num_labels
<ide>
<ide>         self.albert = TFAlbertMainLayer(config, name="albert")
<del>        self.predictions = TFAlbertMLMHead(config, self.albert.embeddings, name="predictions")
<add>        self.predictions = TFAlbertMLMHead(config, input_embeddings=self.albert.embeddings, name="predictions")
<ide>         self.sop_classifier = TFAlbertSOPHead(config, name="sop_classifier")
<ide>
<del>    def get_lm_head(self):
<add>    def get_lm_head(self) -> tf.keras.layers.Layer:
<ide>         return self.predictions
<ide>
<ide>     @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
<ide>     @replace_return_docstrings(output_type=TFAlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC)
<ide>     def call(
<ide>         self,
<del>        input_ids=None,
<del>        attention_mask=None,
<del>        token_type_ids=None,
<del>        position_ids=None,
<del>        head_mask=None,
<del>        inputs_embeds=None,
<del>        output_attentions=None,
<del>        output_hidden_states=None,
<del>        return_dict=None,
<del>        training=False,
<add>        input_ids: Optional[TFModelInputType] = None,
<add>        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        output_attentions: Optional[bool] = None,
<add>        output_hidden_states: Optional[bool] = None,
<add>        return_dict: Optional[bool] = None,
<add>        labels: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        sentence_order_label: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        training: Optional[bool] = False,
<ide>         **kwargs,
<del>    ):
<add>    ) -> Union[TFAlbertForPreTrainingOutput, Tuple[tf.Tensor]]:
<ide>         r"""
<ide>         Return:
<ide>
<ide> def call(
<ide>             output_attentions=output_attentions,
<ide>             output_hidden_states=output_hidden_states,
<ide>             return_dict=return_dict,
<add>            labels=labels,
<add>            sentence_order_label=sentence_order_label,
<ide>             training=training,
<ide>             kwargs_call=kwargs,
<ide>         )
<del>
<ide>         outputs = self.albert(
<del>            inputs["input_ids"],
<add>            input_ids=inputs["input_ids"],
<ide>             attention_mask=inputs["attention_mask"],
<ide>             token_type_ids=inputs["token_type_ids"],
<ide>             position_ids=inputs["position_ids"],
<ide>             head_mask=inputs["head_mask"],
<ide>             inputs_embeds=inputs["inputs_embeds"],
<ide>             output_attentions=inputs["output_attentions"],
<ide>             output_hidden_states=inputs["output_hidden_states"],
<del>            return_dict=return_dict,
<add>            return_dict=inputs["return_dict"],
<ide>             training=inputs["training"],
<ide>         )
<ide>         sequence_output, pooled_output = outputs[:2]
<del>        prediction_scores = self.predictions(sequence_output)
<del>        sop_scores = self.sop_classifier(pooled_output, training=inputs["training"])
<add>        prediction_scores = self.predictions(hidden_states=sequence_output)
<add>        sop_scores = self.sop_classifier(pooled_output=pooled_output, training=inputs["training"])
<add>        total_loss = None
<add>
<add>        if inputs["labels"] is not None and inputs["sentence_order_label"] is not None:
<add>            d_labels = {"labels": inputs["labels"]}
<add>            d_labels["sentence_order_label"] = inputs["sentence_order_label"]
<add>            total_loss = self.compute_loss(labels=d_labels, logits=(prediction_scores, sop_scores))
<ide>
<ide>         if not inputs["return_dict"]:
<del>            return (prediction_scores, sop_scores) + outputs[2:]
<add>            output = (prediction_scores, sop_scores) + outputs[2:]
<add>            return ((total_loss,) + output) if total_loss is not None else output
<ide>
<ide>         return TFAlbertForPreTrainingOutput(
<add>            loss=total_loss,
<ide>             prediction_logits=prediction_scores,
<ide>             sop_logits=sop_scores,
<ide>             hidden_states=outputs.hidden_states,
<ide>             attentions=outputs.attentions,
<ide>         )
<ide>
<del>    def serving_output(self, output):
<add>    def serving_output(self, output: TFAlbertForPreTrainingOutput) -> TFAlbertForPreTrainingOutput:
<ide>         hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None
<ide>         attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None
<ide>
<ide> def serving_output(self, output):
<ide>
<ide>
<ide> class TFAlbertSOPHead(tf.keras.layers.Layer):
<del>    def __init__(self, config, **kwargs):
<add>    def __init__(self, config: AlbertConfig, **kwargs):
<ide>         super().__init__(**kwargs)
<ide>
<del>        self.dropout = tf.keras.layers.Dropout(config.classifier_dropout_prob)
<add>        self.dropout = tf.keras.layers.Dropout(rate=config.classifier_dropout_prob)
<ide>         self.classifier = tf.keras.layers.Dense(
<del>            config.num_labels,
<add>            units=config.num_labels,
<ide>             kernel_initializer=get_initializer(config.initializer_range),
<ide>             name="classifier",
<ide>         )
<ide>
<del>    def call(self, pooled_output, training: bool):
<del>        dropout_pooled_output = self.dropout(pooled_output, training=training)
<del>        logits = self.classifier(dropout_pooled_output)
<add>    def call(self, pooled_output: tf.Tensor, training: bool) -> tf.Tensor:
<add>        dropout_pooled_output = self.dropout(inputs=pooled_output, training=training)
<add>        logits = self.classifier(inputs=dropout_pooled_output)
<add>
<ide>         return logits
<ide>
<ide>
<ide> class TFAlbertForMaskedLM(TFAlbertPreTrainedModel, TFMaskedLanguageModelingLoss)
<ide>     # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model
<ide>     _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions.decoder.weight"]
<ide>
<del>    def __init__(self, config, *inputs, **kwargs):
<add>    def __init__(self, config: AlbertConfig, *inputs, **kwargs):
<ide>         super().__init__(config, *inputs, **kwargs)
<ide>
<ide>         self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert")
<del>        self.predictions = TFAlbertMLMHead(config, self.albert.embeddings, name="predictions")
<add>        self.predictions = TFAlbertMLMHead(config, input_embeddings=self.albert.embeddings, name="predictions")
<ide>
<del>    def get_lm_head(self):
<add>    def get_lm_head(self) -> tf.keras.layers.Layer:
<ide>         return self.predictions
<ide>
<ide>     @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
<ide> def get_lm_head(self):
<ide>     )
<ide>     def call(
<ide>         self,
<del>        input_ids=None,
<del>        attention_mask=None,
<del>        token_type_ids=None,
<del>        position_ids=None,
<del>        head_mask=None,
<del>        inputs_embeds=None,
<del>        output_attentions=None,
<del>        output_hidden_states=None,
<del>        return_dict=None,
<del>        labels=None,
<del>        training=False,
<add>        input_ids: Optional[TFModelInputType] = None,
<add>        attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        output_attentions: Optional[bool] = None,
<add>        output_hidden_states: Optional[bool] = None,
<add>        return_dict: Optional[bool] = None,
<add>        labels: Optional[Union[np.ndarray, tf.Tensor]] = None,
<add>        training: Optional[bool] = False,
<ide>         **kwargs,
<del>    ):
<add>    ) -> Union[TFMaskedLMOutput, Tuple[tf.Tensor]]:
<ide>         r"""
<ide>         labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
<ide>             Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
<ide> def call(
<ide>             kwargs_call=kwargs,
<ide>         )
<ide>         outputs = self.albert(
<del>            inputs["input_ids"],
<add>            input_ids=inputs["input_ids"],
<ide>             attention_mask=inputs["attention_mask"],
<ide>             token_type_ids=inputs["token_type_ids"],
<ide>             position_ids=inputs["position_ids"],
<ide>             head_mask=inputs["head_mask"],
<ide>             inputs_embeds=inputs["inputs_embeds"],
<ide>             output_attentions=inputs["output_attentions"],
<ide>             output_hidden_states=inputs["output_hidden_states"],
<del>            return_dict=return_dict,
<add>            return_dict=inputs["return_dict"],
<ide>             training=inputs["training"],
<ide>         )
<ide>         sequence_output = outputs[0]
<del>        prediction_scores = self.predictions(sequence_output, training=inputs["training"])
<del>        loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], prediction_scores)
<add>        prediction_scores = self.predictions(hidden_states=sequence_output, training=inputs["training"])
<add>        loss = (
<add>            None if inputs["labels"] is None else self.compute_loss(labels=inputs["labels"], logits=prediction_scores)
<add>        )
<ide>
<ide>         if not inputs["return_dict"]:
<ide>             output = (prediction_scores,) + outputs[2:]
<ide> class TFAlbertForSequenceClassification(TFAlbertPreTrainedModel, TFSequenceClass
<ide>     _keys_to_ignore_on_load_unexpected = [r"predictions"]
<ide>     _keys_to_ignore_on_load_missing = [r"dropout"]
<ide>
<del>    def __init__(self, config, *inputs, **kwargs):
<add>    def __init__(self, config: AlbertConfig, *inputs, **kwargs):
<ide>         super().__init__(config, *inputs, **kwargs)
<add>
<ide>         self.num_labels = config.num_labels
<ide>
<ide>         self.albert = TFAlbertMainLayer(config, name="albert")
<del>        self.dropout = tf.keras.layers.Dropout(config.classifier_dropout_prob)
<add>        self.dropout = tf.keras.layers.Dropout(rate=config.classifier_dropout_prob)
<ide>         self.classifier = tf.keras.layers.Dense(
<del> config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" <add> units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" <ide> ) <ide> <ide> @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) <ide> def __init__(self, config, *inputs, **kwargs): <ide> ) <ide> def call( <ide> self, <del> input_ids=None, <del> attention_mask=None, <del> token_type_ids=None, <del> position_ids=None, <del> head_mask=None, <del> inputs_embeds=None, <del> output_attentions=None, <del> output_hidden_states=None, <del> return_dict=None, <del> labels=None, <del> training=False, <add> input_ids: Optional[TFModelInputType] = None, <add> attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> output_attentions: Optional[bool] = None, <add> output_hidden_states: Optional[bool] = None, <add> return_dict: Optional[bool] = None, <add> labels: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> training: Optional[bool] = False, <ide> **kwargs, <del> ): <add> ) -> Union[TFSequenceClassifierOutput, Tuple[tf.Tensor]]: <ide> r""" <ide> labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): <ide> Labels for computing the sequence classification/regression loss. 
Indices should be in ``[0, ..., <ide> def call( <ide> kwargs_call=kwargs, <ide> ) <ide> outputs = self.albert( <del> inputs["input_ids"], <add> input_ids=inputs["input_ids"], <ide> attention_mask=inputs["attention_mask"], <ide> token_type_ids=inputs["token_type_ids"], <ide> position_ids=inputs["position_ids"], <ide> head_mask=inputs["head_mask"], <ide> inputs_embeds=inputs["inputs_embeds"], <ide> output_attentions=inputs["output_attentions"], <ide> output_hidden_states=inputs["output_hidden_states"], <del> return_dict=return_dict, <add> return_dict=inputs["return_dict"], <ide> training=inputs["training"], <ide> ) <ide> pooled_output = outputs[1] <del> pooled_output = self.dropout(pooled_output, training=inputs["training"]) <del> logits = self.classifier(pooled_output) <del> loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], logits) <add> pooled_output = self.dropout(inputs=pooled_output, training=inputs["training"]) <add> logits = self.classifier(inputs=pooled_output) <add> loss = None if inputs["labels"] is None else self.compute_loss(labels=inputs["labels"], logits=logits) <ide> <ide> if not inputs["return_dict"]: <ide> output = (logits,) + outputs[2:] <ide> class TFAlbertForTokenClassification(TFAlbertPreTrainedModel, TFTokenClassificat <ide> _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions"] <ide> _keys_to_ignore_on_load_missing = [r"dropout"] <ide> <del> def __init__(self, config, *inputs, **kwargs): <add> def __init__(self, config: AlbertConfig, *inputs, **kwargs): <ide> super().__init__(config, *inputs, **kwargs) <add> <ide> self.num_labels = config.num_labels <ide> <ide> self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert") <del> self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) <add> self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) <ide> self.classifier = tf.keras.layers.Dense( <del> config.num_labels, 
kernel_initializer=get_initializer(config.initializer_range), name="classifier" <add> units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" <ide> ) <ide> <ide> @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) <ide> def __init__(self, config, *inputs, **kwargs): <ide> ) <ide> def call( <ide> self, <del> input_ids=None, <del> attention_mask=None, <del> token_type_ids=None, <del> position_ids=None, <del> head_mask=None, <del> inputs_embeds=None, <del> output_attentions=None, <del> output_hidden_states=None, <del> return_dict=None, <del> labels=None, <del> training=False, <add> input_ids: Optional[TFModelInputType] = None, <add> attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> output_attentions: Optional[bool] = None, <add> output_hidden_states: Optional[bool] = None, <add> return_dict: Optional[bool] = None, <add> labels: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> training: Optional[bool] = False, <ide> **kwargs, <del> ): <add> ) -> Union[TFTokenClassifierOutput, Tuple[tf.Tensor]]: <ide> r""" <ide> labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): <ide> Labels for computing the token classification loss. 
Indices should be in ``[0, ..., config.num_labels - <ide> def call( <ide> kwargs_call=kwargs, <ide> ) <ide> outputs = self.albert( <del> inputs["input_ids"], <add> input_ids=inputs["input_ids"], <ide> attention_mask=inputs["attention_mask"], <ide> token_type_ids=inputs["token_type_ids"], <ide> position_ids=inputs["position_ids"], <ide> def call( <ide> training=inputs["training"], <ide> ) <ide> sequence_output = outputs[0] <del> sequence_output = self.dropout(sequence_output, training=inputs["training"]) <del> logits = self.classifier(sequence_output) <del> loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], logits) <add> sequence_output = self.dropout(inputs=sequence_output, training=inputs["training"]) <add> logits = self.classifier(inputs=sequence_output) <add> loss = None if inputs["labels"] is None else self.compute_loss(labels=inputs["labels"], logits=logits) <ide> <ide> if not inputs["return_dict"]: <ide> output = (logits,) + outputs[2:] <ide> class TFAlbertForQuestionAnswering(TFAlbertPreTrainedModel, TFQuestionAnsweringL <ide> # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model <ide> _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions"] <ide> <del> def __init__(self, config, *inputs, **kwargs): <add> def __init__(self, config: AlbertConfig, *inputs, **kwargs): <ide> super().__init__(config, *inputs, **kwargs) <add> <ide> self.num_labels = config.num_labels <ide> <ide> self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert") <ide> self.qa_outputs = tf.keras.layers.Dense( <del> config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" <add> units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" <ide> ) <ide> <ide> @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) <ide> def __init__(self, config, *inputs, **kwargs): <ide> ) <ide> def call( <ide> self, <del> input_ids=None, <del> attention_mask=None, <del> token_type_ids=None, <del> position_ids=None, <del> head_mask=None, <del> inputs_embeds=None, <del> output_attentions=None, <del> output_hidden_states=None, <del> return_dict=None, <del> start_positions=None, <del> end_positions=None, <del> training=False, <add> input_ids: Optional[TFModelInputType] = None, <add> attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> output_attentions: Optional[bool] = None, <add> output_hidden_states: Optional[bool] = None, <add> return_dict: Optional[bool] = None, <add> start_positions: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> end_positions: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> training: Optional[bool] = False, <ide> **kwargs, <del> ): 
<add> ) -> Union[TFQuestionAnsweringModelOutput, Tuple[tf.Tensor]]: <ide> r""" <ide> start_positions (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): <ide> Labels for position (index) of the start of the labelled span for computing the token classification loss. <ide> def call( <ide> kwargs_call=kwargs, <ide> ) <ide> outputs = self.albert( <del> inputs["input_ids"], <add> input_ids=inputs["input_ids"], <ide> attention_mask=inputs["attention_mask"], <ide> token_type_ids=inputs["token_type_ids"], <ide> position_ids=inputs["position_ids"], <ide> head_mask=inputs["head_mask"], <ide> inputs_embeds=inputs["inputs_embeds"], <ide> output_attentions=inputs["output_attentions"], <ide> output_hidden_states=inputs["output_hidden_states"], <del> return_dict=return_dict, <add> return_dict=inputs["return_dict"], <ide> training=inputs["training"], <ide> ) <ide> sequence_output = outputs[0] <del> logits = self.qa_outputs(sequence_output) <del> start_logits, end_logits = tf.split(logits, 2, axis=-1) <del> start_logits = tf.squeeze(start_logits, axis=-1) <del> end_logits = tf.squeeze(end_logits, axis=-1) <add> logits = self.qa_outputs(inputs=sequence_output) <add> start_logits, end_logits = tf.split(value=logits, num_or_size_splits=2, axis=-1) <add> start_logits = tf.squeeze(input=start_logits, axis=-1) <add> end_logits = tf.squeeze(input=end_logits, axis=-1) <ide> loss = None <ide> <ide> if inputs["start_positions"] is not None and inputs["end_positions"] is not None: <ide> labels = {"start_position": inputs["start_positions"]} <ide> labels["end_position"] = inputs["end_positions"] <del> loss = self.compute_loss(labels, (start_logits, end_logits)) <add> loss = self.compute_loss(labels=labels, logits=(start_logits, end_logits)) <ide> <ide> if not inputs["return_dict"]: <ide> output = (start_logits, end_logits) + outputs[2:] <ide> class TFAlbertForMultipleChoice(TFAlbertPreTrainedModel, TFMultipleChoiceLoss): <ide> _keys_to_ignore_on_load_unexpected = [r"pooler", 
r"predictions"] <ide> _keys_to_ignore_on_load_missing = [r"dropout"] <ide> <del> def __init__(self, config, *inputs, **kwargs): <add> def __init__(self, config: AlbertConfig, *inputs, **kwargs): <ide> super().__init__(config, *inputs, **kwargs) <ide> <ide> self.albert = TFAlbertMainLayer(config, name="albert") <del> self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) <add> self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) <ide> self.classifier = tf.keras.layers.Dense( <del> 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" <add> units=1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" <ide> ) <ide> <ide> @property <ide> def dummy_inputs(self): <ide> ) <ide> def call( <ide> self, <del> input_ids=None, <del> attention_mask=None, <del> token_type_ids=None, <del> position_ids=None, <del> head_mask=None, <del> inputs_embeds=None, <del> output_attentions=None, <del> output_hidden_states=None, <del> return_dict=None, <del> labels=None, <del> training=False, <add> input_ids: Optional[TFModelInputType] = None, <add> attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> output_attentions: Optional[bool] = None, <add> output_hidden_states: Optional[bool] = None, <add> return_dict: Optional[bool] = None, <add> labels: Optional[Union[np.ndarray, tf.Tensor]] = None, <add> training: Optional[bool] = False, <ide> **kwargs, <del> ): <add> ) -> Union[TFMultipleChoiceModelOutput, Tuple[tf.Tensor]]: <ide> r""" <ide> labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): <ide> Labels for computing the multiple choice classification loss. 
Indices should be in ``[0, ..., <ide> def call( <ide> <ide> flat_input_ids = tf.reshape(inputs["input_ids"], (-1, seq_length)) if inputs["input_ids"] is not None else None <ide> flat_attention_mask = ( <del> tf.reshape(inputs["attention_mask"], (-1, seq_length)) if inputs["attention_mask"] is not None else None <add> tf.reshape(tensor=inputs["attention_mask"], shape=(-1, seq_length)) <add> if inputs["attention_mask"] is not None <add> else None <ide> ) <ide> flat_token_type_ids = ( <del> tf.reshape(inputs["token_type_ids"], (-1, seq_length)) if inputs["token_type_ids"] is not None else None <add> tf.reshape(tensor=inputs["token_type_ids"], shape=(-1, seq_length)) <add> if inputs["token_type_ids"] is not None <add> else None <add> ) <add> flat_position_ids = ( <add> tf.reshape(tensor=position_ids, shape=(-1, seq_length)) if position_ids is not None else None <ide> ) <del> flat_position_ids = tf.reshape(position_ids, (-1, seq_length)) if position_ids is not None else None <ide> flat_inputs_embeds = ( <del> tf.reshape(inputs["inputs_embeds"], (-1, seq_length, shape_list(inputs["inputs_embeds"])[3])) <add> tf.reshape(tensor=inputs["inputs_embeds"], shape=(-1, seq_length, shape_list(inputs["inputs_embeds"])[3])) <ide> if inputs["inputs_embeds"] is not None <ide> else None <ide> ) <del> <ide> outputs = self.albert( <del> flat_input_ids, <del> flat_attention_mask, <del> flat_token_type_ids, <del> flat_position_ids, <del> inputs["head_mask"], <del> flat_inputs_embeds, <del> inputs["output_attentions"], <del> inputs["output_hidden_states"], <add> input_ids=flat_input_ids, <add> attention_mask=flat_attention_mask, <add> token_type_ids=flat_token_type_ids, <add> position_ids=flat_position_ids, <add> head_mask=inputs["head_mask"], <add> inputs_embeds=flat_inputs_embeds, <add> output_attentions=inputs["output_attentions"], <add> output_hidden_states=inputs["output_hidden_states"], <ide> return_dict=inputs["return_dict"], <ide> training=inputs["training"], <ide> ) <del> <ide> 
pooled_output = outputs[1] <del> <del> pooled_output = self.dropout(pooled_output, training=inputs["training"]) <del> logits = self.classifier(pooled_output) <del> reshaped_logits = tf.reshape(logits, (-1, num_choices)) <del> <del> loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], reshaped_logits) <add> pooled_output = self.dropout(inputs=pooled_output, training=inputs["training"]) <add> logits = self.classifier(inputs=pooled_output) <add> reshaped_logits = tf.reshape(tensor=logits, shape=(-1, num_choices)) <add> loss = None if inputs["labels"] is None else self.compute_loss(labels=inputs["labels"], logits=reshaped_logits) <ide> <ide> if not inputs["return_dict"]: <ide> output = (reshaped_logits,) + outputs[2:] <ide> def call( <ide> ] <ide> ) <ide> # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMultipleChoice.serving <del> def serving(self, inputs: Dict[str, tf.Tensor]): <add> def serving(self, inputs: Dict[str, tf.Tensor]) -> TFMultipleChoiceModelOutput: <ide> output = self.call(input_ids=inputs) <ide> <ide> return self.serving_output(output) <ide><path>src/transformers/models/bert/modeling_tf_bert.py <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.hidden_size], 
<del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide> def call( <ide> key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) <ide> value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) <ide> <del> # Take the dot product between "query" and "key" to get the raw <del> # attention scores. <add> # Take the dot product between "query" and "key" to get the raw attention scores. <ide> # (batch size, num_heads, seq_len_q, seq_len_k) <ide> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) <ide> dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) <ide> def call( <ide> total_loss = self.compute_loss(labels=d_labels, logits=(prediction_scores, seq_relationship_score)) <ide> <ide> if not inputs["return_dict"]: <del> return (prediction_scores, seq_relationship_score) + outputs[2:] <add> output = (prediction_scores, seq_relationship_score) + outputs[2:] <add> return ((total_loss,) + output) if total_loss is not None else output <ide> <ide> return TFBertForPreTrainingOutput( <ide> loss=total_loss, <ide> def call( <ide> } <ide> ] <ide> ) <del> def serving(self, inputs: Dict[str, tf.Tensor]): <add> def serving(self, inputs: Dict[str, tf.Tensor]) -> TFMultipleChoiceModelOutput: <ide> output = self.call(input_ids=inputs) <ide> <ide> return self.serving_output(output) <ide><path>src/transformers/models/convbert/modeling_tf_convbert.py <ide> ] <ide> <ide> <del># Copied from transformers.models.albert.modeling_tf_albert.TFAlbertEmbeddings <add># Copied from transformers.models.albert.modeling_tf_albert.TFAlbertEmbeddings with Albert->ConvBert <ide> class TFConvBertEmbeddings(tf.keras.layers.Layer): <ide> """Construct the embeddings from word, position and token_type embeddings.""" <ide> <del> def __init__(self, config, **kwargs): <add> def __init__(self, config: ConvBertConfig, **kwargs): 
<ide> super().__init__(**kwargs) <ide> <ide> self.vocab_size = config.vocab_size <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide><path>src/transformers/models/electra/modeling_tf_electra.py <ide> def call( <ide> key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) <ide> value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) <ide> <del> # Take the dot product between "query" and "key" to get the raw <del> # attention scores. <add> # Take the dot product between "query" and "key" to get the raw attention scores. 
<ide> # (batch size, num_heads, seq_len_q, seq_len_k) <ide> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) <ide> dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) <ide> def call(self, hidden_states: tf.Tensor) -> tf.Tensor: <ide> class TFElectraEmbeddings(tf.keras.layers.Layer): <ide> """Construct the embeddings from word, position and token_type embeddings.""" <ide> <del> def __init__(self, config, **kwargs): <add> def __init__(self, config: ElectraConfig, **kwargs): <ide> super().__init__(**kwargs) <ide> <ide> self.vocab_size = config.vocab_size <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.embedding_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide><path>src/transformers/models/longformer/modeling_tf_longformer.py <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> 
<ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide><path>src/transformers/models/roberta/modeling_tf_roberta.py <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide> def call( <ide> key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) <ide> value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) <ide> <del> # Take the dot product between "query" and 
"key" to get the raw <del> # attention scores. <add> # Take the dot product between "query" and "key" to get the raw attention scores. <ide> # (batch size, num_heads, seq_len_q, seq_len_k) <ide> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) <ide> dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) <ide><path>templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_tf_{{cookiecutter.lowercase_modelname}}.py <ide> def build(self, input_shape: tf.TensorShape): <ide> self.weight = self.add_weight( <ide> name="weight", <ide> shape=[self.vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("token_type_embeddings"): <ide> self.token_type_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.type_vocab_size, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> with tf.name_scope("position_embeddings"): <ide> self.position_embeddings = self.add_weight( <ide> name="embeddings", <ide> shape=[self.max_position_embeddings, self.hidden_size], <del> initializer=get_initializer(initializer_range=self.initializer_range), <add> initializer=get_initializer(self.initializer_range), <ide> ) <ide> <ide> super().build(input_shape) <ide> def call( <ide> key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) <ide> value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) <ide> <del> # Take the dot product between "query" and "key" to get the raw <del> # attention scores. <add> # Take the dot product between "query" and "key" to get the raw attention scores. 
<ide> # (batch size, num_heads, seq_len_q, seq_len_k) <ide> attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) <ide> dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) <ide> def call( <ide> "token_type_ids": tf.TensorSpec((None, None, None), tf.int32, name="token_type_ids"), <ide> }]) <ide> # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMultipleChoice.serving <del> def serving(self, inputs: Dict[str, tf.Tensor]): <add> def serving(self, inputs: Dict[str, tf.Tensor]) -> TFMultipleChoiceModelOutput: <ide> output = self.call(input_ids=inputs) <ide> <ide> return self.serving_output(output) <ide><path>tests/test_modeling_tf_albert.py <ide> if is_tf_available(): <ide> import tensorflow as tf <ide> <add> from transformers import TF_MODEL_FOR_PRETRAINING_MAPPING <ide> from transformers.models.albert.modeling_tf_albert import ( <ide> TF_ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST, <ide> TFAlbertForMaskedLM, <ide> class TFAlbertModelTest(TFModelTesterMixin, unittest.TestCase): <ide> test_head_masking = False <ide> test_onnx = False <ide> <add> # special case for ForPreTraining model <add> def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): <add> inputs_dict = super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) <add> <add> if return_labels: <add> if model_class in TF_MODEL_FOR_PRETRAINING_MAPPING.values(): <add> inputs_dict["sentence_order_label"] = tf.zeros(self.model_tester.batch_size, dtype=tf.int32) <add> <add> return inputs_dict <add> <ide> def setUp(self): <ide> self.model_tester = TFAlbertModelTester(self) <ide> self.config_tester = ConfigTester(self, config_class=AlbertConfig, hidden_size=37) <ide> def test_model_common_attributes(self): <ide> name = model.get_bias() <ide> assert name is None <ide> <del> def test_mixed_precision(self): <del> # TODO JP: Make ALBERT float16 compliant <del> pass <del> <ide> @slow <ide> def test_model_from_pretrained(self): <ide> for 
model_name in TF_ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
8
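The large transformers patch above mostly swaps positional calls for explicit keyword arguments, adds `Optional[Union[np.ndarray, tf.Tensor]]`-style annotations, and changes the pre-training head to compute a combined loss and prepend it to the tuple output only when labels are provided. A minimal, framework-free sketch of that return-value pattern (hypothetical names and a toy squared-error loss stand in for the real model; no TensorFlow required):

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union


@dataclass
class PreTrainingOutput:
    # Hypothetical stand-in for TFAlbertForPreTrainingOutput.
    loss: Optional[float]
    prediction_logits: float
    sop_logits: float


def call(
    prediction_scores: float,
    sop_scores: float,
    labels: Optional[float] = None,
    sentence_order_label: Optional[float] = None,
    return_dict: bool = True,
) -> Union[PreTrainingOutput, Tuple[float, ...]]:
    # As in the patch, the loss exists only when both label sets are given.
    total_loss = None
    if labels is not None and sentence_order_label is not None:
        # Toy squared-error loss; the real model combines MLM and SOP losses.
        total_loss = (prediction_scores - labels) ** 2 + (sop_scores - sentence_order_label) ** 2

    if not return_dict:
        output = (prediction_scores, sop_scores)
        # Prepend the loss to the tuple output only when it was computed.
        return ((total_loss,) + output) if total_loss is not None else output

    return PreTrainingOutput(
        loss=total_loss,
        prediction_logits=prediction_scores,
        sop_logits=sop_scores,
    )
```

The same `((total_loss,) + output) if total_loss is not None else output` idiom appears throughout the patched heads, keeping tuple outputs label-free unless a loss was actually produced.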
Python
Python
handle string with only whitespace as empty
21451ec6ba364de78c14e7d05a55913da2809844
<ide><path>transformers/tokenization_utils.py <ide> def split_on_token(tok, text): <ide> return result <ide> <ide> def split_on_tokens(tok_list, text): <del> if not text: <add> if not text.strip(): <ide> return [] <ide> if not tok_list: <ide> return self._tokenize(text, **kwargs)
1
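The one-line tokenization fix above replaces `if not text` with `if not text.strip()`, so a whitespace-only string short-circuits to an empty token list before any special-token splitting happens. A simplified, self-contained sketch of that guard in a toy `split_on_tokens` (the recursion and base tokenizer here are stand-ins, not the library's actual code):

```python
def split_on_tokens(tok_list, text):
    # Patched guard: whitespace-only input now returns [] up front.
    # Before the fix, "   " passed `if not text` and was handed to the
    # base tokenizer, which could yield spurious tokens.
    if not text.strip():
        return []
    if not tok_list:
        # Stand-in for self._tokenize(text, **kwargs).
        return text.split()
    # Split on the first special token, recursing on the remaining pieces.
    tok, rest = tok_list[0], tok_list[1:]
    result = []
    pieces = text.split(tok)
    for i, sub_text in enumerate(pieces):
        result.extend(split_on_tokens(rest, sub_text))
        if i < len(pieces) - 1:
            result.append(tok)
    return result
```

Because the guard also runs on each recursive call, whitespace left around a special token (e.g. `"hello [MASK] world"`) is dropped cleanly rather than producing empty fragments.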
Javascript
Javascript
fix incorrect types in the api documentation
ba4a07ce075a87fcbfdaaed6c22c1a1e14583d4b
<ide><path>src/display/api.js <ide> class PDFDocumentProxy { <ide> } <ide> <ide> /** <del> * @returns {Promise<Object<string,any>>} A promise that is resolved with <del> * a lookup table for mapping named destinations to reference numbers. <add> * @returns {Promise<Object<string, Array<any>>>} A promise that is resolved <add> * with a mapping from named destinations to references. <ide> * <ide> * This can be slow for large documents. Use `getDestination` instead. <ide> */ <ide> class PDFDocumentProxy { <ide> } <ide> <ide> /** <del> * @returns {Promise<Array<string> | null>} A promise that is <del> * resolved with an {Array} containing the page labels that correspond to <del> * the page indexes, or `null` when no page labels are present in the PDF <del> * file. <add> * @returns {Promise<Array<string> | null>} A promise that is resolved with <add> * an {Array} containing the page labels that correspond to the page <add> * indexes, or `null` when no page labels are present in the PDF file. <ide> */ <ide> getPageLabels() { <ide> return this._transport.getPageLabels(); <ide> class PDFDocumentProxy { <ide> } <ide> <ide> /** <del> * @returns {Promise<Object>} A promise that is resolved with an {Object} <del> * containing the viewer preferences. <add> * @returns {Promise<Object | null>} A promise that is resolved with an <add> * {Object} containing the viewer preferences, or `null` when no viewer <add> * preferences are present in the PDF file. <ide> */ <ide> getViewerPreferences() { <ide> return this._transport.getViewerPreferences(); <ide> class PDFDocumentProxy { <ide> } <ide> <ide> /** <del> * @returns {Promise<Array<string | null>>} A promise that is resolved with <add> * @returns {Promise<Array<number> | null>} A promise that is resolved with <ide> * an {Array} that contains the permission flags for the PDF document, or <ide> * `null` when no permissions are present in the PDF file. 
<ide> */ <ide> class RenderTask { <ide> * Callback for incremental rendering -- a function that will be called <ide> * each time the rendering is paused. To continue rendering call the <ide> * function that is the first argument to the callback. <del> * @callback <del> * @param {function} <add> * @type {function} <ide> */ <ide> this.onContinue = null; <ide> }
1
Ruby
Ruby
add a migration schema model
67fba0cfa93feaa183d546de625e63cb16c56d7d
<ide><path>activerecord/lib/active_record/migration.rb <ide> require "active_support/core_ext/module/delegation" <ide> require "active_support/core_ext/class/attribute_accessors" <ide> require 'active_support/deprecation' <add>require 'active_record/schema_migration' <ide> <ide> module ActiveRecord <ide> # Exception that can be raised to stop migrations from going backwards. <ide> def open(migrations_paths) <ide> end <ide> <ide> def schema_migrations_table_name <del> Base.table_name_prefix + 'schema_migrations' + Base.table_name_suffix <add> SchemaMigration.table_name <ide> end <ide> <ide> def get_all_versions <del> table = Arel::Table.new(schema_migrations_table_name) <del> Base.connection.select_values(table.project(table['version'])).map{ |v| v.to_i }.sort <add> SchemaMigration.all.map { |x| x.version.to_i }.sort <ide> end <ide> <ide> def current_version <ide><path>activerecord/lib/active_record/schema_migration.rb <add>require 'active_record' <add> <add>module ActiveRecord <add> class SchemaMigration < ActiveRecord::Base <add> def self.table_name <add> Base.table_name_prefix + 'schema_migrations' + Base.table_name_suffix <add> end <add> end <add>end
2
PHP
PHP
remove the morphclass
244a0a90c3e015aead0b49f871618fba95ccec58
<ide><path>src/Illuminate/Database/Eloquent/Model.php <ide> abstract class Model implements ArrayAccess, Arrayable, Jsonable, JsonSerializab <ide> */ <ide> protected $with = []; <ide> <del> /** <del> * The class name to be used in polymorphic relations. <del> * <del> * @var string <del> */ <del> protected $morphClass; <del> <ide> /** <ide> * Indicates if the model exists. <ide> * <ide> public function getMorphClass() <ide> return array_search($class, $morphMap, true); <ide> } <ide> <del> return $this->morphClass ?: $class; <add> return $class; <ide> } <ide> <ide> /**
1
Ruby
Ruby
copy edit[ci skip]
2642c2961cda2074cc1495a4635898ca8ab33adf
<ide><path>actionpack/lib/abstract_controller/base.rb <ide> def action_methods <ide> # <ide> # Notice that <tt>action_methods.include?("foo")</tt> may return <ide> # false and <tt>available_action?("foo")</tt> returns true because <del> # available action consider actions that are also available <add> # this method considers actions that are also available <ide> # through other means, for example, implicit render ones. <ide> # <ide> # ==== Parameters
1
Python
Python
fix xlm tests
a31e591d27a099ca6bd30949ecd7cc61213b8327
<ide><path>pytorch_transformers/modeling_xlm.py <ide> def forward(self, x, y=None): <ide> scores = self.proj(x) <ide> outputs = (scores,) + outputs <ide> if y is not None: <del> loss = F.cross_entropy(scores.view(-1, self.n_words), y, reduction='elementwise_mean') <add> loss = F.cross_entropy(scores.view(-1, self.n_words), y.view(-1), reduction='elementwise_mean') <ide> outputs = (loss,) + outputs <ide> else: <ide> scores = self.proj.log_prob(x) <ide><path>pytorch_transformers/tests/modeling_xlm_test.py <ide> def create_and_check_xlm_qa(self, config, input_ids, token_type_ids, input_lengt <ide> model.eval() <ide> <ide> outputs = model(input_ids) <del> start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits, mems = outputs <add> start_top_log_probs, start_top_index, end_top_log_probs, end_top_index, cls_logits = outputs <ide> <ide> outputs = model(input_ids, start_positions=sequence_labels, <ide> end_positions=sequence_labels,
2
Javascript
Javascript
fix docblock of reactfragment
a74138bdee6c78991d514438a82dae88854f589f
<ide><path>src/addons/ReactFragment.js <ide> * LICENSE file in the root directory of this source tree. An additional grant <ide> * of patent rights can be found in the PATENTS file in the same directory. <ide> * <del>* @providesModule ReactFragment <del>*/ <add> * @providesModule ReactFragment <add> */ <ide> <ide> 'use strict'; <ide>
1
PHP
PHP
avoid pass by reference error on 5.4
e4542827c85c6558833bb8ce1c8b63ef2cd12c76
<ide><path>lib/Cake/Routing/Filter/AssetDispatcher.php <ide> public function beforeDispatch($event) { <ide> return $response; <ide> } <ide> <del> $ext = array_pop(explode('.', $url)); <add> $pathSegments = explode('.', $url); <add> $ext = array_pop($pathSegments); <ide> $this->_deliverAsset($response, $assetFile, $ext); <ide> return $response; <ide> }
1
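The PHP fix above works around a PHP 5.4 strict-standards error: `array_pop()` takes its argument by reference, so it cannot be handed the temporary returned by `explode()` directly; the patch stores the segments in a variable first. A rough Python rendering of the same extension-extraction step (hypothetical helper name, only for illustration):

```python
def asset_extension(url):
    # Equivalent of the patched PHP: explode('.', $url) into a
    # variable, then pop the last dot-separated segment.
    path_segments = url.split('.')
    return path_segments.pop()

print(asset_extension("css/app.min.css"))  # → css
```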
Python
Python
add missing code markers
f419af9f61160e8a1a52e5a42efd607648030ae9
<ide><path>keras/wrappers/scikit_learn.py <ide> def fit(self, x, y, **kwargs): <ide> <ide> # Arguments <ide> x : array-like, shape `(n_samples, n_features)` <del> Training samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Training samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` <del> True labels for X. <add> True labels for `x`. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments of `Sequential.fit` <ide> <ide> def filter_sk_params(self, fn, override=None): <ide> <ide> # Arguments <ide> fn : arbitrary function <del> override: dictionary, values to override sk_params <add> override: dictionary, values to override `sk_params` <ide> <ide> # Returns <ide> res : dictionary containing variables <del> in both sk_params and fn's arguments. <add> in both `sk_params` and `fn`'s arguments. <ide> """ <ide> override = override or {} <ide> res = {} <ide> def fit(self, x, y, **kwargs): <ide> <ide> # Arguments <ide> x : array-like, shape `(n_samples, n_features)` <del> Training samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Training samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> y : array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` <del> True labels for X. <add> True labels for `x`. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments of `Sequential.fit` <ide> <ide> def predict(self, x, **kwargs): <ide> <ide> # Arguments <ide> x: array-like, shape `(n_samples, n_features)` <del> Test samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Test samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. 
<ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments <ide> of `Sequential.predict_classes`. <ide> def predict_proba(self, x, **kwargs): <ide> <ide> # Arguments <ide> x: array-like, shape `(n_samples, n_features)` <del> Test samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Test samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments <ide> of `Sequential.predict_classes`. <ide> def score(self, x, y, **kwargs): <ide> <ide> # Arguments <ide> x: array-like, shape `(n_samples, n_features)` <del> Test samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Test samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> y: array-like, shape `(n_samples,)` or `(n_samples, n_outputs)` <del> True labels for x. <add> True labels for `x`. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments of `Sequential.evaluate`. <ide> <ide> # Returns <ide> score: float <del> Mean accuracy of predictions on X wrt. y. <add> Mean accuracy of predictions on `x` wrt. `y`. <ide> <ide> # Raises <ide> ValueError: If the underlying model isn't configured to <ide> def predict(self, x, **kwargs): <ide> <ide> # Arguments <ide> x: array-like, shape `(n_samples, n_features)` <del> Test samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Test samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments of `Sequential.predict`. 
<ide> <ide> def score(self, x, y, **kwargs): <ide> <ide> # Arguments <ide> x: array-like, shape `(n_samples, n_features)` <del> Test samples where n_samples is the number of samples <del> and n_features is the number of features. <add> Test samples where `n_samples` is the number of samples <add> and `n_features` is the number of features. <ide> y: array-like, shape `(n_samples,)` <del> True labels for X. <add> True labels for `x`. <ide> **kwargs: dictionary arguments <ide> Legal arguments are the arguments of `Sequential.evaluate`. <ide> <ide> # Returns <ide> score: float <del> Mean accuracy of predictions on X wrt. y. <add> Mean accuracy of predictions on `x` wrt. `y`. <ide> """ <ide> kwargs = self.filter_sk_params(Sequential.evaluate, kwargs) <ide> loss = self.model.evaluate(x, y, **kwargs)
1
Ruby
Ruby
tap the subscriber for easier return value
234b9699463ba435086aa253ee143014a835bbe6
<ide><path>activesupport/lib/active_support/notifications/fanout.rb <ide> def initialize <ide> <ide> def subscribe(pattern = nil, &block) <ide> @listeners_for.clear <del> @subscribers << Subscriber.new(pattern, &block) <del> @subscribers.last <add> Subscriber.new(pattern, &block).tap do |s| <add> @subscribers << s <add> end <ide> end <ide> <ide> def unsubscribe(subscriber)
1
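The `tap` idiom in the record above builds the subscriber, appends it to the list inside the block, and still returns the new object to the caller. A rough Python analogue of the same "create, register, return" pattern (hypothetical class and field names; the real `Subscriber` takes a pattern and a block):

```python
class Fanout:
    def __init__(self):
        self.subscribers = []
        self.listeners_for = {}

    def subscribe(self, pattern=None, block=None):
        # Mirror of the Ruby tap: register the new subscriber and
        # return the same object so callers can unsubscribe later.
        self.listeners_for.clear()
        subscriber = (pattern, block)
        self.subscribers.append(subscriber)
        return subscriber

    def unsubscribe(self, subscriber):
        self.subscribers.remove(subscriber)
        self.listeners_for.clear()

fanout = Fanout()
sub = fanout.subscribe("sql.active_record")
assert sub in fanout.subscribers
```

Returning the registered object (rather than `@subscribers.last`) makes the return value explicit and avoids re-reading the collection.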
Mixed
Javascript
move htmltojsx.js to react-magic project
1d1b6ed07a25b122b5964df8fd235e3cf95060a7
<ide><path>docs/_js/html-jsx-lib.js <del>/** <del> * Copyright 2013-2014 Facebook, Inc. <del> * <del> * Licensed under the Apache License, Version 2.0 (the "License"); <del> * you may not use this file except in compliance with the License. <del> * You may obtain a copy of the License at <del> * <del> * http://www.apache.org/licenses/LICENSE-2.0 <del> * <del> * Unless required by applicable law or agreed to in writing, software <del> * distributed under the License is distributed on an "AS IS" BASIS, <del> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. <del> * See the License for the specific language governing permissions and <del> * limitations under the License. <del> */ <del> <del>/** <del> * This is a very simple HTML to JSX converter. It turns out that browsers <del> * have good HTML parsers (who would have thought?) so we utilise this by <del> * inserting the HTML into a temporary DOM node, and then do a breadth-first <del> * traversal of the resulting DOM tree. <del> */ <del>;(function(global) { <del> 'use strict'; <del> <del> // https://developer.mozilla.org/en-US/docs/Web/API/Node.nodeType <del> var NODE_TYPE = { <del> ELEMENT: 1, <del> TEXT: 3, <del> COMMENT: 8 <del> }; <del> var ATTRIBUTE_MAPPING = { <del> 'for': 'htmlFor', <del> 'class': 'className' <del> }; <del> <del> /** <del> * Repeats a string a certain number of times. <del> * Also: the future is bright and consists of native string repetition: <del> * https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat <del> * <del> * @param {string} string String to repeat <del> * @param {number} times Number of times to repeat string. Integer. 
<del> * @see http://jsperf.com/string-repeater/2 <del> */ <del> function repeatString(string, times) { <del> if (times === 1) { <del> return string; <del> } <del> if (times < 0) { throw new Error(); } <del> var repeated = ''; <del> while (times) { <del> if (times & 1) { <del> repeated += string; <del> } <del> if (times >>= 1) { <del> string += string; <del> } <del> } <del> return repeated; <del> } <del> <del> /** <del> * Determine if the string ends with the specified substring. <del> * <del> * @param {string} haystack String to search in <del> * @param {string} needle String to search for <del> * @return {boolean} <del> */ <del> function endsWith(haystack, needle) { <del> return haystack.slice(-needle.length) === needle; <del> } <del> <del> /** <del> * Trim the specified substring off the string. If the string does not end <del> * with the specified substring, this is a no-op. <del> * <del> * @param {string} haystack String to search in <del> * @param {string} needle String to search for <del> * @return {string} <del> */ <del> function trimEnd(haystack, needle) { <del> return endsWith(haystack, needle) <del> ? haystack.slice(0, -needle.length) <del> : haystack; <del> } <del> <del> /** <del> * Convert a hyphenated string to camelCase. <del> */ <del> function hyphenToCamelCase(string) { <del> return string.replace(/-(.)/g, function(match, chr) { <del> return chr.toUpperCase(); <del> }); <del> } <del> <del> /** <del> * Determines if the specified string consists entirely of whitespace. <del> */ <del> function isEmpty(string) { <del> return !/[^\s]/.test(string); <del> } <del> <del> /** <del> * Determines if the specified string consists entirely of numeric characters. 
<del> */ <del> function isNumeric(input) { <del> return input !== undefined <del> && input !== null <del> && (typeof input === 'number' || parseInt(input, 10) == input); <del> } <del> <del> var HTMLtoJSX = function(config) { <del> this.config = config || {}; <del> <del> if (this.config.createClass === undefined) { <del> this.config.createClass = true; <del> } <del> if (!this.config.indent) { <del> this.config.indent = ' '; <del> } <del> if (!this.config.outputClassName) { <del> this.config.outputClassName = 'NewComponent'; <del> } <del> }; <del> HTMLtoJSX.prototype = { <del> /** <del> * Reset the internal state of the converter <del> */ <del> reset: function() { <del> this.output = ''; <del> this.level = 0; <del> }, <del> /** <del> * Main entry point to the converter. Given the specified HTML, returns a <del> * JSX object representing it. <del> * @param {string} html HTML to convert <del> * @return {string} JSX <del> */ <del> convert: function(html) { <del> this.reset(); <del> <del> // It turns out browsers have good HTML parsers (imagine that). <del> // Let's take advantage of it. <del> var containerEl = document.createElement('div'); <del> containerEl.innerHTML = '\n' + this._cleanInput(html) + '\n'; <del> <del> if (this.config.createClass) { <del> if (this.config.outputClassName) { <del> this.output = 'var ' + this.config.outputClassName + ' = React.createClass({\n'; <del> } else { <del> this.output = 'React.createClass({\n'; <del> } <del> this.output += this.config.indent + 'render: function() {' + "\n"; <del> this.output += this.config.indent + this.config.indent + 'return (\n'; <del> } <del> <del> if (this._onlyOneTopLevel(containerEl)) { <del> // Only one top-level element, the component can return it directly <del> // No need to actually visit the container element <del> this._traverse(containerEl); <del> } else { <del> // More than one top-level element, need to wrap the whole thing in a <del> // container. 
<del> this.output += this.config.indent + this.config.indent + this.config.indent; <del> this.level++; <del> this._visit(containerEl); <del> } <del> this.output = this.output.trim() + '\n'; <del> if (this.config.createClass) { <del> this.output += this.config.indent + this.config.indent + ');\n'; <del> this.output += this.config.indent + '}\n'; <del> this.output += '});'; <del> } <del> return this.output; <del> }, <del> <del> /** <del> * Cleans up the specified HTML so it's in a format acceptable for <del> * converting. <del> * <del> * @param {string} html HTML to clean <del> * @return {string} Cleaned HTML <del> */ <del> _cleanInput: function(html) { <del> // Remove unnecessary whitespace <del> html = html.trim(); <del> // Ugly method to strip script tags. They can wreak havoc on the DOM nodes <del> // so let's not even put them in the DOM. <del> html = html.replace(/<script(.*?)<\/script>/g, ''); <del> return html; <del> }, <del> <del> /** <del> * Determines if there's only one top-level node in the DOM tree. That is, <del> * all the HTML is wrapped by a single HTML tag. 
<del> * <del> * @param {DOMElement} containerEl Container element <del> * @return {boolean} <del> */ <del> _onlyOneTopLevel: function(containerEl) { <del> // Only a single child element <del> if ( <del> containerEl.childNodes.length === 1 <del> && containerEl.childNodes[0].nodeType === NODE_TYPE.ELEMENT <del> ) { <del> return true; <del> } <del> // Only one element, and all other children are whitespace <del> var foundElement = false; <del> for (var i = 0, count = containerEl.childNodes.length; i < count; i++) { <del> var child = containerEl.childNodes[i]; <del> if (child.nodeType === NODE_TYPE.ELEMENT) { <del> if (foundElement) { <del> // Encountered an element after already encountering another one <del> // Therefore, more than one element at root level <del> return false; <del> } else { <del> foundElement = true; <del> } <del> } else if (child.nodeType === NODE_TYPE.TEXT && !isEmpty(child.textContent)) { <del> // Contains text content <del> return false; <del> } <del> } <del> return true; <del> }, <del> <del> /** <del> * Gets a newline followed by the correct indentation for the current <del> * nesting level <del> * <del> * @return {string} <del> */ <del> _getIndentedNewline: function() { <del> return '\n' + repeatString(this.config.indent, this.level + 2); <del> }, <del> <del> /** <del> * Handles processing the specified node <del> * <del> * @param {Node} node <del> */ <del> _visit: function(node) { <del> this._beginVisit(node); <del> this._traverse(node); <del> this._endVisit(node); <del> }, <del> <del> /** <del> * Traverses all the children of the specified node <del> * <del> * @param {Node} node <del> */ <del> _traverse: function(node) { <del> this.level++; <del> for (var i = 0, count = node.childNodes.length; i < count; i++) { <del> this._visit(node.childNodes[i]); <del> } <del> this.level--; <del> }, <del> <del> /** <del> * Handle pre-visit behaviour for the specified node. 
<del> * <del> * @param {Node} node <del> */ <del> _beginVisit: function(node) { <del> switch (node.nodeType) { <del> case NODE_TYPE.ELEMENT: <del> this._beginVisitElement(node); <del> break; <del> <del> case NODE_TYPE.TEXT: <del> this._visitText(node); <del> break; <del> <del> case NODE_TYPE.COMMENT: <del> this._visitComment(node); <del> break; <del> <del> default: <del> console.warn('Unrecognised node type: ' + node.nodeType); <del> } <del> }, <del> <del> /** <del> * Handles post-visit behaviour for the specified node. <del> * <del> * @param {Node} node <del> */ <del> _endVisit: function(node) { <del> switch (node.nodeType) { <del> case NODE_TYPE.ELEMENT: <del> this._endVisitElement(node); <del> break; <del> // No ending tags required for these types <del> case NODE_TYPE.TEXT: <del> case NODE_TYPE.COMMENT: <del> break; <del> } <del> }, <del> <del> /** <del> * Handles pre-visit behaviour for the specified element node <del> * <del> * @param {DOMElement} node <del> */ <del> _beginVisitElement: function(node) { <del> var tagName = node.tagName.toLowerCase(); <del> var attributes = []; <del> for (var i = 0, count = node.attributes.length; i < count; i++) { <del> attributes.push(this._getElementAttribute(node, node.attributes[i])); <del> } <del> <del> this.output += '<' + tagName; <del> if (attributes.length > 0) { <del> this.output += ' ' + attributes.join(' '); <del> } <del> if (node.firstChild) { <del> this.output += '>'; <del> } <del> }, <del> <del> /** <del> * Handles post-visit behaviour for the specified element node <del> * <del> * @param {Node} node <del> */ <del> _endVisitElement: function(node) { <del> // De-indent a bit <del> // TODO: It's inefficient to do it this way :/ <del> this.output = trimEnd(this.output, this.config.indent); <del> if (node.firstChild) { <del> this.output += '</' + node.tagName.toLowerCase() + '>'; <del> } else { <del> this.output += ' />'; <del> } <del> }, <del> <del> /** <del> * Handles processing of the specified text node <del> * 
<del> * @param {TextNode} node <del> */ <del> _visitText: function(node) { <del> var text = node.textContent; <del> // If there's a newline in the text, adjust the indent level <del> if (text.indexOf('\n') > -1) { <del> text = node.textContent.replace(/\n\s*/g, this._getIndentedNewline()); <del> } <del> this.output += text; <del> }, <del> <del> /** <del> * Handles processing of the specified text node <del> * <del> * @param {Text} node <del> */ <del> _visitComment: function(node) { <del> // Do not render the comment <del> // Since we remove comments, we also need to remove the next line break so we <del> // don't end up with extra whitespace after every comment <del> //if (node.nextSibling && node.nextSibling.nodeType === NODE_TYPE.TEXT) { <del> // node.nextSibling.textContent = node.nextSibling.textContent.replace(/\n\s*/, ''); <del> //} <del> this.output += '{/*' + node.textContent.replace('*/', '* /') + '*/}'; <del> }, <del> <del> /** <del> * Gets a JSX formatted version of the specified attribute from the node <del> * <del> * @param {DOMElement} node <del> * @param {object} attribute <del> * @return {string} <del> */ <del> _getElementAttribute: function(node, attribute) { <del> switch (attribute.name) { <del> case 'style': <del> return this._getStyleAttribute(attribute.value); <del> default: <del> var name = ATTRIBUTE_MAPPING[attribute.name] || attribute.name; <del> var result = name + '='; <del> // Numeric values should be output as {123} not "123" <del> if (isNumeric(attribute.value)) { <del> result += '{' + attribute.value + '}'; <del> } else { <del> result += '"' + attribute.value.replace('"', '&quot;') + '"'; <del> } <del> return result; <del> } <del> }, <del> <del> /** <del> * Gets a JSX formatted version of the specified element styles <del> * <del> * @param {string} styles <del> * @return {string} <del> */ <del> _getStyleAttribute: function(styles) { <del> var jsxStyles = new StyleParser(styles).toJSXString(); <del> return 'style={{' + jsxStyles + '}}'; 
<del> } <del> }; <del> <del> /** <del> * Handles parsing of inline styles <del> * <del> * @param {string} rawStyle Raw style attribute <del> * @constructor <del> */ <del> var StyleParser = function(rawStyle) { <del> this.parse(rawStyle); <del> }; <del> StyleParser.prototype = { <del> /** <del> * Parse the specified inline style attribute value <del> * @param {string} rawStyle Raw style attribute <del> */ <del> parse: function(rawStyle) { <del> this.styles = {}; <del> rawStyle.split(';').forEach(function(style) { <del> style = style.trim(); <del> var firstColon = style.indexOf(':'); <del> var key = style.substr(0, firstColon); <del> var value = style.substr(firstColon + 1).trim(); <del> if (key !== '') { <del> this.styles[key] = value; <del> } <del> }, this); <del> }, <del> <del> /** <del> * Convert the style information represented by this parser into a JSX <del> * string <del> * <del> * @return {string} <del> */ <del> toJSXString: function() { <del> var output = []; <del> for (var key in this.styles) { <del> if (!this.styles.hasOwnProperty(key)) { <del> continue; <del> } <del> output.push(this.toJSXKey(key) + ': ' + this.toJSXValue(this.styles[key])); <del> } <del> return output.join(', '); <del> }, <del> <del> /** <del> * Convert the CSS style key to a JSX style key <del> * <del> * @param {string} key CSS style key <del> * @return {string} JSX style key <del> */ <del> toJSXKey: function(key) { <del> return hyphenToCamelCase(key); <del> }, <del> <del> /** <del> * Convert the CSS style value to a JSX style value <del> * <del> * @param {string} value CSS style value <del> * @return {string} JSX style value <del> */ <del> toJSXValue: function(value) { <del> if (isNumeric(value)) { <del> // If numeric, no quotes <del> return value; <del> } else if (endsWith(value, 'px')) { <del> // "500px" -> 500 <del> return trimEnd(value, 'px'); <del> } else { <del> // Proably a string, wrap it in quotes <del> return '\'' + value.replace(/'/g, '"') + '\''; <del> } <del> } <del> }; 
<del> <del> // Expose public API <del> global.HTMLtoJSX = HTMLtoJSX; <del>}(window)); <ide>\ No newline at end of file <add>// This file has moved to http://reactjs.github.io/react-magic/htmltojsx.min.js <ide><path>docs/html-jsx.md <ide> id: html-jsx <ide> <div class="jsxCompiler"> <ide> <h1>HTML to JSX Compiler</h1> <ide> <div id="jsxCompiler"></div> <del> <script src="js/html-jsx-lib.js"></script> <add> <script src="http://reactjs.github.io/react-magic/htmltojsx.min.js"></script> <ide> <script src="js/html-jsx.js"></script> <ide> </div>
2
Ruby
Ruby
use cache_store for descriptions
fe6b78a3f390a72073b35d164d601bbd84db09a9
<ide><path>Library/Homebrew/cache_store.rb <ide> def created? <ide> cache_path.exist? <ide> end <ide> <add> # Returns the modification time of the cache file (if it already exists). <add> # <add> # @return [Time] <add> def mtime <add> return unless created? <add> cache_path.mtime <add> end <add> <add> # Performs a `select` on the underlying database. <add> # <add> # @return [Array] <add> def select(&block) <add> db.select(&block) <add> end <add> <add> # Returns `true` if the cache is empty. <add> # <add> # @return [Boolean] <add> def empty? <add> db.empty? <add> end <add> <ide> private <ide> <ide> # Lazily loaded database in read/write mode. If this method is called, a <ide> def update!(*) <ide> # stored <ide> # <ide> # @abstract <del> def fetch_type(*) <add> def fetch(*) <ide> raise NotImplementedError <ide> end <ide> <ide> # Deletes data from the cache based on a condition defined in a concrete class <ide> # <ide> # @abstract <del> def flush_cache! <add> def delete!(*) <ide> raise NotImplementedError <ide> end <ide> <ide><path>Library/Homebrew/cleanup.rb <ide> def clean! <ide> cleanup_portable_ruby <ide> return if dry_run? <ide> <del> cleanup_linkage_db <add> cleanup_old_cache_db <ide> rm_ds_store <ide> else <ide> args.each do |arg| <ide> def cleanup_portable_ruby <ide> end <ide> end <ide> <del> def cleanup_linkage_db <del> FileUtils.rm_rf [cache/"linkage.db", cache/"linkage.db.db"] <add> def cleanup_old_cache_db <add> FileUtils.rm_rf [ <add> cache/"desc_cache.json", <add> cache/"linkage.db", <add> cache/"linkage.db.db", <add> ] <ide> end <ide> <ide> def rm_ds_store(dirs = nil) <ide><path>Library/Homebrew/cmd/desc.rb <ide> <ide> require "descriptions" <ide> require "search" <add>require "description_cache_store" <ide> <ide> module Homebrew <ide> module_function <ide> def desc <ide> search_type << :either if ARGV.flag? "--search" <ide> search_type << :name if ARGV.flag? "--name" <ide> search_type << :desc if ARGV.flag? 
"--description" <add> if search_type.size > 1 <add> odie "Pick one, and only one, of -s/--search, -n/--name, or -d/--description." <add> elsif search_type.present? && ARGV.named.empty? <add> odie "You must provide a search term." <add> end <ide> <del> if search_type.empty? <add> results = if search_type.empty? <ide> raise FormulaUnspecifiedError if ARGV.named.empty? <ide> <ide> desc = {} <ide> ARGV.formulae.each { |f| desc[f.full_name] = f.desc } <del> results = Descriptions.new(desc) <del> results.print <del> elsif search_type.size > 1 <del> odie "Pick one, and only one, of -s/--search, -n/--name, or -d/--description." <del> elsif !ARGV.named.empty? <add> Descriptions.new(desc) <add> else <ide> arg = ARGV.named.join(" ") <ide> string_or_regex = query_regexp(arg) <del> results = Descriptions.search(string_or_regex, search_type.first) <del> results.print <del> else <del> odie "You must provide a search term." <add> CacheStoreDatabase.use(:descriptions) do |db| <add> cache_store = DescriptionCacheStore.new(db) <add> Descriptions.search(string_or_regex, search_type.first, cache_store) <add> end <ide> end <add> <add> results.print <ide> end <ide> end <ide><path>Library/Homebrew/cmd/update-report.rb <ide> require "descriptions" <ide> require "cleanup" <ide> require "update_migrator" <add>require "description_cache_store" <ide> <ide> module Homebrew <ide> module_function <ide> def update_report <ide> hub.dump <ide> hub.reporters.each(&:migrate_tap_migration) <ide> hub.reporters.each(&:migrate_formula_rename) <del> Descriptions.update_cache(hub) <add> CacheStoreDatabase.use(:descriptions) do |db| <add> DescriptionCacheStore.new(db) <add> .update_from_report!(hub) <add> end <ide> end <ide> puts if ARGV.include?("--preinstall") <ide> end <ide><path>Library/Homebrew/description_cache_store.rb <add>require "set" <add>require "cache_store" <add>require "searchable" <add> <add># <add># `DescriptionCacheStore` provides methods to fetch and mutate linkage-specific data used 
<add># by the `brew linkage` command <add># <add>class DescriptionCacheStore < CacheStore <add> include Searchable <add> <add> # Inserts a formula description into the cache if it does not exist or <add> # updates the formula description if it does exist <add> # <add> # @param [String] formula_name: the name of the formula to set <add> # @param [String] description: the description from the formula to set <add> # @return [nil] <add> def update!(formula_name, description) <add> database.set(formula_name, description) <add> end <add> <add> # Delete the formula description from the `DescriptionCacheStore` <add> # <add> # @param [String] formula_name: the name of the formula to delete <add> # @return [nil] <add> def delete!(formula_name) <add> database.delete(formula_name) <add> end <add> <add> # If the database is empty `update!` it with all known formulae. <add> # @return [nil] <add> def populate_if_empty! <add> return unless database.empty? <add> Formula.each { |f| update!(f.full_name, f.desc) } <add> end <add> <add> # Use an update report to update the `DescriptionCacheStore`. <add> # <add> # @param [Report] report: an update report generated by cmd/update.rb <add> # @return [nil] <add> def update_from_report!(report) <add> return if report.empty? <add> <add> renamings = report.select_formula(:R) <add> alterations = report.select_formula(:A) + <add> report.select_formula(:M) + <add> renamings.map(&:last) <add> <add> update_from_formula_names!(alterations) <add> delete_from_formula_names!(report.select_formula(:D) + <add> renamings.map(&:first)) <add> end <add> <add> # Use an array of formulae names to update the `DescriptionCacheStore`. <add> # <add> # @param [Array] formula_names: the formulae to update. 
<add> # @return [nil]
<add> def update_from_formula_names!(formula_names)
<add> formula_names.each do |name|
<add> begin
<add> update!(name, Formula[name].desc)
<add> rescue FormulaUnavailableError, *FormulaVersions::IGNORED_EXCEPTIONS
<add> delete!(name)
<add> end
<add> end
<add> end
<add>
<add> # Use an array of formulae names to delete them from the `DescriptionCacheStore`.
<add> #
<add> # @param [Array] formula_names: the formulae to delete.
<add> # @return [nil]
<add> def delete_from_formula_names!(formula_names)
<add> formula_names.each(&method(:delete!))
<add> end
<add>
<add> private
<add>
<add> # Not implemented; access is through `Searchable`.
<add> def fetch
<add> super
<add> end
<add>
<add> # `select` from the underlying database.
<add> def select(&block)
<add> database.select(&block)
<add> end
<add>end
<ide><path>Library/Homebrew/descriptions.rb
<ide> class Descriptions
<ide> extend Homebrew::Search
<ide>
<del> CACHE_FILE = HOMEBREW_CACHE + "desc_cache.json"
<del>
<del> def self.cache
<del> @cache || load_cache
<del> end
<del>
<del> # If the cache file exists, load it into, and return, a hash; otherwise,
<del> # return nil.
<del> def self.load_cache
<del> @cache = JSON.parse(CACHE_FILE.read) if CACHE_FILE.exist?
<del> end
<del>
<del> # Write the cache to disk after ensuring the existence of the containing
<del> # directory.
<del> def self.save_cache
<del> HOMEBREW_CACHE.mkpath
<del> CACHE_FILE.atomic_write JSON.dump(@cache)
<del> end
<del>
<del> # Create a hash mapping all formulae to their descriptions;
<del> # save it for future use.
<del> def self.generate_cache
<del> @cache = {}
<del> Formula.each do |f|
<del> @cache[f.full_name] = f.desc
<del> end
<del> save_cache
<del> end
<del>
<del> # Return true if the cache exists, and none of the Taps
<del> # repos were updated more recently than it was.
<del> def self.cache_fresh?
<del> return false unless CACHE_FILE.exist?
<del> <del> cache_mtime = File.mtime(CACHE_FILE) <del> <del> Tap.each do |tap| <del> next unless tap.git? <del> <del> repo_mtime = File.mtime(tap.path/".git/refs/heads/master") <del> return false if repo_mtime > cache_mtime <del> end <del> <del> true <del> end <del> <del> # Create the cache if it doesn't already exist. <del> def self.ensure_cache <del> generate_cache unless cache_fresh? && cache <del> end <del> <del> # Take a {Report}, as generated by cmd/update.rb. <del> # Unless the cache file exists, do nothing. <del> # If it does exist, but the Report is empty, just touch the cache file. <del> # Otherwise, use the report to update the cache. <del> def self.update_cache(report) <del> return unless CACHE_FILE.exist? <del> <del> if report.empty? <del> FileUtils.touch CACHE_FILE <del> else <del> renamings = report.select_formula(:R) <del> alterations = report.select_formula(:A) + report.select_formula(:M) + <del> renamings.map(&:last) <del> cache_formulae(alterations, save: false) <del> uncache_formulae(report.select_formula(:D) + <del> renamings.map(&:first)) <del> end <del> end <del> <del> # Given an array of formula names, add them and their descriptions to the <del> # cache. Save the updated cache to disk, unless explicitly told not to. <del> def self.cache_formulae(formula_names, options = { save: true }) <del> return unless cache <del> <del> formula_names.each do |name| <del> begin <del> @cache[name] = Formulary.factory(name).desc <del> rescue FormulaUnavailableError, *FormulaVersions::IGNORED_EXCEPTIONS <del> @cache.delete(name) <del> end <del> end <del> save_cache if options[:save] <del> end <del> <del> # Given an array of formula names, remove them and their descriptions from <del> # the cache. Save the updated cache to disk, unless explicitly told not to. 
<del> def self.uncache_formulae(formula_names, options = { save: true }) <del> return unless cache <del> <del> formula_names.each { |name| @cache.delete(name) } <del> save_cache if options[:save] <del> end <del> <ide> # Given a regex, find all formulae whose specified fields contain a match. <del> def self.search(string_or_regex, field = :either) <del> ensure_cache <del> <del> @cache.extend(Searchable) <add> def self.search(string_or_regex, field, cache_store) <add> cache_store.populate_if_empty! <ide> <ide> results = case field <ide> when :name <del> @cache.search(string_or_regex) { |name, _| name } <add> cache_store.search(string_or_regex) { |name, _| name } <ide> when :desc <del> @cache.search(string_or_regex) { |_, desc| desc } <add> cache_store.search(string_or_regex) { |_, desc| desc } <ide> when :either <del> @cache.search(string_or_regex) <add> cache_store.search(string_or_regex) <ide> end <ide> <ide> new(results) <ide> def print <ide> blank = Formatter.warning("[no description]") <ide> @descriptions.keys.sort.each do |full_name| <ide> short_name = short_names[full_name] <del> printed_name = (short_name_counts[short_name] == 1) ? short_name : full_name <add> printed_name = if short_name_counts[short_name] == 1 <add> short_name <add> else <add> full_name <add> end <ide> description = @descriptions[full_name] || blank <ide> puts "#{Tty.bold}#{printed_name}:#{Tty.reset} #{description}" <ide> end <ide> def short_names <ide> <ide> def short_name_counts <ide> @short_name_counts ||= <del> short_names.values.each_with_object(Hash.new(0)) { |name, counts| counts[name] += 1 } <add> short_names.values <add> .each_with_object(Hash.new(0)) do |name, counts| <add> counts[name] += 1 <add> end <ide> end <ide> end <ide><path>Library/Homebrew/keg.rb <ide> def uninstall <ide> CacheStoreDatabase.use(:linkage) do |db| <ide> break unless db.created? <ide> <del> LinkageCacheStore.new(path, db).flush_cache! <add> LinkageCacheStore.new(path, db).delete! 
<ide> end <ide> <ide> path.rmtree <ide><path>Library/Homebrew/linkage_cache_store.rb <ide> def update!(hash_values) <ide> # @param [Symbol] the type to fetch from the `LinkageCacheStore` <ide> # @raise [TypeError] error if the type is not in `HASH_LINKAGE_TYPES` <ide> # @return [Hash] <del> def fetch_type(type) <add> def fetch(type) <ide> unless HASH_LINKAGE_TYPES.include?(type) <ide> raise TypeError, <<~EOS <ide> Can't fetch types that are not defined for the linkage store <ide> def fetch_type(type) <ide> fetch_hash_values(type) <ide> end <ide> <add> # Delete the keg from the `LinkageCacheStore` <add> # <ide> # @return [nil] <del> def flush_cache! <add> def delete! <ide> database.delete(@keg_path) <ide> end <ide> <ide><path>Library/Homebrew/linkage_checker.rb <ide> def check_dylibs(rebuild_cache:) <ide> keg_files_dylibs = nil <ide> <ide> if rebuild_cache <del> store&.flush_cache! <add> store&.delete! <ide> else <del> keg_files_dylibs = store&.fetch_type(:keg_files_dylibs) <add> keg_files_dylibs = store&.fetch(:keg_files_dylibs) <ide> end <ide> <ide> keg_files_dylibs_was_empty = false <ide><path>Library/Homebrew/search.rb <ide> require "searchable" <add>require "description_cache_store" <ide> <ide> module Homebrew <ide> module Search <ide> def query_regexp(query) <ide> <ide> def search_descriptions(string_or_regex) <ide> ohai "Formulae" <del> Descriptions.search(string_or_regex, :desc).print <add> CacheStoreDatabase.use(:descriptions) do |db| <add> cache_store = DescriptionCacheStore.new(db) <add> Descriptions.search(string_or_regex, :desc, cache_store).print <add> end <ide> end <ide> <ide> def search_taps(query, silent: false) <ide><path>Library/Homebrew/tap.rb <ide> require "extend/cachable" <ide> require "readall" <add>require "description_cache_store" <ide> <ide> # a {Tap} is used to extend the formulae provided by Homebrew core. <ide> # Usually, it's synced with a remote git repository. 
And it's likely <ide> def install(options = {}) <ide> <ide> formatted_contents = contents.presence&.to_sentence&.dup&.prepend(" ") <ide> puts "Tapped#{formatted_contents} (#{path.abv})." unless quiet <del> Descriptions.cache_formulae(formula_names) <add> CacheStoreDatabase.use(:descriptions) do |db| <add> DescriptionCacheStore.new(db) <add> .update_from_formula_names!(formula_names) <add> end <ide> <ide> return if options[:clone_target] <ide> return unless private? <ide> def uninstall <ide> formatted_contents = contents.presence&.to_sentence&.dup&.prepend(" ") <ide> <ide> unpin if pinned? <del> Descriptions.uncache_formulae(formula_names) <add> CacheStoreDatabase.use(:descriptions) do |db| <add> DescriptionCacheStore.new(db) <add> .delete_from_formula_names!(formula_names) <add> end <ide> Utils::Link.unlink_manpages(path) <ide> Utils::Link.unlink_completions(path) <ide> path.rmtree <ide><path>Library/Homebrew/test/cmd/desc_spec.rb <ide> describe "brew desc", :integration_test do <del> let(:desc_cache) { HOMEBREW_CACHE/"desc_cache.json" } <del> <ide> it "shows a given Formula's description" do <ide> setup_test_formula "testball" <ide> <ide> .and not_to_output.to_stderr <ide> .and be_a_success <ide> end <del> <del> describe "--description" do <del> it "creates a description cache" do <del> expect(desc_cache).not_to exist <del> <del> expect { brew "desc", "--description", "testball" }.to be_a_success <del> <del> expect(desc_cache).to exist <del> end <del> end <ide> end <ide><path>Library/Homebrew/test/cmd/search_spec.rb <ide> end <ide> <ide> describe "--desc" do <del> let(:desc_cache) { HOMEBREW_CACHE/"desc_cache.json" } <add> let(:desc_cache) { HOMEBREW_CACHE/"descriptions.json" } <ide> <ide> it "supports searching in descriptions and creates a description cache" do <ide> expect(desc_cache).not_to exist <ide><path>Library/Homebrew/test/description_cache_store_spec.rb <add>require "description_cache_store" <add> <add>describe DescriptionCacheStore do <add> 
subject(:cache_store) { described_class.new(database) } <add> <add> let(:database) { double("database") } <add> let(:formula_name) { "test_name" } <add> let(:description) { "test_description" } <add> <add> describe "#update!" do <add> it "sets the formula description" do <add> expect(database).to receive(:set).with(formula_name, description) <add> cache_store.update!(formula_name, description) <add> end <add> end <add> <add> describe "#delete!" do <add> it "deletes the formula description" do <add> expect(database).to receive(:delete).with(formula_name) <add> cache_store.delete!(formula_name) <add> end <add> end <add> <add> describe "#update_from_report!" do <add> let(:report) { double(select_formula: [], empty?: false) } <add> <add> it "reads from the report" do <add> cache_store.update_from_report!(report) <add> end <add> end <add> <add> describe "#update_from_formula_names!" do <add> it "sets the formulae descriptions" do <add> f = formula do <add> url "url-1" <add> desc "desc" <add> end <add> expect(Formulary).to receive(:factory).with(f.name).and_return(f) <add> expect(database).to receive(:set).with(f.name, f.desc) <add> cache_store.update_from_formula_names!([f.name]) <add> end <add> end <add> <add> describe "#delete_from_formula_names!" do <add> it "deletes the formulae descriptions" do <add> expect(database).to receive(:delete).with(formula_name) <add> cache_store.delete_from_formula_names!([formula_name]) <add> end <add> end <add>end <ide><path>Library/Homebrew/test/linkage_cache_store_spec.rb <ide> end <ide> end <ide> <del> describe "#flush_cache!" do <add> describe "#delete!" do <ide> it "calls `delete` on the `database` with `keg_name` as parameter" do <ide> expect(database).to receive(:delete).with(keg_name) <del> subject.flush_cache! <add> subject.delete! 
<ide> end <ide> end <ide> <del> describe "#fetch_type" do <add> describe "#fetch" do <ide> context "`HASH_LINKAGE_TYPES.include?(type)`" do <ide> before do <ide> expect(database).to receive(:get).with(keg_name).and_return(nil) <ide> end <ide> <ide> it "returns a `Hash` of values" do <del> expect(subject.fetch_type(:keg_files_dylibs)).to be_an_instance_of(Hash) <add> expect(subject.fetch(:keg_files_dylibs)).to be_an_instance_of(Hash) <ide> end <ide> end <ide> <ide> context "`type` not in `HASH_LINKAGE_TYPES`" do <ide> it "raises a `TypeError` if the `type` is not supported" do <del> expect { subject.fetch_type(:bad_type) }.to raise_error(TypeError) <add> expect { subject.fetch(:bad_type) }.to raise_error(TypeError) <ide> end <ide> end <ide> end
15
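The Homebrew record above replaces a hand-rolled JSON description cache with a `DescriptionCacheStore` offering `update!`, `delete!`, `populate_if_empty!`, and field-scoped search. A minimal Python sketch of that shape follows; the class and method names here are illustrative stand-ins, not Homebrew's actual API.

```python
# Sketch of a description cache keyed by formula name, with the
# name/desc/either search modes the patch wires into `brew search`.
import re

class DescriptionCache:
    def __init__(self):
        self._db = {}

    def update(self, name, desc):
        # Insert or overwrite a description (Homebrew's update!).
        self._db[name] = desc

    def delete(self, name):
        # Drop a formula from the cache (Homebrew's delete!).
        self._db.pop(name, None)

    def search(self, pattern, field="either"):
        # Case-insensitive match against name, description, or both.
        rx = re.compile(pattern, re.IGNORECASE)

        def hit(name, desc):
            if field == "name":
                return rx.search(name)
            if field == "desc":
                return rx.search(desc or "")
            return rx.search(name) or rx.search(desc or "")

        return {n: d for n, d in self._db.items() if hit(n, d)}

cache = DescriptionCache()
cache.update("wget", "Internet file retriever")
cache.update("jq", "Lightweight JSON processor")
cache.delete("wget")
results = cache.search("json", field="desc")
```

The real store persists through a `CacheStoreDatabase`; a dict keeps the sketch self-contained.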
PHP
PHP
add array to url action method argument
74f1bdcd40000f8acf5e0b10f12370d50c5fea47
<ide><path>src/Illuminate/Support/Facades/URL.php <ide> /** <ide> * @method static \Illuminate\Contracts\Routing\UrlGenerator setRootControllerNamespace(string $rootNamespace) <ide> * @method static bool hasValidSignature(\Illuminate\Http\Request $request, bool $absolute = true) <del> * @method static string action(string $action, $parameters = [], bool $absolute = true) <add> * @method static string action(string|array $action, $parameters = [], bool $absolute = true) <ide> * @method static string asset(string $path, bool $secure = null) <ide> * @method static string secureAsset(string $path) <ide> * @method static string current()
1
Javascript
Javascript
replace map with array in cluster-net-listen tests
7d4dedbf6a34b8744fa2c58adff12737bf5e0e35
<ide><path>test/sequential/test-cluster-net-listen-ipv6only-none.js <ide> const host = '::'; <ide> const WORKER_ACCOUNT = 3; <ide> <ide> if (cluster.isMaster) { <del> const workers = new Map(); <add> const workers = []; <ide> <ide> const countdown = new Countdown(WORKER_ACCOUNT, () => { <ide> // Make sure the `ipv6Only` option works. This is the part of the test that <ide> if (cluster.isMaster) { <ide> countdown.dec(); <ide> })); <ide> <del> workers.set(i, worker); <add> workers[i] = worker; <ide> } <ide> } else { <ide> net.createServer().listen({ <ide><path>test/sequential/test-cluster-net-listen-ipv6only-rr.js <ide> const host = '::'; <ide> const WORKER_ACCOUNT = 3; <ide> <ide> if (cluster.isMaster) { <del> const workers = new Map(); <add> const workers = []; <ide> let address; <ide> <ide> const countdown = new Countdown(WORKER_ACCOUNT, () => { <ide> if (cluster.isMaster) { <ide> countdown.dec(); <ide> })); <ide> <del> workers.set(i, worker); <add> workers[i] = worker; <ide> } <ide> } else { <ide> // As the cluster member has the potential to grab any port
2
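The cluster-test record above swaps a `Map` for a plain array because workers are keyed by small consecutive integers. The same design choice, sketched in Python with hypothetical stand-ins for the forked workers:

```python
# When keys are dense consecutive integers, a list indexed by worker id
# is a simpler fit than a general-purpose mapping.
WORKER_COUNT = 3

workers_map = {}                       # before: Map-style container
workers_list = [None] * WORKER_COUNT   # after: dense integer index

for i in range(WORKER_COUNT):
    worker = f"worker-{i}"             # stand-in for cluster.fork()
    workers_map[i] = worker
    workers_list[i] = worker
```

Both hold the same data; the list makes the integer-indexed intent explicit.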
Go
Go
add platform to container, init on fromdisk()
f97fbba5cec3d27099e230f7c1dc278c54180d74
<ide><path>container/container.go <ide> import ( <ide> "net" <ide> "os" <ide> "path/filepath" <add> "runtime" <ide> "strconv" <ide> "strings" <ide> "sync" <ide> type Container struct { <ide> LogPath string <ide> Name string <ide> Driver string <add> Platform string <ide> // MountLabel contains the options for the 'mount' command <ide> MountLabel string <ide> ProcessLabel string <ide> func (container *Container) FromDisk() error { <ide> return err <ide> } <ide> <add> // Ensure the platform is set if blank. Assume it is the platform of the <add> // host OS if not, to ensure containers created before multiple-platform <add> // support are migrated <add> if container.Platform == "" { <add> container.Platform = runtime.GOOS <add> } <add> <ide> if err := label.ReserveLabel(container.ProcessLabel); err != nil { <ide> return err <ide> }
1
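The Moby record above back-fills a `Platform` field in `FromDisk()` so containers created before multi-platform support default to the host OS. A rough Python sketch of that migrate-on-load pattern, with illustrative names (not Moby's API):

```python
# On deserialization, default a field that older versions never wrote,
# so pre-existing records migrate transparently.
import json
import sys

def load_container(raw):
    container = json.loads(raw)
    if not container.get("Platform"):
        # Host OS stand-in for Go's runtime.GOOS.
        container["Platform"] = sys.platform
    return container

old = load_container('{"Name": "web"}')                          # saved before the field existed
new = load_container('{"Name": "db", "Platform": "windows"}')    # already populated; untouched
```

Filling the gap at load time avoids a one-shot migration pass over existing state.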
Javascript
Javascript
hide the overlay when fabric/tm are both disabled
951785de8a6a7df1b782701f8c480ae5ba691e38
<ide><path>Libraries/ReactNative/ReactNativeArchitectureIndicator.js <ide> function ReactNativeArchitectureIndicator(props: {| <ide> parts.push('TM'); <ide> } <ide> } <add> <add> if (parts.length === 0) { <add> return null; <add> } <add> <ide> return ( <ide> <View style={styles.container}> <ide> <Text style={styles.text}>{parts.join('+')}</Text>
1
PHP
PHP
apply fixes from styleci
dd58def4e5fca939c2f51fe58702a996ec29c520
<ide><path>src/Illuminate/Routing/Router.php <ide> namespace Illuminate\Routing; <ide> <ide> use Closure; <del>use Illuminate\Support\Arr; <ide> use Illuminate\Support\Str; <ide> use Illuminate\Http\Request; <ide> use Illuminate\Http\Response;
1
Javascript
Javascript
add test for debugger restart message issue
b2fa795159190e96bab2a4c13e91fab6e1aa8e6b
<ide><path>test/known_issues/test-debugger-restart-message.js <add>'use strict'; <add> <add>// Refs: https://github.com/nodejs/node/issues/39272 <add> <add>const common = require('../common'); <add> <add>const assert = require('assert'); <add> <add>// When this is moved out of known_issues, this skip can be removed. <add>if (common.isOSX) { <add> assert.fail('does not fail reliably on macOS in CI'); <add>} <add> <add>// When this is moved out of known_issues, this can be removed and replaced with <add>// the commented-out use of common.skipIfInspectorDisabled() below. <add>if (!process.features.inspector) { <add> assert.fail('Known issues test should fail, so if the inspector is disabled'); <add>} <add> <add>// Will need to uncomment this when moved out of known_issues. <add>// common.skipIfInspectorDisabled(); <add> <add>// This can be reduced to 2 or even 1 (and the loop removed) once the debugger <add>// is fixed. It's set higher to make sure that the error is tripped reliably <add>// in CI. On most systems, the error will be tripped on the first test, but <add>// on a few platforms in CI, it needs to be many times. <add>const RESTARTS = 16; <add> <add>const fixtures = require('../common/fixtures'); <add>const startCLI = require('../common/debugger'); <add> <add>// Using `restart` should result in only one "Connect/For help" message. 
<add>{ <add> const script = fixtures.path('debugger', 'three-lines.js'); <add> const cli = startCLI([script]); <add> <add> function onFatal(error) { <add> cli.quit(); <add> throw error; <add> } <add> <add> const listeningRegExp = /Debugger listening on/g; <add> <add> cli.waitForInitialBreak() <add> .then(() => cli.waitForPrompt()) <add> .then(() => { <add> assert.strictEqual(cli.output.match(listeningRegExp).length, 1); <add> }) <add> .then(async () => { <add> for (let i = 0; i < RESTARTS; i++) { <add> await cli.stepCommand('restart'); <add> assert.strictEqual(cli.output.match(listeningRegExp).length, 1); <add> } <add> }) <add> .then(() => cli.quit()) <add> .then(null, onFatal); <add>}
1
Javascript
Javascript
get router history working with flux
1c6e7612995e594312ddb86dde388e8d08e83475
<ide><path>client/index.js <ide> import { hydrate } from 'thundercats'; <ide> import { render$ } from 'thundercats-react'; <ide> <ide> import { app$ } from '../common/app'; <add>import synchroniseHistory from './synchronise-history'; <ide> <ide> const debug = debugFactory('fcc:client'); <ide> const DOMContianer = document.getElementById('fcc'); <ide> const appLocation = createLocation( <ide> location.pathname + location.search <ide> ); <ide> <del>function location$(history) { <del> return Rx.Observable.create(function(observer) { <del> const dispose = history.listen(function(location) { <del> observer.onNext(location); <del> }); <del> <del> return Rx.Disposable.create(() => { <del> dispose(); <del> }); <del> }); <del>} <del> <ide> // returns an observable <ide> app$({ history, location: appLocation }) <ide> .flatMap( <ide> app$({ history, location: appLocation }) <ide> ({ nextLocation, props }, appCat) => ({ nextLocation, props, appCat }) <ide> ) <ide> .doOnNext(({ appCat }) => { <del> const appActions = appCat.getActions('appActions'); <del> const appStore = appCat.getStore('appStore'); <del> <del> const route$ = location$(history) <del> .pluck('pathname') <del> .distinctUntilChanged(); <add> const { updateLocation, goTo, goBack } = appCat.getActions('appActions'); <add> const appStore$ = appCat.getStore('appStore'); <ide> <del> appStore <del> .pluck('route') <del> .filter(route => !!route) <del> .withLatestFrom( <del> route$, <del> (nextRoute, currentRoute) => ({ currentRoute, nextRoute }) <del> ) <del> // only continue when route change requested <del> .filter(({ currentRoute, nextRoute }) => currentRoute !== nextRoute) <del> .doOnNext(({ nextRoute }) => { <del> debug('route change', nextRoute); <del> history.pushState(history.state, nextRoute); <del> }) <del> .subscribeOnError(err => console.error(err)); <add> const routerState$ = appStore$ <add> .map(({ location }) => location) <add> .distinctUntilChanged( <add> location => location && location.key ? 
location.key : location <add> ); <ide> <del> appActions.goBack.subscribe(function() { <del> history.goBack(); <del> }); <del> <del> appActions <del> .updateRoute <del> .pluck('route') <del> .doOnNext(route => { <del> debug('update route', route); <del> history.pushState(history.state, route); <del> }) <del> .subscribeOnError(err => console.error(err)); <add> synchroniseHistory( <add> history, <add> updateLocation, <add> goTo, <add> goBack, <add> routerState$ <add> ); <ide> }) <ide> .flatMap(({ props, appCat }) => { <ide> props.history = history; <ide><path>client/synchronise-history.js <add>import { Disposable, Observable } from 'rx'; <add> <add>export function location$(history) { <add> return Observable.create(function(observer) { <add> const dispose = history.listen(function(location) { <add> observer.onNext(location); <add> }); <add> <add> return Disposable.create(() => { <add> dispose(); <add> }); <add> }); <add>} <add> <add>const emptyLocation = { <add> pathname: '', <add> search: '', <add> hash: '' <add>}; <add> <add>let prevKey; <add>let isSyncing = false; <add>export default function synchroniseHistory( <add> history, <add> updateLocation, <add> goTo, <add> goBack, <add> routerState$ <add>) { <add> routerState$.subscribe( <add> location => { <add> <add> if (!location) { <add> return null; <add> } <add> <add> // store location has changed, update history <add> if (location.key !== prevKey) { <add> isSyncing = true; <add> history.transitionTo({ ...emptyLocation, ...location }); <add> isSyncing = false; <add> } <add> } <add> ); <add> <add> location$(history) <add> .doOnNext(location => { <add> prevKey = location.key; <add> <add> if (isSyncing) { <add> return null; <add> } <add> <add> return updateLocation(location); <add> }) <add> .subscribe(() => {}); <add> <add> goTo <add> .doOnNext((route = '/') => { <add> history.push(route); <add> }) <add> .subscribe(() => {}); <add> <add> goBack <add> .doOnNext(() => { <add> history.goBack(); <add> }) <add> 
.subscribe(() => {}); <add>} <ide><path>common/app/flux/Actions.js <ide> export default Actions({ <ide> }); <ide> }, <ide> <del> updateRoute(route) { <del> return { route }; <del> }, <del> goBack: null <add> // routing <add> goTo: null, <add> goBack: null, <add> updateLocation(location) { <add> return { <add> transform(state) { <add> return { ...state, location }; <add> } <add> }; <add> } <ide> }); <ide><path>common/app/flux/Store.js <ide> export default Store({ <ide> value: initValue <ide> }, <ide> init({ instance: appStore, args: [cat] }) { <del> const { updateRoute, getUser, setTitle } = cat.getActions('appActions'); <add> const { <add> updateLocation, <add> getUser, <add> setTitle <add> } = cat.getActions('appActions'); <add> <ide> const register = createRegistrar(appStore); <ide> const { <ide> toggleQuestions, <ide> export default Store({ <ide> } = cat.getActions('hikesActions'); <ide> <ide> // app <del> register(setter(fromMany(getUser, setTitle, updateRoute))); <add> register( <add> fromMany( <add> setter( <add> fromMany( <add> getUser, <add> setTitle <add> ) <add> ), <add> updateLocation <add> ) <add> ); <ide> <ide> // hikes <ide> register( <ide><path>common/app/routes/Hikes/flux/Actions.js <ide> export default Actions({ <ide> const currentHike = findNextHike(hikes, id); <ide> <ide> // go to next route <del> state.route = currentHike && currentHike.dashedName ? <del> `/hikes/${ currentHike.dashedName }` : <del> '/hikes'; <add> state.location = { <add> action: 'PUSH', <add> pathname: currentHike && currentHike.dashedName ? <add> `/hikes/${ currentHike.dashedName }` : <add> '/hikes' <add> }; <ide> <ide> const hikesApp = { <ide> ...state.hikesApp,
5
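The freeCodeCamp record above synchronises a flux store with router history using an `isSyncing` flag so a store-driven `history.transitionTo` does not echo back into the store. A compact Python sketch of that re-entrancy guard (names are illustrative):

```python
# When the store pushes a location into history, the history listener
# fires synchronously; the flag stops that echo from being dispatched
# back into the store as a fresh update.
class HistorySync:
    def __init__(self):
        self.syncing = False
        self.history = []        # stand-in for browser history entries
        self.store_updates = []  # stand-in for dispatched store actions

    def on_history_change(self, location):
        # History listener: ignore echoes of our own transitions.
        if self.syncing:
            return
        self.store_updates.append(location)

    def on_store_change(self, location):
        # Store subscription: transition history under the guard.
        self.syncing = True
        self.history.append(location)
        self.on_history_change(location)  # listener fires synchronously
        self.syncing = False

sync = HistorySync()
sync.on_store_change("/hikes")    # store-driven: no echo back to the store
sync.on_history_change("/about")  # user navigation: reaches the store
```

Without the guard, each store update would re-enter the store, looping forever.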
Text
Text
add missing `blank=true` to model in tutorial
41d3fe09cf3dd80cb8efed13dd886645ae7be470
<ide><path>docs/tutorial/1-serialization.md <ide> For the purposes of this tutorial we're going to start by creating a simple `Sni <ide> <ide> class Snippet(models.Model): <ide> created = models.DateTimeField(auto_now_add=True) <del> title = models.CharField(max_length=100, default='') <add> title = models.CharField(max_length=100, blank=True, default='') <ide> code = models.TextField() <ide> linenos = models.BooleanField(default=False) <ide> language = models.CharField(choices=LANGUAGE_CHOICES,
1
Go
Go
remove mkdirallnewas and update tests
6150ebf7b483197f4b8755df60e750b6410e95ca
<ide><path>libcontainerd/client_unix.go <ide> func (clnt *client) prepareBundleDir(uid, gid int) (string, error) { <ide> } <ide> if os.IsNotExist(err) || fi.Mode()&1 == 0 { <ide> p = fmt.Sprintf("%s.%d.%d", p, uid, gid) <del> if err := idtools.MkdirAs(p, 0700, uid, gid); err != nil && !os.IsExist(err) { <add> if err := idtools.MkdirAndChown(p, 0700, idtools.IDPair{uid, gid}); err != nil && !os.IsExist(err) { <ide> return "", err <ide> } <ide> } <ide> func (clnt *client) Create(containerID string, checkpoint string, checkpointDir <ide> } <ide> }() <ide> <del> if err := idtools.MkdirAllAs(container.dir, 0700, uid, gid); err != nil && !os.IsExist(err) { <add> if err := idtools.MkdirAllAndChown(container.dir, 0700, idtools.IDPair{uid, gid}); err != nil && !os.IsExist(err) { <ide> return err <ide> } <ide> <ide><path>pkg/chrootarchive/archive.go <ide> func untarHandler(tarArchive io.Reader, dest string, options *archive.TarOptions <ide> options.ExcludePatterns = []string{} <ide> } <ide> <del> rootUID, rootGID, err := idtools.GetRootUIDGID(options.UIDMaps, options.GIDMaps) <add> idMappings := idtools.NewIDMappingsFromMaps(options.UIDMaps, options.GIDMaps) <add> rootIDs, err := idMappings.RootPair() <ide> if err != nil { <ide> return err <ide> } <ide> <ide> dest = filepath.Clean(dest) <ide> if _, err := os.Stat(dest); os.IsNotExist(err) { <del> if err := idtools.MkdirAllNewAs(dest, 0755, rootUID, rootGID); err != nil { <add> if err := idtools.MkdirAllAndChownNew(dest, 0755, rootIDs); err != nil { <ide> return err <ide> } <ide> } <ide><path>pkg/idtools/idtools.go <ide> func MkdirAllAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { <ide> return mkdirAs(path, mode, ownerUID, ownerGID, true, true) <ide> } <ide> <del>// MkdirAllNewAs creates a directory (include any along the path) and then modifies <del>// ownership ONLY of newly created directories to the requested uid/gid. 
If the <del>// directories along the path exist, no change of ownership will be performed <del>// Deprecated: Use MkdirAllAndChownNew <del>func MkdirAllNewAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { <del> return mkdirAs(path, mode, ownerUID, ownerGID, true, false) <del>} <del> <ide> // MkdirAs creates a directory and then modifies ownership to the requested uid/gid. <ide> // If the directory already exists, this function still changes ownership <ide> // Deprecated: Use MkdirAndChown with a IDPair <ide><path>pkg/idtools/idtools_unix_test.go <ide> import ( <ide> "path/filepath" <ide> "syscall" <ide> "testing" <add> <add> "github.com/stretchr/testify/require" <ide> ) <ide> <ide> type node struct { <ide> func TestMkdirAllAs(t *testing.T) { <ide> } <ide> } <ide> <del>func TestMkdirAllNewAs(t *testing.T) { <del> <add>func TestMkdirAllAndChownNew(t *testing.T) { <ide> dirName, err := ioutil.TempDir("", "mkdirnew") <del> if err != nil { <del> t.Fatalf("Couldn't create temp dir: %v", err) <del> } <add> require.NoError(t, err) <ide> defer os.RemoveAll(dirName) <ide> <ide> testTree := map[string]node{ <ide> func TestMkdirAllNewAs(t *testing.T) { <ide> "lib/x86_64": {45, 45}, <ide> "lib/x86_64/share": {1, 1}, <ide> } <del> <del> if err := buildTree(dirName, testTree); err != nil { <del> t.Fatal(err) <del> } <add> require.NoError(t, buildTree(dirName, testTree)) <ide> <ide> // test adding a directory to a pre-existing dir; only the new dir is owned by the uid/gid <del> if err := MkdirAllNewAs(filepath.Join(dirName, "usr", "share"), 0755, 99, 99); err != nil { <del> t.Fatal(err) <del> } <add> err = MkdirAllAndChownNew(filepath.Join(dirName, "usr", "share"), 0755, IDPair{99, 99}) <add> require.NoError(t, err) <add> <ide> testTree["usr/share"] = node{99, 99} <ide> verifyTree, err := readTree(dirName, "") <del> if err != nil { <del> t.Fatal(err) <del> } <del> if err := compareTrees(testTree, verifyTree); err != nil { <del> t.Fatal(err) <del> } <add> 
require.NoError(t, err) <add> require.NoError(t, compareTrees(testTree, verifyTree)) <ide> <ide> // test 2-deep new directories--both should be owned by the uid/gid pair <del> if err := MkdirAllNewAs(filepath.Join(dirName, "lib", "some", "other"), 0755, 101, 101); err != nil { <del> t.Fatal(err) <del> } <add> err = MkdirAllAndChownNew(filepath.Join(dirName, "lib", "some", "other"), 0755, IDPair{101, 101}) <add> require.NoError(t, err) <ide> testTree["lib/some"] = node{101, 101} <ide> testTree["lib/some/other"] = node{101, 101} <ide> verifyTree, err = readTree(dirName, "") <del> if err != nil { <del> t.Fatal(err) <del> } <del> if err := compareTrees(testTree, verifyTree); err != nil { <del> t.Fatal(err) <del> } <add> require.NoError(t, err) <add> require.NoError(t, compareTrees(testTree, verifyTree)) <ide> <ide> // test a directory that already exists; should NOT be chowned <del> if err := MkdirAllNewAs(filepath.Join(dirName, "usr"), 0755, 102, 102); err != nil { <del> t.Fatal(err) <del> } <add> err = MkdirAllAndChownNew(filepath.Join(dirName, "usr"), 0755, IDPair{102, 102}) <add> require.NoError(t, err) <ide> verifyTree, err = readTree(dirName, "") <del> if err != nil { <del> t.Fatal(err) <del> } <del> if err := compareTrees(testTree, verifyTree); err != nil { <del> t.Fatal(err) <del> } <add> require.NoError(t, err) <add> require.NoError(t, compareTrees(testTree, verifyTree)) <ide> } <ide> <ide> func TestMkdirAs(t *testing.T) {
4
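The idtools record above centres on `MkdirAllAndChownNew`: create every missing directory along a path but change ownership only for the ones actually created. A Python sketch of those semantics; ownership is simulated in a dict rather than calling `os.chown`, which needs root:

```python
# Walk up to the first existing ancestor, then create (and "chown")
# only the directories below it; pre-existing dirs keep their owner.
import os
import tempfile

def mkdir_all_and_chown_new(path, owner, owners):
    parts = []
    head = path
    while not os.path.isdir(head):
        head, tail = os.path.split(head)
        parts.append(tail)
    for part in reversed(parts):
        head = os.path.join(head, part)
        os.mkdir(head)
        owners[head] = owner     # recorded only for newly created dirs

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "usr"))   # pre-existing: must not be chowned
owners = {}
mkdir_all_and_chown_new(os.path.join(root, "usr", "share", "man"), (99, 99), owners)
```

This mirrors the test expectations in the patch: `usr` keeps its owner, while `usr/share` and deeper get the new uid/gid pair.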
Text
Text
fix markdown style in doc
5eb96dfbbaf21db9e400a4024d37dd51c7965d4e
<ide><path>libnetwork/docs/bridge.md <ide> It creates a single bridge, called `docker0` by default, and attaches a `veth pa <ide> <ide> The bridge driver supports configuration through the Docker Daemon flags. <ide> <del>## Usage <add>## Usage <ide> <ide> This driver is supported for the default "bridge" network only and it cannot be used for any other networks.
1
Go
Go
add a nil check for sandbox.ossbox
d07d6814f3e9f5929b5f28b6d38473125cc41869
<ide><path>libnetwork/sandbox.go <ide> func (sb *sandbox) ResolveIP(ip string) string { <ide> } <ide> <ide> func (sb *sandbox) ExecFunc(f func()) error { <del> return sb.osSbox.InvokeFunc(f) <add> sb.Lock() <add> osSbox := sb.osSbox <add> sb.Unlock() <add> if osSbox != nil { <add> return osSbox.InvokeFunc(f) <add> } <add> return fmt.Errorf("osl sandbox unavailable in ExecFunc for %v", sb.ContainerID()) <ide> } <ide> <ide> func (sb *sandbox) ResolveService(name string) ([]*net.SRV, []net.IP) {
1
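The libnetwork record above closes a race by snapshotting `sb.osSbox` under the lock before use. The same copy-under-lock pattern in Python, with illustrative names standing in for the OS-level sandbox:

```python
# Read the guarded field once under the lock into a local, then act on
# the local, so a concurrent teardown that clears the field can't bite
# between the check and the call.
import threading

class Sandbox:
    def __init__(self):
        self._lock = threading.Lock()
        self.os_sbox = lambda f: f()   # stand-in for the osl sandbox

    def exec_func(self, f):
        with self._lock:               # take a consistent snapshot
            os_sbox = self.os_sbox
        if os_sbox is None:
            raise RuntimeError("osl sandbox unavailable")
        return os_sbox(f)

sb = Sandbox()
ran = []
sb.exec_func(lambda: ran.append("ok"))
sb.os_sbox = None                      # simulate concurrent teardown
try:
    sb.exec_func(lambda: ran.append("never"))
    raised = False
except RuntimeError:
    raised = True
```

Checking `self.os_sbox` directly and then calling it would leave a window where teardown nils the field after the check; the local copy removes that window.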
Javascript
Javascript
use arrow functions for callbacks
175c318bc18beb8643fa9948a117cdef84f518e0
<ide><path>test/addons/make-callback-recurse/test.js <ide> const makeCallback = binding.makeCallback; <ide> const mustCallCheckDomains = common.mustCall(checkDomains); <ide> <ide> // Make sure that using MakeCallback allows the error to propagate. <del>assert.throws(function() { <del> makeCallback({}, function() { <add>assert.throws(() => { <add> makeCallback({}, () => { <ide> throw new Error('hi from domain error'); <ide> }); <ide> }, /^Error: hi from domain error$/); <ide> assert.throws(function() { <ide> <ide> // Processing of the MicrotaskQueue is manually handled by node. They are not <ide> // processed until after the nextTickQueue has been processed. <del> Promise.resolve(1).then(common.mustCall(function() { <add> Promise.resolve(1).then(common.mustCall(() => { <ide> results.push(7); <ide> })); <ide> <ide> // The nextTick should run after all immediately invoked calls. <del> process.nextTick(common.mustCall(function() { <add> process.nextTick(common.mustCall(() => { <ide> results.push(3); <ide> <ide> // Run same test again but while processing the nextTickQueue to make sure <ide> // the following MakeCallback call breaks in the middle of processing the <ide> // queue and allows the script to run normally. <del> process.nextTick(common.mustCall(function() { <add> process.nextTick(common.mustCall(() => { <ide> results.push(6); <ide> })); <ide> <del> makeCallback({}, common.mustCall(function() { <add> makeCallback({}, common.mustCall(() => { <ide> results.push(4); <ide> })); <ide> <ide> assert.throws(function() { <ide> // MakeCallback is calling the function immediately, but should then detect <ide> // that a script is already in the middle of execution and return before <ide> // either the nextTickQueue or MicrotaskQueue are processed. <del> makeCallback({}, common.mustCall(function() { <add> makeCallback({}, common.mustCall(() => { <ide> results.push(1); <ide> })); <ide> <ide> assert.throws(function() { <ide> // and process them immediately. 
<ide> results.push(2); <ide> <del> setImmediate(common.mustCall(function() { <add> setImmediate(common.mustCall(() => { <ide> for (let i = 0; i < results.length; i++) { <ide> assert.strictEqual(results[i], i, <ide> `verifyExecutionOrder(${arg}) results: ${results}`); <ide> assert.throws(function() { <ide> // The tests are first run on bootstrap during LoadEnvironment() in <ide> // src/node.cc. Now run the tests through node::MakeCallback(). <ide> setImmediate(function() { <del> makeCallback({}, common.mustCall(function() { <add> makeCallback({}, common.mustCall(() => { <ide> verifyExecutionOrder(2); <ide> })); <ide> }); <ide> } else if (arg === 2) { <ide> // Make sure there are no conflicts using node::MakeCallback() <ide> // within timers. <del> setTimeout(common.mustCall(function() { <add> setTimeout(common.mustCall(() => { <ide> verifyExecutionOrder(3); <ide> }), 10); <ide> } else if (arg === 3) { <ide> assert.throws(function() { <ide> function checkDomains() { <ide> // Check that domains are properly entered/exited when called in multiple <ide> // levels from both node::MakeCallback() and AsyncWrap::MakeCallback <del> setImmediate(common.mustCall(function() { <add> setImmediate(common.mustCall(() => { <ide> const d1 = domain.create(); <ide> const d2 = domain.create(); <ide> const d3 = domain.create(); <ide> <del> makeCallback({ domain: d1 }, common.mustCall(function() { <add> makeCallback({ domain: d1 }, common.mustCall(() => { <ide> assert.strictEqual(d1, process.domain); <del> makeCallback({ domain: d2 }, common.mustCall(function() { <add> makeCallback({ domain: d2 }, common.mustCall(() => { <ide> assert.strictEqual(d2, process.domain); <del> makeCallback({ domain: d3 }, common.mustCall(function() { <add> makeCallback({ domain: d3 }, common.mustCall(() => { <ide> assert.strictEqual(d3, process.domain); <ide> })); <ide> assert.strictEqual(d2, process.domain); <ide> function checkDomains() { <ide> })); <ide> })); <ide> <del> 
setTimeout(common.mustCall(function() { <add> setTimeout(common.mustCall(() => { <ide> const d1 = domain.create(); <ide> const d2 = domain.create(); <ide> const d3 = domain.create(); <ide> <del> makeCallback({ domain: d1 }, common.mustCall(function() { <add> makeCallback({ domain: d1 }, common.mustCall(() => { <ide> assert.strictEqual(d1, process.domain); <del> makeCallback({ domain: d2 }, common.mustCall(function() { <add> makeCallback({ domain: d2 }, common.mustCall(() => { <ide> assert.strictEqual(d2, process.domain); <del> makeCallback({ domain: d3 }, common.mustCall(function() { <add> makeCallback({ domain: d3 }, common.mustCall(() => { <ide> assert.strictEqual(d3, process.domain); <ide> })); <ide> assert.strictEqual(d2, process.domain); <ide> function checkDomains() { <ide> // Make sure nextTick, setImmediate and setTimeout can all recover properly <ide> // after a thrown makeCallback call. <ide> const d = domain.create(); <del> d.on('error', common.mustCall(function(e) { <add> d.on('error', common.mustCall((e) => { <ide> assert.strictEqual(e.message, `throw from domain ${id}`); <ide> })); <del> makeCallback({ domain: d }, function() { <add> makeCallback({ domain: d }, () => { <ide> throw new Error(`throw from domain ${id}`); <ide> }); <ide> throw new Error('UNREACHABLE');
1
Ruby
Ruby
handle apfs returning hash order
212367ee7eda101d4514b968e0e48d97b00b5695
<ide><path>Library/Homebrew/diagnostic.rb <ide> def __check_stray_files(dir, pattern, white_list, message) <ide> end <ide> return if files.empty? <ide> <del> inject_file_list(files, message) <add> inject_file_list(files.sort, message) <ide> end <ide> <ide> def check_for_stray_dylibs
1
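The Homebrew patch above sorts the stray-file list before reporting it, because APFS (unlike HFS+) makes no ordering guarantee for directory enumeration, so unsorted output differed between filesystems. The same idea in a small Python sketch (the function name and allow-list are illustrative, not from the commit):

```python
import os

def stray_files(directory, allowed):
    """Collect files not on the allow-list, sorted so the report is
    deterministic regardless of filesystem enumeration order."""
    found = []
    for name in os.listdir(directory):  # order is filesystem-dependent
        if name not in allowed:
            found.append(name)
    return sorted(found)  # normalize: APFS and HFS+ now agree
```

Sorting at the reporting boundary is cheaper than trying to control enumeration order, which the filesystem does not promise in the first place.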
PHP
PHP
add tests for saving decimal values
38b29727adf6a0c51231a90504e268a2492044ae
<ide><path>tests/Fixture/DatatypesFixture.php <ide> class DatatypesFixture extends TestFixture <ide> public $fields = [ <ide> 'id' => ['type' => 'biginteger'], <ide> 'cost' => ['type' => 'decimal', 'length' => 20, 'precision' => 1, 'null' => true], <add> 'fraction' => ['type' => 'decimal', 'length' => 20, 'precision' => 19, 'null' => true], <ide> 'floaty' => ['type' => 'float', 'null' => true], <ide> 'small' => ['type' => 'smallinteger', 'null' => true], <ide> 'tiny' => ['type' => 'tinyinteger', 'null' => true], <ide><path>tests/TestCase/Database/Type/DecimalTypeTest.php <ide> public function testMarshal() <ide> $result = $this->type->marshal(['3', '4']); <ide> $this->assertNull($result); <ide> <add> $result = $this->type->marshal('0.1234567890123456789'); <add> $this->assertSame('0.1234567890123456789', $result); <add> <ide> // This test is to indicate the problem that will occur if you use <ide> // float/double values which get converted to scientific notation by PHP. <ide> // To avoid this problem always using strings to indicate decimals values. <ide><path>tests/TestCase/ORM/QueryTest.php <ide> public function testNotSoFarMatchingWithContainOnTheSameAssociation() <ide> */ <ide> public function testSelectLargeNumbers() <ide> { <add> // Sqlite only supports maximum 16 digits for decimals. 
<ide> $this->skipIf($this->connection->getDriver() instanceof Sqlite); <ide> <ide> $this->loadFixtures('Datatypes'); <ide> public function testSelectLargeNumbers() <ide> ->first(); <ide> $this->assertNotEmpty($out, 'Should get a record'); <ide> $this->assertSame($big, $out->cost); <add> <add> $small = '0.1234567890123456789'; <add> $entity = $table->newEntity(['fraction' => $small]); <add> <add> $table->save($entity); <add> $out = $table->find() <add> ->where([ <add> 'fraction' => $small, <add> ]) <add> ->first(); <add> $this->assertNotEmpty($out, 'Should get a record'); <add> $this->assertSame($small, $out->fraction); <add> <add> $small = 0.1234567890123456789; <add> $entity = $table->newEntity(['fraction' => $small]); <add> <add> $table->save($entity); <add> $out = $table->find() <add> ->where([ <add> 'fraction' => $small, <add> ]) <add> ->first(); <add> $this->assertNotEmpty($out, 'Should get a record'); <add> // There will be loss of precision if too large/small value is set as float instead of string. <add> $this->assertSame('0.1234567890123500000', $out->fraction); <ide> } <ide> <ide> /**
3
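The CakePHP test above makes the point that high-precision decimals must travel as strings, because a 64-bit float only carries roughly 15-17 significant decimal digits. Python's `decimal` module shows the same trap directly:

```python
from decimal import Decimal

# Passing the value as a string preserves every digit.
exact = Decimal("0.1234567890123456789")

# Routing it through a float literal first loses precision: the float
# is rounded to the nearest representable double before Decimal sees it.
lossy = Decimal(0.1234567890123456789)

assert str(exact) == "0.1234567890123456789"
assert str(lossy) != "0.1234567890123456789"
```

This mirrors the commit's note that saving `0.1234567890123456789` as a PHP float comes back altered, while the string form round-trips intact.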
Javascript
Javascript
fix jquery to jqlite binding on ie8
2170c06924b3a0dc1fef3b383d6a236e670dceea
<ide><path>test/jqLiteSpec.js <ide> describe('jqLite', function(){ <ide> }); <ide> <ide> <add> it('should be jqLite when jqLiteMode is on, otherwise jQuery', function() { <add> expect(jqLite).toBe(_jqLiteMode ? jqLiteWrap : _jQuery); <add> }); <add> <add> <ide> describe('construction', function(){ <ide> it('should allow construction with text node', function(){ <ide> var text = a.firstChild; <ide><path>test/jquery_alias.js <ide> 'use strict'; <ide> <del>var _jQuery = jQuery; <add>var _jQuery = jQuery, <add> _jqLiteMode = false; <ide><path>test/jquery_remove.js <ide> 'use strict'; <ide> <del>var _jQuery = jQuery.noConflict(true); <add>var _jQuery = jQuery.noConflict(true), <add> _jqLiteMode = true; <ide><path>test/testabilityPatch.js <ide> if (window.jstestdriver) { <ide> beforeEach(function(){ <ide> // This is to reset parsers global cache of expressions. <ide> compileCache = {}; <add> <add> // workaround for IE bug https://plus.google.com/104744871076396904202/posts/Kqjuj6RSbbT <add> // IE overwrite window.jQuery with undefined because of empty jQuery var statement, so we have to <add> // correct this, but only if we are not running in jqLite mode <add> if (!_jqLiteMode && _jQuery !== jQuery) { <add> jQuery = _jQuery; <add> } <add> <ide> // reset to jQuery or default to us. <ide> bindJQuery(); <ide> jqLite(document.body).html('');
4
Ruby
Ruby
use #to_s to convert range to json
dc05914be766583a22c959b2df64cfd2dfe88732
<ide><path>activesupport/lib/active_support/json/encoding.rb <ide> def as_json(options = nil) #:nodoc: <ide> end <ide> end <ide> <add>class Range <add> def as_json(options = nil) to_s end #:nodoc: <add>end <add> <ide> class Array <ide> def as_json(options = nil) #:nodoc: <ide> # use encoder as a proxy to call as_json on all elements, to protect from circular references <ide><path>activesupport/test/json/encoding_test.rb <ide> def as_json(options) <ide> ArrayTests = [[ ['a', 'b', 'c'], %([\"a\",\"b\",\"c\"]) ], <ide> [ [1, 'a', :b, nil, false], %([1,\"a\",\"b\",null,false]) ]] <ide> <add> RangeTests = [[ 1..2, %("1..2")], <add> [ 1...2, %("1...2")], <add> [ 1.5..2.5, %("1.5..2.5")]] <add> <ide> SymbolTests = [[ :a, %("a") ], <ide> [ :this, %("this") ], <ide> [ :"a b", %("a b") ]]
2
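The Rails patch serializes a `Range` via `to_s` because JSON has no range type. Python's `json` module supports the same fallback pattern through the `default` hook; here `repr` plays the role of Ruby's `to_s` (the key name `"span"` is just for illustration):

```python
import json

def default(obj):
    """Fallback encoder: objects JSON can't represent natively are
    serialized via their string form, mirroring Range#as_json above."""
    if isinstance(obj, range):
        return repr(obj)  # e.g. "range(1, 3)", analogous to Ruby's "1..2"
    raise TypeError(f"{type(obj).__name__} is not JSON serializable")

encoded = json.dumps({"span": range(1, 3)}, default=default)
# encoded == '{"span": "range(1, 3)"}'
```

The trade-off is the same as in the Rails change: the value becomes human-readable in JSON, but a consumer must parse the string to recover the endpoints.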
Ruby
Ruby
accept variation keys in #preview and #variant
62ff514d33d3a3b0930956a4b4866e6b228c278c
<ide><path>activestorage/app/models/active_storage/blob.rb <ide> def text? <ide> # This will create a URL for that specific blob with that specific variant, which the ActiveStorage::VariantsController <ide> # can then produce on-demand. <ide> def variant(transformations) <del> ActiveStorage::Variant.new(self, ActiveStorage::Variation.new(transformations)) <add> ActiveStorage::Variant.new(self, ActiveStorage::Variation.wrap(transformations)) <ide> end <ide> <ide> <ide> def variant(transformations) <ide> # whether a blob is accepted by any previewer, call ActiveStorage::Blob#previewable?. <ide> def preview(transformations) <ide> if previewable? <del> ActiveStorage::Preview.new(self, ActiveStorage::Variation.new(transformations)) <add> ActiveStorage::Preview.new(self, ActiveStorage::Variation.wrap(transformations)) <ide> else <ide> raise UnpreviewableError <ide> end <ide><path>activestorage/app/models/active_storage/variation.rb <ide> class ActiveStorage::Variation <ide> attr_reader :transformations <ide> <ide> class << self <del> def wrap(variation_or_key) <del> case variation_or_key <add> # Returns a Variation instance based on the given variator. If the variator is a Variation, it is <add> # returned unmodified. If it is a String, it is passed to ActiveStorage::Variation.decode. Otherwise, <add> # it is assumed to be a transformations Hash and is passed directly to the constructor. <add> def wrap(variator) <add> case variator <ide> when self <del> variation_or_key <add> variator <add> when String <add> decode variator <ide> else <del> decode variation_or_key <add> new variator <ide> end <ide> end <ide> <del> # Returns a variation instance with the transformations that were encoded by +encode+. <add> # Returns a Variation instance with the transformations that were encoded by +encode+. <ide> def decode(key) <ide> new ActiveStorage.verifier.verify(key, purpose: :variation) <ide> end
2
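The `Variation.wrap` change above is a type-dispatch constructor: pass an instance through untouched, decode a string key, otherwise treat the argument as a transformations hash. A minimal Python sketch of that dispatch (the `decode` body is a stand-in for ActiveStorage's signed-key verification, not a real implementation):

```python
class Variation:
    def __init__(self, transformations):
        self.transformations = transformations

    @classmethod
    def decode(cls, key):
        # Stand-in for ActiveStorage.verifier.verify(key, purpose: :variation)
        return cls({"key": key})

    @classmethod
    def wrap(cls, variator):
        if isinstance(variator, cls):
            return variator              # already a Variation: pass through
        if isinstance(variator, str):
            return cls.decode(variator)  # encoded variation key
        return cls(variator)             # assume a transformations mapping
```

This lets callers like `blob.variant(...)` accept all three shapes with a single entry point, which is exactly what the commit message ("accept variation keys") is after.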
Text
Text
explain why gh4w
4afae028ec875f20cc78c23f0307f3f0a07c2d13
<ide><path>docs/build-instructions/windows.md <ide> cd atom <ide> script\build <ide> ``` <add> <add>## Why do I have to use GitHub for Windows? Can't I just use my existing Git? <add> <add>You totally can! GitHub for Windows's Git Shell just takes less work to set up. You need to have Posix tools in your `%PATH%` (i.e. `grep`, `sed`, et al.), which isn't the default configuration when you install Git. To fix this, you probably need to fiddle with your system PATH. <ide> <ide> ## Troubleshooting
1
Java
Java
add newline at the beginning of textarea jsp tags
44c32128dcdbd4fc848ae0873d4a6aa84383569c
<ide><path>spring-webmvc/src/main/java/org/springframework/web/servlet/tags/form/TextareaTag.java <ide> /* <del> * Copyright 2002-2012 the original author or authors. <add> * Copyright 2002-2016 the original author or authors. <ide> * <ide> * Licensed under the Apache License, Version 2.0 (the "License"); <ide> * you may not use this file except in compliance with the License. <ide> protected int writeTagContent(TagWriter tagWriter) throws JspException { <ide> writeOptionalAttribute(tagWriter, COLS_ATTRIBUTE, getCols()); <ide> writeOptionalAttribute(tagWriter, ONSELECT_ATTRIBUTE, getOnselect()); <ide> String value = getDisplayString(getBoundValue(), getPropertyEditor()); <del> tagWriter.appendValue(processFieldValue(getName(), value, "textarea")); <add> tagWriter.appendValue("\r\n" + processFieldValue(getName(), value, "textarea")); <ide> tagWriter.endTag(); <ide> return SKIP_BODY; <ide> } <ide><path>spring-webmvc/src/test/java/org/springframework/web/servlet/tags/form/TextareaTagTests.java <ide> /* <del> * Copyright 2002-2015 the original author or authors. <add> * Copyright 2002-2016 the original author or authors. <ide> * <ide> * Licensed under the Apache License, Version 2.0 (the "License"); <ide> * you may not use this file except in compliance with the License. <ide> public void customBind() throws Exception { <ide> assertBlockTagContains(output, "12.34f"); <ide> } <ide> <add> @Test <add> public void firstNewLine() throws Exception { <add> this.tag.setPath("name"); <add> this.tag.setReadonly(true); <add> <add> assertEquals(Tag.SKIP_BODY, this.tag.doStartTag()); <add> String output = getOutput(); <add> assertBlockTagContains(output, "\r\nRob"); <add> } <add> <ide> @Override <ide> protected TestBean createTestBean() { <ide> // set up test data
2
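The Spring fix prepends `"\r\n"` because HTML parsers drop a single newline that immediately follows a `<textarea>` start tag; without a sacrificial newline, bound values that begin with one would lose it on render. A hypothetical Python rendering helper makes the shape of the fix clear (escaping is omitted here for brevity; real tag writers must escape the value):

```python
def render_textarea(value):
    """Emit a sacrificial "\r\n" after the start tag so that browsers,
    which strip the first newline inside <textarea>, still show any
    leading newline the bound value actually contains."""
    return f"<textarea>\r\n{value}</textarea>"
```

The corresponding test in the commit (`assertBlockTagContains(output, "\r\nRob")`) checks precisely this prefix.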
Python
Python
address a semantic difference between py2 and py3
4a26b70fd3c0dd6b4aca063a5ba504af3e91484f
<ide><path>numpy/f2py/cfuncs.py <ide> def get_needs(): <ide> else: <ide> out.append(outneeds[n][0]) <ide> del outneeds[n][0] <del> if saveout and (0 not in map(lambda x,y:x==y,saveout,outneeds[n])): <add> if saveout and (0 not in map(lambda x,y:x==y,saveout,outneeds[n])) \ <add> and outneeds[n] != []: <ide> print n,saveout <ide> errmess('get_needs: no progress in sorting needs, probably circular dependence, skipping.\n') <ide> out=out+saveout
1
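The f2py guard above exists because `map` changed between Python 2 and 3: Python 2's `map` returned a list and padded shorter inputs with `None`, while Python 3's `map` is a lazy iterator that stops at the shortest input. When `outneeds[n]` is empty, the Python 3 comparison produces no elements at all, so `0 not in map(...)` becomes vacuously true and the circular-dependence error would fire spuriously — hence the added `and outneeds[n] != []` check:

```python
saveout = [1, 2, 3]
needs = []

# Python 3: map stops at the shortest iterable, yielding nothing here.
# Python 2 would instead have padded `needs` with None and produced
# [False, False, False].
matches = list(map(lambda x, y: x == y, saveout, needs))
assert matches == []

# Membership over an empty result is vacuously True (note False == 0
# in Python), so without an explicit emptiness guard the caller would
# falsely conclude "no progress" / circular dependence.
assert (0 not in matches) is True
```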
Ruby
Ruby
do a single string interpolation
70357666bc86629c8d10501209105b144855ddbc
<ide><path>actionpack/lib/action_view/helpers/asset_tag_helper.rb <ide> def rewrite_asset_path(source, path = nil) <ide> if asset_id.empty? <ide> source <ide> else <del> source + "?#{asset_id}" <add> "#{source}?#{asset_id}" <ide> end <ide> end <ide>
1
Text
Text
add comment about highwatermark limit
b51d1cfbf27529346c7134f8fc4a855229543cc2
<ide><path>doc/api/stream.md <ide> A key goal of the `stream` API, particularly the [`stream.pipe()`][] method, <ide> is to limit the buffering of data to acceptable levels such that sources and <ide> destinations of differing speeds will not overwhelm the available memory. <ide> <add>The `highWaterMark` option is a threshold, not a limit: it dictates the amount <add>of data that a stream buffers before it stops asking for more data. It does not <add>enforce a strict memory limitation in general. Specific stream implementations <add>may choose to enforce stricter limits but doing so is optional. <add> <ide> Because [`Duplex`][] and [`Transform`][] streams are both `Readable` and <ide> `Writable`, each maintains *two* separate internal buffers used for reading and <ide> writing, allowing each side to operate independently of the other while
1
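The added paragraph's threshold-not-limit distinction can be modeled in a few lines. This toy Python class (not Node's implementation, just an illustration of the contract) shows that a stream stops *asking* for data once the buffered amount reaches the mark, yet a single large push can still overshoot it:

```python
class BufferedSource:
    """Toy model of highWaterMark semantics: a back-pressure threshold,
    not a hard memory cap."""

    def __init__(self, high_water_mark):
        self.high_water_mark = high_water_mark
        self.buffered_bytes = 0

    def wants_more(self):
        return self.buffered_bytes < self.high_water_mark

    def push(self, chunk):
        # Data is accepted even past the mark; the False return value
        # merely signals the producer to pause, as stream.push() does.
        self.buffered_bytes += len(chunk)
        return self.wants_more()
```

A producer that ignores the `False` return keeps growing the buffer — which is exactly why the documentation stresses that `highWaterMark` does not enforce a strict memory limitation.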
Text
Text
add/readme example stateless lifecycle
7699cbe9df231298bb6bc84b9728c5381a46b8ca
<ide><path>readme.md <ide> For the initial page load, `getInitialProps` will execute on the server only. `g <ide> <ide> _Note: `getInitialProps` can **not** be used in children components. Only in `pages`._ <ide> <add>You can also define the `getInitialProps` lifecycle method for stateless components: <add> <add>```jsx <add>const Page = ({ stars }) => <div>Next stars: {stars}</div> <add> <add>Page.getInitialProps = async ({ req }) => { <add> const res = await fetch('https://api.github.com/repos/zeit/next.js') <add> const json = await res.json() <add> return { stars: json.stargazers_count } <add>} <add> <add>export default Page <add>``` <add> <ide> `getInitialProps` receives a context object with the following properties: <ide> <ide> - `pathname` - path section of URL
1