http://www.ni.com/documentation/en/labview/latest/analysis-node-ref/digital-reversed-order/
Digital Reversed Order (G Dataflow)

Modifies the input array according to the digital-reversed order of the index.

input array — The real input array this node modifies. The length of input array must be an integer power of radix. This input accepts the following data types:
• 1D array of double-precision, floating-point numbers
• 1D array of complex double-precision, floating-point numbers
• 1D array of 32-bit signed integers

radix — Base of the exponent. radix must be greater than or equal to 2. Default: 2

error in — Error conditions that occur before this node runs. The node responds to this input according to standard error behavior. Default: No error

Standard Error Behavior — Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way: if no error occurred before the node runs, the node begins execution normally and returns as error out either no error or any error information generated while it runs; if an error occurred before the node runs, the node does not execute and instead returns the error in value as error out.

reversed array — The input array with modified elements.

reversed indices — The corresponding index of the input array for each element in the output array.

error out — Error information. The node produces this output according to standard error behavior (see error in above).

Algorithm for Calculating reversed array

When the index of a sequence has the radix-base digits (a0 a1 ... an), this node modifies the sequence X into the digital-reversed sequence Y according to the following equation:

$Y_{(a_0 a_1 \dots a_n)} = X_{(a_n a_{n-1} \dots a_0)}$

For example, base 2 digits are binary and base 16 digits are hexadecimal. [Original illustration: the 2-base reversed order (bit-reversed order) of 8 elements.]

Where This Node Can Run:
Desktop OS: Windows
FPGA: This product does not support FPGA devices
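For illustration only (this example is not part of the NI documentation, and the function name is our own), a minimal Python sketch of the digit-reversed reordering the node performs:

```python
# Sketch of digital-reversed ordering: reverse the base-`radix` digits
# of each index; Y[i] = X[digit_reverse(i)], matching the equation above.
def digit_reversed_order(x, radix=2):
    n = len(x)
    m, size = 0, 1
    while size < n:          # find m with radix**m == n
        size *= radix
        m += 1
    if size != n:
        raise ValueError("len(x) must be an integer power of radix")

    def reverse_digits(i):
        r = 0
        for _ in range(m):
            i, d = divmod(i, radix)
            r = r * radix + d
        return r

    indices = [reverse_digits(i) for i in range(n)]
    reversed_array = [x[j] for j in indices]
    return reversed_array, indices

# Bit-reversed order (radix 2) of 8 elements:
# indices 0..7 map to [0, 4, 2, 6, 1, 5, 3, 7]
print(digit_reversed_order(list(range(8)))[1])
```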
2017-12-12 06:55:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6663117408752441, "perplexity": 3022.7701195153936}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00014.warc.gz"}
https://vrehab.tungwahcsd.org/d6kuay/a-word-always-has-one-meaning-true-or-false-78f70d
A word always has one meaning: true or false? False. Many words are ambiguous (they have more than one meaning) or vague (they have borderline cases). The extension of a term consists of the set of things to which the term applies. Even the word 'word' is ambiguous between a type-level reading (as in "Color and colour are spellings of the same word") and an occurrence-level reading. A morpheme is the smallest unit of meaning found in a word (true), each bound morpheme (such as an affix) carries one meaning, and adding a bound morpheme does not create an entirely new word. An elliptical clause is one in which words have been omitted (true).

True–false questions. A test item is "true-false" when it offers a series of statements, each of which is to be judged as true or false; the antonym is multiple-choice, which offers several alternative answers from which the correct one is to be chosen. True-or-false tests are popular because they can be graded so easily, and such tests usually have more statements that are true than false; a useful tactic is to find why a statement would be false before answering it false. False means not being in agreement with what is true; synonyms include wrong, fallacious, inexact and untruthful, while synonyms of true include genuine, real, right, authentic, actual, accurate, exact, precise, proper and correct. A factoid, by contrast, can in common usage mean either a false or spurious statement presented as a fact or (per Merriam-Webster and the Oxford English Dictionary) a true but brief or trivial item of news.

Tautology in logic. In mathematical logic, a tautology (from Greek ταυτολογία) is a formula or assertion that is true in every possible interpretation. The philosopher Ludwig Wittgenstein first applied the term to redundancies of propositional logic in his Tractatus Logico-Philosophicus (1921), borrowing from rhetoric, where a tautology is a repetitive statement; he proposed that statements that can be deduced by logical deduction are tautological (empty of meaning), as well as being analytic truths. Henri Poincaré had made similar remarks in Science and Hypothesis (1905). Bertrand Russell at first argued against these remarks, claiming that mathematical truths were not only non-tautologous but synthetic, yet by 1918 he spoke in favor of them: "Everything that is a proposition of logic has got to be in some sense or the other like a tautology." Earlier, Immanuel Kant wrote in his Logic (1800) that the identity of concepts in analytical judgments can be either explicit (explicita) or non-explicit (implicita), and Gottlob Frege proposed in his Grundlagen that a truth is analytic exactly if it can be derived using only definitions and the laws of logic. Later presentations (such as Stephen Kleene 1967 and Herbert Enderton 2002) use "tautology" for a logically valid propositional formula while maintaining a distinction between tautology and logical validity in first-order logic.

A valuation is a function that assigns each propositional variable either T (for truth) or F (for falsity); this semantics of propositional logic in terms of truth assignments was developed in the 1930s. A formula built from propositional variables and connectives is satisfiable if it is true under at least one interpretation; a tautology is a formula whose negation is unsatisfiable; statements that are false under every valuation, both through negation and affirmation, are known formally as contradictions; and a formula that is neither a tautology nor a contradiction is said to be logically contingent. For example, B ∨ ¬B ("either the ball is red or the ball is not red") is a tautology regardless of the color of the ball, as is "x = y or x ≠ y" and ((A ∧ B) → C) ⇔ (A → (B → C)). Substitution preserves tautology: if S is a tautology and for each propositional variable A in S a fixed sentence SA is chosen, then the sentence obtained by replacing each A with the corresponding SA is also a tautology.

The notation ⊨ S indicates that S is a tautology, and R ⊨ S indicates that S is tautologically implied by R; if R → S is a tautology, then R ⊨ S. A tautology is tautologically implied by every formula, and a contradiction tautologically implies every formula. An axiomatic system is complete if every tautology is a theorem (derivable from axioms) and sound if every theorem is a tautology.

Whether a formula is a tautology can be checked mechanically. If a formula has n propositional variables, there are 2^n distinct valuations, and the truth table method evaluates the formula under all of them. The method is provably correct: the truth table for a tautology ends in a column with only T, while the table for a non-tautology contains a row whose final column is F, and the valuation corresponding to that row does not satisfy the sentence being tested. Given unlimited computational resources this is an effective procedure, so the set of tautologies over a fixed finite or countable alphabet is a decidable set. Checking tautologies is equivalent to the Boolean satisfiability problem, because verifying that a sentence S is a tautology is equivalent to verifying that there is no valuation satisfying ¬S. The exponential growth in computation length, however, renders the truth table method useless for formulas with thousands of propositional variables, as contemporary hardware cannot execute the algorithm in a feasible time. The definition of tautology extends to predicate logic, which may contain quantifiers, a feature absent from sentences of propositional logic; there a distinction is maintained between logical validities (sentences true in every model) and tautologies, which form a proper subset of the logical validities. For instance, (∀x(x = x)) ∨ (¬∀x(x = x)) is a tautology obtained by substituting first-order sentences into B ∨ ¬B, and similar tautologies can be built in a first-order language with unary relation symbols R, S, T; but not all logical validities are tautologies in first-order logic.

Arguments and premises. As "argument" is defined in the text, every argument has exactly one conclusion, and some arguments may have no premises at all. Whether an argument is valid does not depend on whether its premises are true, but an argument must have true premises to be sound. A false premise is an untrue proposition that forms part of the basis of a logical syllogism; since the premise (assumption) is not correct, the conclusion drawn may also be wrong, and an argument from false premises is a line of reasoning which can lead to wrong results. Note also that an implication whose condition was never met has not been denied; it stands as true. And, correcting the claim sometimes seen on quizzes: if just one statement in a conjunction is false, the whole conjunction is false, not true. In natural languages, some apparent tautologies, as in certain platitudes, may have non-tautological meanings in practice, and a "sentence" is not the same as a "statement"; the sentence is, rather, the vehicle by which the statement is communicated.

Assorted true/false items from the same page: direct object pronouns are generally placed before a single verb in French, and a verb and its direct object are not normally separated (true); there are twenty-three helping verbs in the traditional count (true); parameters of a primitive type are passed to methods using the call-by-value mechanism (true); one in five Americans is injured each year (true, per the quiz); two important words in a definition of first aid are "immediate" and "temporary" (true); activating the EMS system is an important part of first aid (true); and the word "sound" can refer to health, a sense in which it appears often in the New Testament.
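To make the truth table method concrete, here is a minimal Python sketch (our illustration; the helper names are hypothetical) that enumerates all 2^n valuations and reports whether a formula is a tautology:

```python
# Truth table method: a formula is a tautology iff it evaluates to True
# under every possible assignment of truth values to its variables.
from itertools import product

def is_tautology(formula, variables):
    """formula: a function mapping a dict {variable: bool} to bool."""
    for values in product([False, True], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if not formula(valuation):
            return False  # this valuation satisfies the negation
    return True

# B or not B is a tautology...
print(is_tautology(lambda v: v["B"] or not v["B"], ["B"]))   # True
# ...while B alone is logically contingent.
print(is_tautology(lambda v: v["B"], ["B"]))                 # False
# ((A and B) -> C) <=> (A -> (B -> C)), writing p -> q as (not p) or q:
imp = lambda p, q: (not p) or q
print(is_tautology(
    lambda v: imp(v["A"] and v["B"], v["C"])
              == imp(v["A"], imp(v["B"], v["C"])),
    ["A", "B", "C"]))                                        # True
```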
2021-10-17 10:50:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6683188676834106, "perplexity": 1409.6054945152275}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00636.warc.gz"}
https://www.nature.com/articles/s41592-022-01443-0
# Multi-animal pose estimation, identification and tracking with DeepLabCut

## Abstract

Estimating the pose of multiple animals is a challenging computer vision problem: frequent interactions cause occlusions and complicate the association of detected keypoints to the correct individuals, and the animals are often highly similar looking and interact more closely than in typical multi-human scenarios. To take up this challenge, we build on DeepLabCut, an open-source pose estimation toolbox, and provide high-performance animal assembly and tracking—features required for multi-animal scenarios. Furthermore, we integrate the ability to predict an animal's identity to assist tracking (in case of occlusions). We illustrate the power of this framework with four datasets varying in complexity, which we release to serve as a benchmark for future algorithm development.

## Main

Advances in sensor and transmitter technology, data mining and computational analysis herald a golden age of animal tracking across the globe1. Computer vision is a crucial tool for identifying and counting animals, as well as annotating their behavior2,3,4. For the computational analysis of fine-grained behavior, pose estimation is often a crucial step, and deep-learning based tools have quickly affected neuroscience, ethology and medicine5,6,7,8.

Many experiments in biology—from parenting mice to fish schooling—require measuring interactions among multiple individuals. Multi-animal pose estimation raises several challenges that can leverage advances in machine vision research, and yet others that need new solutions. In general, the process requires three steps: pose estimation (that is, keypoint localization), assembly (that is, the task of grouping keypoints into distinct animals) and tracking. Each step presents different challenges. To make pose estimation robust to interacting and occluded animals, one should annotate frames with closely interacting animals. To associate detected keypoints with particular individuals (assembly), several solutions have been proposed, such as part affinity fields (PAFs)9, associative embeddings10,11, transformers12 and other mechanisms13,14. Tracking animals between frames can be difficult because of appearance similarity, nonstationary behaviors and possible occlusions.

Building on human pose estimation research, some packages for multi-animal pose estimation have emerged15,16,17. Here, we developed top-performing network architectures, a data-driven assembly method, engineered tailored tracking methods and compared the current state-of-the-art networks on COCO (common objects in context)18 on four animal datasets. Specifically, we expanded DeepLabCut19,20,21, an open-source toolbox for animal pose estimation. Our contributions are as follows:

1. Four datasets of varying difficulty for benchmarking multi-animal pose estimation networks.
2. A multi-task architecture that predicts multiple conditional random fields and therefore can predict keypoints, limbs, as well as animal identity.
3. A data-driven method for animal assembly that finds the optimal skeleton without user input, and that is state of the art (compared to top models from COCO, a standard computer vision benchmark).
4. A module that casts tracking as a network flow optimization problem, which aims to find globally optimal solutions.
5. Unsupervised animal ID tracking: we can predict the identity of animals and reidentify them; this is particularly useful to link animals across time when temporally based tracking fails (due to intermittent occlusions).
6. Graphical user interfaces (GUIs) for keypoint annotation, refinement and semiautomatic trajectory verification.

## Results

Multi-animal pose estimation can be cast as a data assignment problem in the spatial and temporal domains. To tackle the generic multi-animal pose-tracking scenario, we designed a practical, almost entirely data-driven solution that breaks down the larger goal into the smaller subtasks of: keypoint estimation, animal assembly (spatially grouping keypoints into individuals), local (temporal) tracking and global 'tracklet' stitching (Extended Data Fig. 1). We evaluate our pipeline on four new datasets that we release with this paper as a benchmark at https://benchmark.deeplabcut.org/.

### Four diverse multi-animal datasets

We considered four multi-animal experiments to broadly validate our approach: three mice in an open field, home-cage parenting in mice, pairs of marmosets housed in a large enclosure and 14 fish in a flow tank. These datasets encompass a wide range of behaviors, presenting difficult and unique computational challenges to pose estimation and tracking (Fig. 1a and Extended Data Fig. 2). The three mice frequently contact and occlude one another. The parenting dataset contained a single adult mouse with unique keypoints in close interaction with two pups that are hardly distinguishable from the background or the cotton nest, which also leads to occlusions. The marmoset dataset comprises periods of occlusion, close interactions, nonstationary behavior, motion blur and changes in scale. Likewise, the fish school along all dimensions of the tank, hiding each other in cluttered scenes and occasionally leaving the camera's field of view.

We annotated 5–15 body parts of interest depending on the dataset (Fig. 1a and Extended Data Fig. 1), in multiple frames for cross-validating the pose estimation and assembly performance, as well as semiautomatically annotated several videos for evaluating the tracking performance (Table 1). For analyses, we created a random split of images plus annotations into 70% train and 30% test sets.

We developed multi-task convolutional neural networks (CNNs) that perform pose estimation by localizing keypoints in images. This is achieved by predicting score maps, which encode the probability that a keypoint occurs at a particular location, as well as location refinement fields that predict offsets to mitigate quantization errors due to downsampled score maps13,19,20. Then, to assemble keypoints into the grouping that defines an animal, we designed the networks to also predict 'limbs', that is, PAFs. This task, which is achieved via additional deconvolution layers, is inspired by OpenPose9. The intuition is that in scenarios where multiple animals are present in the scene, learning to predict the location and orientation of limbs will help group pairs of keypoints belonging to an individual.
Moreover, we also introduce an output that allows for animal reidentification (reID) from visual input directly. This is important in the event of animals that are untrackable using temporal information alone, for example, when exiting or re-entering the scene (Fig. 1b). Specifically, we adapted ImageNet-pretrained ResNets22 and EfficientNets21,23, as well as developed a multi-scale architecture (which we call DLCRNet_ms5, Fig. 1c). We then use customized multiple parallel deconvolution layers to predict the location of keypoints as well as which keypoints are connected in a given animal (Fig. 1b). Ground truth data of annotated keypoints are used to calculate target score maps, location refinement maps and PAFs, and to train the network to predict those outputs for a given input image (Fig. 1b,c) with augmentation.

### Keypoint detection and part affinity performance

After an extensive architecture search (http://maDLCopt.deeplabcut.org and Extended Data Fig. 3), we demonstrate that the new DLCRNet performs very well for localizing keypoints (Fig. 2a). Specifically, we trained independent networks for each dataset, and each split, and evaluated their performance. For each frame and keypoint, we calculated the root-mean squared error (r.m.s.e.) between the detections and their closest ground truth neighbors. All the keypoint detectors performed well (DLCRNet_ms5, median test errors of 2.65, 5.25, 4.59 and 2.72 pixels for the tri-mouse, parenting, marmoset and fish datasets, respectively, Fig. 2a; the scales of these data are shown in Fig. 1a). To ease interpretation, errors were also normalized to 33% of the tip–gill distance for the fish dataset and 33% of the left-to-right ear distance for the remaining ones (Methods). We found that 93.6 ± 6.9% of the predictions on the test images were within those ranges (Fig. 2a).

After detection, keypoints need to be assigned to individuals. We evaluated whether the learned PAFs helped decide whether two body parts belong to the same or different animals. For example, 66 different edges can be formed from the 12 mouse body parts, and many provide high discriminability (Extended Data Fig. 4). We indeed found that predicted limbs were powerful at distinguishing a pair of keypoints belonging to one animal from other (incorrect) pairs linking different mice, as measured by a high auROC (area under the receiver operating characteristic curve) score (mean ± s.d. 0.99 ± 0.02).

### Data-driven individual assembly performance

Any limb-based assembly approach requires a 'skeleton', that is, a list of keypoint connections that allows the algorithm to computationally infer which body parts belong together. Naturally, there has to be a path within this skeleton connecting any two body parts, otherwise the body parts cannot be grouped into one animal. Given the combinatorial nature of skeletons, how should they be designed? We circumvented the need for arbitrary, hand-crafted skeletons by developing a method that is agnostic to an animal's morphology and does not require any user input. We devised a data-driven method where the network is first trained to predict all graph edges, and the least discriminative edges (for deciding body part ownership) are excluded at test time to determine the optimal skeleton. We found that this approach yields skeletons with fewer errors (fewer unconnected body parts and higher purity, Fig. 2b,c) and it improves performance. Crucially, it means users do not need to design any skeletons.
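As a reading aid (not the authors' code), the edge discriminability evaluation can be illustrated in a few lines of Python: given predicted affinity costs for same-animal keypoint pairs and for cross-animal pairs, the auROC measures how well a threshold on the cost separates the two groups. The numbers below are synthetic and purely illustrative:

```python
# auROC of a limb's affinity cost as a same-animal vs. cross-animal
# discriminator; synthetic example data, for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
within = rng.normal(0.8, 0.1, 500)    # affinity costs, same-animal pairs
between = rng.normal(0.2, 0.1, 500)   # affinity costs, cross-animal pairs

labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([within, between])
print(roc_auc_score(labels, scores))  # near 1.0 for a discriminative limb
```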
Our data-driven method (with DLCRNet_ms5) outperforms the naive (baseline) method, enhances the purity of the assembly, that is, the fraction of keypoints that were grouped correctly per individual (Supplementary Table 1), and reduces the number of missing keypoints (Supplementary Table 2). Comparisons revealed significantly higher assembly purity with automatic skeleton pruning versus a naive skeleton definition at most graph sizes, with respective gains of up to 3.0, 2.0 and 2.4 percentage points in the tri-mouse (two-way repeated measures analysis of variance (ANOVA): graph size 23, P < 0.001), marmoset (graph size 34, P = 0.002) and fish datasets (graph size 6, P < 0.001) (Fig. 2b,c). Furthermore, to accommodate diverse body plans and annotated keypoints for different animals and experiments, our inference algorithm works for arbitrary graphs. Animal assembly achieves at least 400 frames per second in scenes with 14 animals, and up to 2,000 for small skeletons in two or three animals (Extended Data Fig. 5).

To additionally benchmark our network and assembly contributions, we compared them to methods that achieve state-of-the-art performance on COCO18, a challenging, large-scale multi-human pose estimation benchmark. Specifically, we considered HRNet-AE and ResNet-AE. Our models performed significantly better than these state-of-the-art methods (one-way ANOVA: P values, tri-mouse 8.8 × 10−8, pups 6.5 × 10−13, marmosets 3.8 × 10−11, fish 4.0 × 10−12, Fig. 2d) on all four animal benchmark datasets.

Last, while the datasets themselves contain diverse animal behaviors, and only 70% is used to train, as an additional test of generalization we used ten held-out marmoset videos that came from different cages (Extended Data Fig. 6). We find in this challenging test that there is a roughly 0.25 drop in mean average precision (mAP). It is known that simply adding (a fraction of the new) data into the training set alleviates such drops (reviewed in ref. 7).

We reasoned that the strong multi-animal performance is due to the assembly algorithm based on PAFs. Therefore, we tested the performance of the network in a top-down setting with and without PAFs, that is, by considering images that are cropped around each animal (bounding boxes, Extended Data Fig. 7a). We found that our assembly algorithm significantly improves mAP performance (PAF versus without PAF, one-way ANOVA P values: tri-mouse 4.656 × 10−11, pups 3.62 × 10−12, marmosets 1.33 × 10−28, fish 1.645 × 10−6; Extended Data Fig. 7b,c). Collectively, direct assembly to tracking (that is, the bottom-up method) is likely the optimal approach for most users as it reasons over the whole image.

### Predicting animal identity from images

Animals sometimes differ visually, for example due to distinct coat patterns, because they are marked, or because they carry different instruments (such as an integrated microscope24). To allow our method to take advantage of such scenarios and improve tracking later on, we developed a network head that learns the identity (ID) of animals with the same CNN backbone. To benchmark the ID output, we focused on the marmoset data, where (for each pair) one marmoset had light blue dye applied to its tufts. ID prediction accuracy on the test images ranged from >0.99 for the keypoints closest to the marmoset's head to 0.95 for more distal keypoints (Fig. 2e and Extended Data Fig. 3c). Thus, DeepLabCut can reidentify the animal on a per-body-part basis (Fig. 2e).
### Tracking of individuals

Once keypoints are assembled into individual animals, the next step is to link them temporally. To measure performance in the next steps, entire videos (one from each dataset) were manually refined to form ground truth sequences (Fig. 3a and Table 1).

Reasoning over the whole video for tracking individuals is not only computationally costly, but also unnecessary. For instance, when animals are far apart, it is straightforward to link each one correctly across time. Thus, we devised a divide-and-conquer strategy. We use a simple, online tracking approach to form reliable 'tracklets' from detected animals in adjacent frames. Difficult cases (for example, when animals are closely interacting or after occlusion) often interrupt the tracklets, causing ambiguous fragments that cannot be easily temporally linked. We address this crucial issue post hoc by optimally stitching tracklets using multiple spatio-temporal cues.

Assembled animals are linked across frames to form tracklets, that is, fragments of continuous trajectories. This task entails the propagation of an animal's identity in time by finding the optimal association between an animal and its predicted location in the adjacent frame (Fig. 3b–d). The prediction is made by a lightweight 'tracker'. In particular, we implemented a box tracker and an ellipse tracker. Whereas the former is standard in the object tracking literature (for example, refs. 25,26), we recognized the sensitivity of its formulation to outlier detections (as it is mostly used for pedestrian tracking). Thus, the ellipse tracker was developed to provide a finer parametrization of an animal's geometry. Overall, the ellipse tracker behaves better than the box tracker, reaching near-perfect multi-object tracking accuracy (MOTA; 0.97 versus 0.78 for the box tracker) and producing on average 92% fewer false negatives; no differences in the switch rate were observed (Fig. 3e).

Because of occlusions, dissimilarity between an animal and its predicted state, or other challenging yet common multi-animal tracking issues, tracklets can be interrupted and therefore rarely form complete tracks across a video. The remaining challenge therefore is to stitch these sparse tracklets so as to guarantee continuity and kinematic consistency. Our approach is to cast this task as a global minimization problem, where connecting two candidate tracklets incurs a cost inversely proportional to the likelihood that they belong to the same track. Advantageously, the problem can now be elegantly solved using optimization techniques on graph and affinity models (Fig. 3c,d). Compared to only local tracking, we find that our stitching method reduces switches, even in the challenging fish and marmoset datasets (average reduction compared to local ellipse tracking, 63%; Fig. 3e). To handle a wide range of scenarios, multiple cost functions are devised to model the affinity between a pair of tracklets based on their shape, proximity, motion, dynamics and/or appearance (below and Supplementary Videos 1–4).

Last, to allow users to understand the error rate and correct errors, we developed a Refine Tracklets GUI. Here, we leverage the confidence of the tracking to flag sequences of frames that might need attention, namely when swaps might occur (Extended Data Fig. 1b).

Other recent methods for tracking animals have been proposed, such as idtracker.ai27. While this tool does not perform pose estimation, we wanted to specifically compare tracking performance.
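To illustrate the local tracking step in the simplest terms (a didactic sketch, not DeepLabCut's box or ellipse tracker), frame-to-frame association can be posed as an assignment problem over a cost matrix, here using centroid distance as the only cue:

```python
# Simplified frame-to-frame linking: solve the assignment problem on
# pairwise centroid distances with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_centroids, curr_centroids, max_dist=30.0):
    """Return {prev_index: curr_index} matches between adjacent frames."""
    cost = np.linalg.norm(
        prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Discard implausible matches; unmatched animals start/end tracklets.
    return {r: c for r, c in zip(rows, cols) if cost[r, c] <= max_dist}

prev = np.array([[10.0, 10.0], [100.0, 80.0]])   # two animals at time t
curr = np.array([[98.0, 84.0], [12.0, 11.0]])    # the same animals at t+1
print(link_frames(prev, curr))  # {0: 1, 1: 0}
```

A real tracker additionally predicts each animal's state forward in time (the paper's box and ellipse parametrizations) and combines several affinity cues, but the assignment structure stays the same.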
We attempted to use the easiest (tri-mouse) and marked-animal (marmoset) datasets with idtracker.ai. After an extensive grid search for hyperparameters, only the tri-mouse dataset could be reliably tracked, yet the performance of our method was significantly better: one-sided, one-sample t-tests indicated that idtracker.ai performed significantly worse than DeepLabCut on both datasets (tri-mouse t = −11.03, P = 0.0008, d = 5.52; marmosets t = −8.43, P = 0.0018, d = 4.22; Supplementary Video 5 and Extended Data Fig. 8). Note, for keypoint selection we remain fully agnostic to the user-defined inputs, giving the user freedom over what keypoints ultimately serve their research, but we do guide the user by showing them how such decisions could affect performance (Extended Data Fig. 9).

### Leveraging animal ID and reID in tracking

When animals can disappear from the field of view, they cannot be tracked by temporal association alone and appearance cues are necessary. Indeed, for the marmosets, incorporating visual appearance learned in a supervised fashion further reduced the number of switches by 26% (Fig. 3e). Additionally, we next considered the case of animals that are not clearly distinguishable to the human annotator, so that no ground truth can easily be provided. To tackle this challenge, we introduce an unsupervised method based on transformers to learn animal ID via metric learning (Fig. 4a–c and Methods). This provides up to a 10% boost in MOTA performance on the very challenging fish data, particularly in difficult sequences (Fig. 4d).

### Social marmosets

Finally, we demonstrate a use case of multi-animal DeepLabCut by analyzing 9 h (824,568 frames) of home-cage behavior of pairs of marmosets (Fig. 5a,b). We tracked using reID on a frame-by-frame basis rather than relying only on tracklet information. We found that the marmosets display diverse postures that are captured by principal component analysis on egocentrically aligned poses (Fig. 5c,d). Furthermore, we found that when the animals are close, their bodies tend to be aligned and they tend to look in similar directions (Fig. 5e,f). Finally, we related the posture and the spatial relationship between the animals and found a nonrandom distribution. For instance, marmosets tended to face the other animal when apart (Fig. 5g,h). Thus, DeepLabCut can be used to study complex social interactions over long timescales.

## Discussion

Here we introduced a multi-animal pose estimation and tracking system, thereby extending DeepLabCut19,20,21. We developed and leveraged more powerful CNNs (DLCRNet) that we show have strong performance for animal pose estimation. Due to the variable body shapes of animals, we developed a data-driven way to automatically find the best skeleton for animal assembly, and we designed fast trackers that also reason over long timescales, are more robust to the body plan and outperform tailored animal tracking methods such as idtracker.ai. Our method is flexible and not only deals with multiple animals (with the same body plan), but also with one agent dealing with multiple others (as with the parenting mouse, where there are identical-looking pups but a unique adult mouse). For animal pose and tracking, leveraging identity can be an important tool4. We introduce ways to allow both marked-animal (supervised) and unsupervised animal identity tracking.
For marked animals, this means that if users consistently label a known feature across the dataset (such as the blue tufts of the 40 pairs of marmosets included in labeled data, or a consistent marker on a mouse's tail), they can simply set identity to 'true' (or check the box in the GUI) in DeepLabCut, and this trains an ID head. If the animals are not marked, users can use the reID tracking method that performs unsupervised identity estimation. Thereby, our framework integrates various costs related to movement statistics and the learned animal identity.

Open-access benchmark datasets are critical to collectively advance tools7. Here, we open-source four datasets that pose different challenges. As we show, there is little room for improvement on the tri-mouse data, but the schooling fish is not a solved problem. Thus, while our method is fast, generally robust, and can leverage the identity of animals to enhance pose-tracking performance, we believe the benchmark can also spur progress on how to improve performance for occluded, similar-looking animals.

We developed both bottom-up and top-down variants for multi-animal DeepLabCut (maDLC), allowing the user an extended selection of methods (Supplementary Note 1). While our results suggest that the bottom-up pipeline should be the default, depending on the application, top-down approaches might be better suited. Both classes have limitations and features (reviewed in ref. 7).

In this work, we strove to develop high-performance code that requires limited user input yet retains flexibility. In the code, we provide 3D support for multi-animal pose estimation (via multi-camera use), and this multi-animal variant can be integrated with our real-time software, DeepLabCut-Live!28. Another important user input is at the stage of tracking, where users can specify how many animals should be identified in a given video. In this paper, we test up to 14 animals, but this is not a hard upper limit. One 'upper limit' is ultimately the camera resolution, as one needs to be able to localize keypoints of animals. Thus, if animals are very small, other tracking tools might be better suited. From pose, to optimal skeleton selection, to tracking: all of the outlined steps can be run in ten lines of code or entirely from a GUI, such that zero programming is required (https://deeplabcut.github.io/DeepLabCut).

## Methods

### Tri-mouse dataset

Three wild-type (C57BL/6J) male mice ran on a paper spool following odor trails19. These experiments were carried out in the laboratory of V.N. Murthy at Harvard University (housing temperature 20–25 °C, humidity 20–50%). Data were recorded at 30 Hz with 640 × 480 pixel resolution, acquired with a Point Grey Firefly camera (FMVU-03MTM-CS). One human annotator was instructed to localize the 12 keypoints (snout, left ear, right ear, shoulder, four spine points, tail base and three tail points) across 161 frames sampled from within DeepLabCut using the k-means clustering approach (across eight videos). All surgical and experimental procedures for mice were in accordance with the NIH Guide for the Care and Use of Laboratory Animals and approved by the Harvard Institutional Animal Care and Use Committee (IACUC).

### Parenting behavior

Parenting behavior is a pup-directed behavior observed in adult mice involving complex motor actions directed toward the benefit of the offspring30,31. These experiments were carried out in the laboratory of C. Dulac at Harvard University (housing temperature 20–25 °C, humidity 20–50%).
The behavioral assay was performed in the home cage of single-housed adult (less than 180 d old) female C57BL/6J Mus musculus in dark (red light only) conditions. For these videos, the adult mouse was monitored for several minutes in the cage, followed by the introduction of a pup (4 d old, sex unknown) in one corner of the cage. The behavior of the adult and pup was monitored for a duration of 15 min. Video was recorded at 30 Hz using a Microsoft LifeCam camera (part no. 6CH-00001) with a resolution of 1,280 × 720 pixels, or a Geovision camera (model no. GV-BX4700-3V) also acquired at 30 frames per second (fps) at a resolution of 704 × 480 pixels. A human annotator labeled on the adult animal the same 12 body points as in the tri-mouse dataset, and five body points on the pup along its spine. Initially only the two ends were labeled; intermediate points were added by interpolation, and their positions were manually adjusted if necessary. Frames were generated from across 25 videos.

### Marmoset home-cage behavior

Videos of common marmosets (Callithrix jacchus) were made in the laboratory of G. Feng at MIT. Male and female marmoset pairs housed here (n = 50, 30 pairs, age range of 2 to 12 years) were recorded using Kinect V2 cameras (Microsoft) with a resolution of 1,080 pixels and a frame rate of 30 Hz. After acquisition, images to be used for training the network were manually cropped to 1,000 × 1,000 pixels or smaller. For our analysis, we used 7,600 labeled frames from 40 different marmosets collected from three different colonies (in different facilities; thus over 20 videos were used for dataset generation). Each cage contains a pair of marmosets, where one marmoset had light blue dye applied to its tufts. One human annotator labeled the 15 body points on each animal present in the frame (frames contained either one or two animals). All animal procedures were overseen by veterinary staff of the MIT and Broad Institute Department of Comparative Medicine, in compliance with the NIH guide for the care and use of laboratory animals, and approved by the MIT and Broad Institute animal care and use committees.

As a test of out-of-domain generalization, we additionally labeled 300 frames from ten new cages and animals. See Fig. 5 for example images and results. We also analyzed two long-term recording sessions from pairs of marmosets with the DLCRNet_ms5 model, by reidentifying each marmoset in each frame with the ID head. Overall we considered about 9 h (824,568 frames) from two different home cages. We computed the principal components for egocentric postures as well as illustrated the animals' relative head and body orientations (Fig. 5). For Fig. 5e, the distances are normalized based on the running average (over a second) of the distance between each ear tuft and the center of the head. This measurement correlates well with depth values in videos recorded with a depth channel (which was not done for the example sessions). To make the postural data egocentric, we first centered the data around 'Body2' (x = 0, y = 0) and then rotated it such that the line formed by 'Body1', 'Body2' and 'Body3' was as close to the line x = 0 as possible.
### Fish schooling behavior

Schools of inland silversides (Menidia beryllina, n = 14 individuals per school, sex unknown but likely equal numbers of females and males, aged approximately 9 months) were recorded in the Lauder Laboratory at Harvard University while swimming at 15 speeds (0.5 to 8 body lengths per second, in 0.5 body-length-per-second intervals) in a flow tank with a total working section of 28 × 28 × 40 cm as described in previous work32, at a constant temperature (18 ± 1 °C) and salinity (33 parts per thousand), at a Reynolds number of approximately 10,000 (based on body length). Dorsal views of steady swimming across these speeds were recorded by high-speed video cameras (FASTCAM Mini AX50, Photron) at 60–125 fps (feeding videos at 60 fps, swimming alone at 125 fps). The dorsal view was recorded above the swim tunnel, and a floating Plexiglas panel at the water surface prevented surface ripples from interfering with the dorsal-view videos. Five keypoints (tip, gill, peduncle, dorsal fin tip and caudal tip) were labeled on frames taken from five videos.

### Dataset properties

All frames were labeled with the annotation GUI; depending on the dataset, between 100 and 7,600 frames were labeled (Table 1). We illustrated the diversity of the postures by clustering (Extended Data Fig. 2). To assess the level of interactions, we evaluate a Proximity Index (Extended Data Fig. 2m), whose idea is inspired by ref. 33 but whose computation was adapted to keypoints. For each individual, instead of delineating bounding boxes to determine the vicinity of an animal, we define a disk centered on the individual's centroid and of sufficiently large radius that all of that individual's keypoints are inscribed within the disk; this is a less static description of the immediate space an animal can reach. The index is then taken as the ratio between the number of keypoints within that region that belong to other individuals and the number of keypoints of the individual of interest (Extended Data Fig. 2m; a sketch of this computation is given at the end of this subsection). For each dataset we created one random split with 70% of the data used for training and the rest for testing (unless otherwise noted). Note that identity prediction accuracy (Fig. 2d) and tracking performance (Fig. 3e) are reported on all three splits, and all show little variability. The data are available as a benchmark challenge at https://benchmark.deeplabcut.org/.

DeepLabCut consists of keypoint detectors comprising a deep CNN pretrained on ImageNet as a backbone, together with multiple deconvolutional layers13,19,21. Here, as backbones we considered Residual Networks (ResNets)22 and EfficientNets21,23. Other backbones are integrated in the toolbox21,28, such as MobileNetV2 (ref. 34). We use a stride of 16 for the ResNets (achieved by atrous convolution) and then upsample the filter banks by a factor of two to predict the score maps and location refinement fields with an overall stride of eight. Furthermore, we developed a multi-scale architecture that upsamples from conv5 and fuses those filters with filters learned as 1 × 1 convolutions from conv3. This bank is then upsampled by a factor of two via deconvolution layers. This architecture thus learns from multiple scales with an overall stride of four (including the upsampling in the decoder). We implemented a similar architecture for EfficientNets. These architectures are called ResNet50_strideX and (EfficientNet) bY_strideX for strides four to eight; we used ResNet50 as well as EfficientNets B0 and B7 for experiments (Extended Data Fig. 3).
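A minimal sketch of the Proximity Index computation described above (our reading of the text, not the authors' implementation; assumes at least two animals per frame):

```python
# Proximity Index: for each individual, count other animals' keypoints
# falling inside the disk that encloses all of that individual's own
# keypoints, relative to its own keypoint count.
import numpy as np

def proximity_index(keypoints_by_animal):
    """keypoints_by_animal: list of (n_keypoints, 2) arrays, one per animal."""
    indices = []
    for i, kpts in enumerate(keypoints_by_animal):
        centroid = kpts.mean(axis=0)
        # Smallest radius such that all own keypoints lie within the disk.
        radius = np.linalg.norm(kpts - centroid, axis=1).max()
        others = np.vstack([k for j, k in enumerate(keypoints_by_animal)
                            if j != i])
        inside = np.linalg.norm(others - centroid, axis=1) <= radius
        indices.append(inside.sum() / len(kpts))
    return indices
```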
We further developed a multi-scale architecture (DLCRNet_ms5) that fuses high-resolution feature maps with lower-resolution feature maps (Fig. 1c): we concatenated the feature map from conv5, the feature map learned as a 3 × 3 convolution followed by a 1 × 1 convolution from conv3, and the feature map learned as two stacked 3 × 3 convolutions and a 1 × 1 convolution from conv2. This bank is then upsampled via (up to) two deconvolution layers. Depending on how many deconvolution layers are used, this architecture learns from multiple scales with an overall stride of 2–8 (including the upsampling in the decoder). Note, during our development phase we used 95% train and 5% test splits of the data; this testing is reported at http://maDLCopt.deeplabcut.org and in our preprint35.

DeepLabCut creates three output layers per keypoint that encode an intensity and a vector field. The purpose of the deconvolution layers is to upsample the spatial information (Fig. 1b,c). Consider an input image I(x,y) with ground truth keypoint $(x_k, y_k)$ for index k. One of the output layers encodes the confidence that keypoint k is at a particular location, $S^k(p,q)$, and the other two layers encode the x- and y-differences (in pixels of the full-sized image) between the original location and the corresponding location in the downsampled (by the overall stride) image, $L_x^k(p,q)$ and $L_y^k(p,q)$. For each training image the architecture is trained end-to-end to predict those outputs. Thereby, the ground truth keypoint is mapped into a target score map, which is 1 for pixels closer to the target (which can be a subpixel location) than radius r and 0 otherwise. We minimize the cross-entropy loss for the score map $S^k$, and the location refinement loss is calculated as a Huber loss13,19.

To link specific keypoints within one animal, we use PAFs, which were proposed by Cao et al.9. Each (ground truth) PAF, $P_x^l(p,q)$ and $P_y^l(p,q)$, for limb l connecting keypoints $k_i$ and $k_j$ places a directional unit vector at every pixel within a predefined distance from the ideal line connecting the two keypoints (modulated by pafwidth). We trained DeepLabCut to also minimize the L1 loss between the predicted and true PAFs, which is added to the other losses.

Inspired by Cao et al.9, we refine the score maps and PAFs in multiple stages. As can be seen from Fig. 1b, at the first stage the original image feature from the backbone is fed into the network to predict the score map, PAF and feature map. The output of each branch, concatenated with the feature map, is fed into the subsequent stages. However, unlike Cao et al., we observed that simply adding more stages can cause performance degradation. To overcome this, we introduced shortcut connections between two consecutive stages on the score map branch to improve multi-stage prediction. Examples of score maps, location refinement fields and PAFs are shown in Fig. 1b.

For training, we used the Adam optimizer36 with a learning schedule (0.0001 for the first 7,500 iterations, then 5 × 10−5 until 12,000 iterations and then 1 × 10−5) unless otherwise noted. We trained for 60,000–200,000 (for the marmosets) iterations with batch size 8; this was enough to reach good performance (Fig. 2a and Extended Data Fig. 3). During training we also augmented images using techniques including rotation, covering with random boxes and motion blur.
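The target construction described above can be sketched as follows (a simplified illustration under assumed parameter values, not the toolbox code): the target score map is 1 within radius r of the keypoint, and the location refinement fields store the offset from each downsampled cell center back to the true, possibly subpixel, position:

```python
# Build training targets for one keypoint: binary score map S^k plus
# location refinement fields L_x^k, L_y^k (offsets in full-image pixels).
import numpy as np

def make_targets(keypoint, image_shape, stride=4, r=5.0):
    x_k, y_k = keypoint                       # ground truth, full-image pixels
    h, w = image_shape[0] // stride, image_shape[1] // stride
    q, p = np.mgrid[0:h, 0:w]                 # cells of the downsampled map
    # Center of each cell in full-image coordinates (cf. equation (1)).
    cx = p * stride + stride / 2
    cy = q * stride + stride / 2
    dist = np.sqrt((cx - x_k) ** 2 + (cy - y_k) ** 2)
    score = (dist <= r).astype(np.float32)    # target score map S^k
    loc_x = (x_k - cx) * score                # refinement, only near target
    loc_y = (y_k - cy) * score
    return score, loc_x, loc_y

score, lx, ly = make_targets((101.3, 57.8), (256, 256))
print(int(score.sum()))  # number of positive cells around the keypoint
```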
We also developed a keypoint-aware image cropping technique to occasionally augment regions of the image that are dense in keypoints. Crop centers are sampled by applying, at random, one of the following two strategies: uniform sampling over the whole image; or sampling based on keypoint density, where the probability of a point being sampled increases in proportion to its number of neighbors (within a radius equal to 10% of the smallest image side). Crop centers are further shifted along both dimensions by random amounts no greater than 40% of the crop size—the hyperparameters can be changed by the user.

### CNN-based identity prediction

For animal identification we used a classification approach4, while also considering spatial information. To have a monolithic solution (with just a single CNN), we simply predict in parallel the identity of each animal from the image. For this purpose, n deconvolution layers are added for n individuals. The network is trained to predict the summed score map for all keypoints of that individual. At test time, we then look up which of the output classes has the highest likelihood (for a given keypoint) and assign that identity to the keypoint. This output is trained jointly in a multi-task configuration. We evaluated the performance of identity prediction on the marmoset dataset (Fig. 2e).

Identity prediction can be leveraged by DeepLabCut in three different ways: (1) for assembly, by grouping keypoints based on their predicted identity; (2) for local, frame-by-frame tracking, using a soft-voting scheme where body parts are regarded as individual classifiers providing an identity probability; and (3) for global stitching, by weighing down the cost of edges connecting two tracklets of similar appearance (as in Figs. 3 and 4). These three sequential stages can thus be made reliant on visual appearance features alone, as done with the long recordings of marmoset behavior (Fig. 5).

### Multi-animal inference

Any number of keypoints can be defined and labeled with the toolbox; additional ones can be added later on. Based on our experience and testing, we recommend labeling more keypoints than a subsequent analysis might require, since this improves the part detectors19 and, more importantly, animal assembly (Extended Data Fig. 9a).

Before decoding, score maps are smoothed with a Gaussian kernel of spread σ = 1 to make peaks more salient37. For each keypoint one obtains the most likely keypoint location (x*,y*) by taking the maximum (p*,q*) = argmax(p,q)Sk(p,q) and computing:

$$\begin{array}{l}{x}^{* }={p}^{* }\cdot \lambda +\lambda /2+{L}_{x}^{k}({p}^{* },{q}^{* })\\ {y}^{* }={q}^{* }\cdot \lambda +\lambda /2+{L}_{y}^{k}({p}^{* },{q}^{* })\end{array}$$ (1)

with overall stride λ. If there are multiple keypoints of type k present, then one can naturally take the local maxima of Sk to obtain the corresponding detections. Local maxima are identified via nonmaximum suppression with 2D max pooling of the score maps. Thus, one obtains putative keypoint proposals from the score maps and location refinement fields. We then use the PAFs to assign the cost for linking two keypoints (within a putative animal). For any pair of keypoint proposals (that are connected via a limb as defined by the part affinity graph) we evaluate the affinity cost by integrating along the line γ connecting the two proposals, normalized by the length of γ:

$$\int \parallel {P}_{x,y}^{l}\parallel {\mathrm{d}}\gamma /\int {\mathrm{d}}\gamma$$ (2)

This integral is computed by sampling.
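The decoding step of equation (1) and the sampled line integral of equation (2) can be sketched as follows. This is a minimal illustration: the stride, the number of samples and the nearest-neighbor lookup on the PAF grid are our assumptions, and equation (2) is followed literally by averaging the PAF magnitude along the line.

```python
import numpy as np

def decode_peak(S, Lx, Ly, stride=8):
    """Most likely keypoint location per equation (1)."""
    q, p = np.unravel_index(np.argmax(S), S.shape)
    x = p * stride + stride / 2 + Lx[q, p]
    y = q * stride + stride / 2 + Ly[q, p]
    return x, y, S[q, p]

def affinity_cost(Px, Py, kpt_a, kpt_b, stride=8, n_samples=10):
    """Affinity cost of linking two keypoint proposals (equation (2)):
    the PAF magnitude integrated along the connecting line, normalized
    by its length, approximated by sampling."""
    ts = np.linspace(0.0, 1.0, n_samples)
    xs = kpt_a[0] + ts * (kpt_b[0] - kpt_a[0])
    ys = kpt_a[1] + ts * (kpt_b[1] - kpt_a[1])
    # map image coordinates onto the stride-downsampled PAF grid
    ps = np.clip((xs / stride).astype(int), 0, Px.shape[1] - 1)
    qs = np.clip((ys / stride).astype(int), 0, Px.shape[0] - 1)
    return float(np.hypot(Px[qs, ps], Py[qs, ps]).mean())
```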
Thus, for a given part affinity graph, one gets a (possibly) large number of detections and costs. The next step is to assemble those detections into animals.

### Data-driven PAF graph selection

To relieve the user from manually defining connections between keypoints, we developed an entirely data-driven procedure. Models are trained on a complete graph to learn all possible body part connections. We tested whether randomly pruning the complete marmoset skeleton (to 25, 50 and 75% of its original size: that is, 26, 52 and 78 edges, or 52, 104 and 156 PAFs) to alleviate memory demands could still yield acceptable results. We found that pruning a large graph before training to a fourth of its original size was harmful (mAP loss of 15–20 points; Extended Data Fig. 9); at half and 75% of its size, a performance equivalent to that of the full graph was reached at 24 edges, although it remained about 1.5 mAP points under the maximal mAP score observed overall. Consequently, for large skeletons, a random subgraph is expected to yield only slightly inferior performance at a lesser computational cost.

The graph is then pruned based on edge discriminability power on the training set. For this purpose, within- and between-animal part affinity cost distributions (bin width 0.01) are evaluated (see Extended Data Fig. 4 for the mouse dataset). Edges are then ranked in decreasing order of their ability to separate the two distributions, evaluated from the area under the ROC curve (auROC). The smallest, data-driven graph is taken as the maximum spanning tree (that is, a subgraph covering all keypoints with the minimum possible number of edges that also maximizes part affinity costs). For graph searches following a network's evaluation, up to nine increasingly redundant graphs are formed by extending the minimal skeleton progressively with strongly discriminating edges in the order determined above. By contrast, baseline graphs are grown from a skeleton a user would naively draw, with edges iteratively added in reversed order (that is, from least to most discriminative). The graph jointly maximizing purity and the fraction of connected keypoints is the one retained to carry out the animal assemblies.

### Animal assembly

Animal assembly refers to the problem of assigning keypoints to individuals. Yet, reconstructing the full pose of multiple individuals from a set of detections is NP-hard, as it amounts to solving a k-dimensional matching problem (a generalization of bipartite matching from 2 to k disjoint subsets)9,38. To make the task more tractable, we break the problem down into smaller matching tasks, in a manner akin to Cao et al.9. For each edge type in the data-driven graph defined earlier, we first pick strong connections based on affinity costs alone. Following the identification of all optimal pairs of keypoints, we seek unambiguous individuals by searching this set of pairs for connected components—in graph theory, these are subsets of keypoints all reachable from one another but that do not share connections with any additional keypoint; consequently, only connectivity, but not spatial information, is taken into account. Breadth-first search runs in linear time, which thus allows the rapid predetermination of unique individuals. Note that, unlike in ref. 9, redundant connections are seamlessly handled and do not require changes in the formulation of the animal assembly. Then, remaining connections are sorted in descending order of their affinity costs (equation (2)) and greedily linked.
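The two assembly stages just described (connected components of the strongest links, followed by greedy linking of the remaining connections) can be sketched as follows. Ambiguity checks, such as refusing to merge two detections of the same keypoint type, are omitted for brevity, and the data layout is our assumption.

```python
import networkx as nx

def assemble(strong_pairs, remaining_links):
    """Group keypoint detections into putative animals.

    strong_pairs: per-edge-type optimal links, as (detection, detection) tuples.
    remaining_links: (affinity_cost, detection_a, detection_b) tuples."""
    G = nx.Graph(strong_pairs)
    # unambiguous individuals: connected components (found via
    # breadth-first search, in linear time)
    unambiguous = [set(c) for c in nx.connected_components(G)]
    # greedily add remaining connections, strongest first
    for cost, a, b in sorted(remaining_links, key=lambda t: t[0], reverse=True):
        G.add_edge(a, b)
    return unambiguous, [set(c) for c in nx.connected_components(G)]
```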
To further improve the assembly's robustness to ambiguous connections (that is, a connection attempting to either link keypoints belonging to two distinct individuals or overwrite existing ones), the assembly procedure can be calibrated by determining the prior probability of an animal's pose as a multivariate normal distribution over the distances between all pairs of keypoints. The mean and covariance are estimated from the labeled data via density estimation with a Gaussian kernel, with the bandwidth automatically chosen according to Scott's rule. A skeleton is then only grown if the candidate connection reduces the Mahalanobis distance between the resulting configuration and the prior (referred to as 'with calibration' in Fig. 2c). Last, our assembly implementation is fully parallelized to benefit greatly from multiple processors (Extended Data Fig. 5).

Optionally (and only when analyzing videos), affinity costs between body parts can be weighted so as to prioritize strong connections that were preferentially selected in past frames. To this end, and inspired by ref. 39, we compute a temporal coherence cost as follows: $$\frac{1}{j}\mathop{\sum }\nolimits_{i = 1}^{j}{e}^{-\gamma {{\Delta }}t{\left\Vert c-{c}_{n}\right\Vert }^{2}}$$, where γ controls the influence of distant frames (and is set to 0.01 by default), c and cn are the current connection and its closest neighbor in the relevant past frame, and Δt is the temporal gap separating these frames.

### Top-down pose estimation

In general, top-down pose estimation is characterized by two stages that require an object detector and a single-animal pose estimation model7. This pipeline requires bounding box annotations (which can come from many different algorithms). Here, bounding boxes were determined from ground truth keypoint coordinates. If a box's aspect ratio was lower than 4:1, its smallest side was extended by 10 pixels. Box bounds were further enlarged by 25 pixels to make sure the bounding boxes covered an animal's entire body. We pad the cropped images to a square and then resize them to the original size (400 × 400) to keep the aspect ratio constant. Second, we retrain a model (either with or without PAFs) on training images cropped by these bounding boxes. For inference, we retain the best prediction per bounding box, as decided by detection confidence for the model without PAFs and by the highest assembly score for the model with PAFs. Finally, for evaluation, we map the coordinates of our final predictions back to the original images.

### Detection performance and evaluation

To compare the human annotations with the model predictions we used the Euclidean distance to the closest predicted keypoint (r.m.s.e.) calculated per keypoint. Depending on the context, this metric is shown for a specific keypoint, averaged over all keypoints or averaged over a set of train or test images (Fig. 2a and Extended Data Fig. 3). Nonetheless, unnormalized pixel errors may be difficult to interpret in certain scenarios; for example, marmosets dramatically vary in apparent size as they leap from the top to the bottom of the cage. Thus, we also calculated the percentage of correct keypoints (PCK) metric21,40; that is, the fraction of predicted keypoints that fall within a threshold distance of the location of the ground truth detection. PCK was computed in relation to a third of the tip–gill distance for the fish dataset, and a third of the left–right ear distance for the remaining ones.
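The PCK computation just described is straightforward; the following is a minimal sketch, with arrays of matched predicted and ground truth keypoint coordinates and a per-image reference length as our assumed inputs:

```python
import numpy as np

def pck(pred, gt, ref_length):
    """Fraction of predicted keypoints within a third of a reference
    length (for example, the left-right ear distance) of the ground truth.

    pred, gt: arrays of shape (n_keypoints, 2); ref_length: scalar."""
    distances = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean(distances <= ref_length / 3))
```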
Animal assembly quality was evaluated in terms of mAP, computed over object keypoint similarity (OKS) thresholds ranging from 0.50 to 0.95 in steps of 0.05, as is standard in the human pose literature and the COCO challenges18. The keypoint standard deviation was set to 0.1. As interpretable metrics, we also computed the number of ground truth keypoints left unconnected (after assembly) and purity—an additional criterion for quality that can be understood as the accuracy of the assignment of all keypoints of a putative subset to the most frequent ground truth animal identity within that subset41. Since pups are very hard to label consistently (see Extended Data Fig. 7 for examples), we allow flips between symmetric pairs of keypoints (end1 versus end2, or interm1 versus interm3; Extended Data Fig. 1) as acceptable detection errors when evaluating keypoint similarity.

### Statistics for assessing the data-driven method

Two-way, repeated-measures ANOVAs were performed using Pingouin (v.0.5.0)42 to test whether graph size and assembling method (naive versus data-driven versus calibrated assembly) had an impact on the fraction of unconnected body parts and on assembly purity. Since sphericity was violated, the Greenhouse–Geisser correction was applied. Provided a main effect was found, we conducted multiple post hoc (paired, two-sided) tests adjusted with the Benjamini–Hochberg false discovery rate correction to locate pairwise differences. Hedges' g was calculated to report effect sizes between sets of observations.

### Comparison to state-of-the-art pose estimation models

For benchmarking, we compared our architectures to current state-of-the-art architectures on COCO18, a challenging, large-scale multi-human pose estimation benchmark. Specifically, we considered HRNet11,43 as well as ResNet backbones22 with Associative Embedding10, as implemented in the MMPose toolbox (https://github.com/open-mmlab/mmpose). We chose them as a control group for their simplicity (ResNet) and performance (HRNet). We used the bottom-up variants of both models, which leverage associative embedding as the grouping algorithm10. In particular, the bottom-up variant of HRNet we used has an mAP on COCO comparable to that of the state-of-the-art model HigherHRNet11 (69.8 versus 70.6 for a multi-scale test, and 65.4 versus 67.7 for a single-scale test). For a fair comparison, we used the same train and test splits. The total number of training epochs was set such that models from the two groups see roughly the same number of images. A hyperparameter search was performed manually. For the tri-mouse and (largest) marmoset datasets, we found that the default settings for excellent performance on COCO gave optimal accuracy, except that we needed to modify the total training steps to match DeepLabCut's. For both the marmoset and tri-mouse datasets, the initial learning rate was 0.0015. For the tri-mouse dataset, the total number of epochs was 3,000, and the learning rate decayed by a factor of 10 at 600 and 1,000 epochs. For the marmoset dataset, we trained for 50 epochs and the learning rate decayed after 20 and 40 epochs. The batch size was 32 and 16 for ResNet-AE and HRNet-AE, respectively. For the smaller fish and parenting datasets, we found that a smaller learning rate and a smaller batch size gave better results; a total of 3,000 epochs was used.
After the hyperparameter search, we set the batch size to four and the initial learning rate to 0.0001, which then decayed at 1,000 and 2,000 epochs. As within DeepLabCut, multi-scale testing and flip testing were not performed (although they are common for COCO evaluation). For the parenting dataset, MMPose models can only be trained on one dataset (simultaneously), which is why these models are not trained to predict the adult mouse, and we only compare the performance on the pups. Full results are shown in Fig. 2.

### Benchmarking idtracker.ai

We used idtracker.ai27 (v.3, taken from commit 6b89601b) and tested it on the tri-mouse and marmoset data. We report the MOTA results in Extended Data Fig. 8. For the marmoset data, reasonable parameters to segment individual animals could not be found with the GUI (likely due to the complex background), so we performed a grid search over the minimum and maximum intensity thresholds, the two critical parameters, in steps of 2 over the range 0 to 255. Even with these efforts, we still could not get reasonable results (Supplementary Video 5); that is, MOTA was negative.

### DeepLabCut tracking modules

Since DeepLabCut provides a strong predictor of individuals and their keypoints, detections can be linked across frames using a tracking-by-detection approach (for example, ref. 44). Thereby, we follow a divide-and-conquer strategy of (local) tracklet generation followed by tracklet stitching (Extended Data Fig. 4b,c). Specifically, we build on the Simple Online and Realtime Tracking framework (SORT25) to generate tracklets. The inter-frame displacement of assembled individuals is estimated via Kalman filter-based trackers. The task of associating these location estimates with the model detections is then formulated as a bipartite graph matching problem solved with the Hungarian algorithm, therefore guaranteeing a one-to-one correspondence across adjacent frames. Note that the trackers are agnostic to the type of skeleton (animal body plan), which renders them robust and computationally efficient.

### Box tracker

Bounding boxes are a common and well-established representation for object tracking. Here they are computed from the keypoint coordinates of each assembled individual, and expanded by a margin optionally set by the user. The state s of an individual is parametrized as $$s=[x,y,A,r,\dot{x},\dot{y},\dot{A}]$$, where x and y are the 2D coordinates of the center of the bounding box; A, its area; and r, its aspect ratio, together with their first time derivatives. Association between detected animals and tracker hypotheses is based on the intersection-over-union measure of overlap.

### Ellipse tracker

A 2σ covariance error ellipse is fitted to an individual's detected keypoints. The state is modeled as $$s=[x,y,h,w,\theta ,\dot{x},\dot{y},\dot{h},\dot{w},\dot{\theta }]$$, where x and y are the 2D coordinates of the center of the ellipse; h and w, the lengths of its semi-axes; and θ, its inclination relative to the horizontal. We anticipated that this parametrization would better capture subtle changes in body conformation, most apparent through changes in ellipse width, height and orientation. Moreover, an error ellipse confers robustness to outlier keypoints (for example, a prediction assigned to the wrong individual, which would cause the erroneous delineation of an animal's boundary under the above-mentioned box tracking).
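A 2σ covariance error ellipse as used by this tracker can be fitted to the detected keypoints along the following lines (a minimal sketch; the eigendecomposition route and the NaN handling are standard choices on our part, not necessarily the toolbox's exact code):

```python
import numpy as np

def fit_error_ellipse(keypoints, n_std=2.0):
    """Center (x, y), semi-axis lengths (h, w) and inclination theta of
    the n_std-sigma covariance error ellipse of an individual's keypoints."""
    pts = keypoints[~np.isnan(keypoints).any(axis=1)]
    center = pts.mean(axis=0)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = evals.argsort()[::-1]
    evals, evecs = evals[order], evecs[:, order]
    semi_axes = n_std * np.sqrt(evals)            # major, minor semi-axes
    theta = np.arctan2(evecs[1, 0], evecs[0, 0])  # inclination of the major axis
    return center, semi_axes, theta
```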
In place of the ellipse overlap, the similarity cost c between detected and predicted ellipses is efficiently computed as $$c=0.8(1-d)+0.2(1-d)(\cos ({\theta }_{d}-{\theta }_{p}))$$, where d is the Euclidean distance separating the ellipse centroids, normalized by the length of the longest semi-axis. The existence of untracked individuals in the scene is signaled by assembled detections with a similarity cost lower than iou_threshold (set to 0.6 in our experiments). In other words, the higher the similarity threshold, the more conservative and accurate the frame-by-frame assignment, at the expense of shorter and more numerous tracklets.

On creation, a tracker is initialized with the required parameters described above, and all (unobservable) velocities are set to 0. To avoid tracking sporadic, spurious detections, a tracker is required to live for a minimum of min_hits consecutive frames, or is otherwise deleted. Occlusions and reidentification of individuals are handled with the free parameter max_age—the maximal number of consecutive frames tracks can remain undetected before the tracker is considered lost. We set both to 1 to delegate the tasks of tracklet reidentification and false positive filtering to our TrackletStitcher, as we shall see below.

### Tracklet stitching

Greedily linking individuals across frames is locally, but not globally, optimal. An elegant and efficient approach to reconstructing full trajectories (or tracks) from sparse tracklets is to cast the stitching task as a network flow minimization problem45,46. Reconstructing each full track is then equivalent to finding a flow through the graph from a source to a sink, subject to capacity constraints, whose overall linking cost is minimal (Extended Data Fig. 4c).

### Formulation

The tracklets collected after animal tracking are denoted as $$\{{{{{\mathcal{T}}}}}_{1},...,{{{{\mathcal{T}}}}}_{n}\}$$, and each contains a (temporally) ordered sequence of observations and time indices. Thereby, the observations are given as vectors of body part coordinates in pixels, together with likelihoods. In contrast to most approaches described in the literature, the proposed approach natively requires only spatial and temporal information, while leveraging visual information (for example, animals' identities predicted beforehand) is optional (see Fig. 3e for marmosets). This way, tracklet stitching is agnostic to the framework the poses were estimated with, and works readily on previously collected kinematic data.

We construct a directed acyclic graph G = (V,E) using NetworkX47 to describe the affinity between multiple tracklets, where the ith node Vi corresponds to the ith tracklet $${{{{\mathcal{T}}}}}_{i}$$, and E is the set of edges encoding the cost entailed by linking the two corresponding tracklets (or, in other words, the likelihood that they belong to the same track). In our experiments, tracklets shorter than five frames were flagged as residuals: they do not contribute to the construction of the graph and are incorporated only after stitching. This minimal tracklet length can be changed by the user. To drastically reduce the number of possible associations and make our approach scale efficiently to large videos, edge construction is limited to those tracklets that do not overlap in time (since an animal cannot occupy multiple spatial locations at any one instant) and that are temporally separated by no more than a certain number of frames.
By default, this threshold is automatically taken as 1.5 × τ, where τ is the smallest temporal gap guaranteeing that all pairs of consecutive tracklets are connected. Alternatively, the maximal gap to consider can be specified programmatically. The source and the sink are two auxiliary nodes that supply and demand an amount of flow k equal to the number of tracks to form. Each node is virtually split in half: an input with unit demand and an output with unit supply, connected by a weightless edge. All other edges have unit capacity and a weight w calculated from the affinity models described in the next subsection. Altogether, these constraints ensure that all nodes are visited exactly once, which thus amounts to a problem similar to covering G with k node-disjoint paths at the lowest cost. We considered different affinities for linking tracklets (Fig. 4d).

### Affinity models

#### Motion affinity

Let us consider two nonoverlapping tracklets $${{{{\mathcal{T}}}}}_{1}$$ and $${{{{\mathcal{T}}}}}_{2}$$ consecutive in time. Their motion affinity is measured from the error between the true locations of their centroids (that is, the unweighted average keypoint) and predictions made from their linear velocities. Specifically, we calculate a tracklet's tail and head velocities by averaging instantaneous velocities over its first three and last three data points (Fig. 4d). Assuming uniform, rectilinear motion, the centroid location of $${{{{\mathcal{T}}}}}_{1}$$ at the starting frame of $${{{{\mathcal{T}}}}}_{2}$$ is estimated, and we denote by df the distance between the forward prediction and the actual centroid coordinates. The same procedure is repeated backward in time, predicting the centroid location of $${{{{\mathcal{T}}}}}_{2}$$ at the last frame of $${{{{\mathcal{T}}}}}_{1}$$ from its tail velocity, yielding db. The motion affinity is then taken as the average of the two error distances.

#### Spatial proximity

If a pair of tracklets overlaps in time, we calculate the Euclidean distance between their centroids averaged over their overlapping portion. Otherwise, we evaluate the distance between one tracklet's tail and the other's head.

#### Shape similarity

Shape similarity between two tracklets is taken as the undirected Hausdorff distance between the two sets of keypoints. Although this measure provides only a crude approximation of the mismatch between two animals' skeletons, it is defined for finite sets of points of unequal size; for example, it advantageously allows the comparison of skeletons with different numbers of visible keypoints.

#### Dynamic similarity

To further disambiguate tracklets in the rare event that they are spatially and temporally close, and similar in shape, we propose to use motion dynamics in a manner akin to ref. 48. The procedure is fully data-driven, and requires no a priori knowledge of the animals' behavior. In the absence of noise, the rank of the Hankel matrix—a matrix constructed by stacking delayed measurements of a tracklet's centroid—theoretically determines the dimension of state space models; that is, it is a proxy for the complexity of the underlying dynamics49. If two tracklets originate from the same dynamical system, a single, low-order regressor should suffice to approximate them both. On the other hand, tracklets belonging to different tracks would require a higher-order (that is, more complex) model to explain their spatial evolution48.
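Before addressing noise, this Hankel-matrix rank test can be illustrated naively with a plain SVD. In this sketch, the delay depth, the rank-selection rule (anticipated from the next paragraph) and the use of one centroid coordinate at a time are our illustrative choices.

```python
import numpy as np
from scipy.linalg import hankel

def hankel_rank(series, n_delays=10, tol=0.01):
    """Approximate model order of a centroid trajectory: the rank after
    which the singular values of its Hankel matrix drop by less than `tol`."""
    x = np.asarray(series, dtype=float)         # one coordinate of the centroid
    H = hankel(x[:n_delays], x[n_delays - 1:])  # stacked delayed measurements
    s = np.linalg.svd(H, compute_uv=False)
    rel_drops = np.abs(np.diff(s)) / (s[:-1] + 1e-12)
    small = np.flatnonzero(rel_drops < tol)
    return int(small[0]) + 1 if small.size else len(s)
```

If concatenating two tracklets barely raises this estimated order above that of either tracklet alone, they plausibly originate from the same dynamical system.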
In practice, however, low rank approximation of a noisy matrix is a complex problem, as the matrix then tends to be full rank (that is, all of its singular values are nonzero). For computational efficiency, we approximate the rank of a large number of potentially long tracklets using singular value decomposition via interpolative decomposition. The optimal low rank was chosen as the rank after which the singular values drop by less than 1%.

### Problem solution for stitching

The optimal flow solution can be found using a min-cost flow algorithm. We use NetworkX's capacity scaling variant of the successive shortest augmenting path algorithm, which requires polynomial time for the assignment problem (that is, when all nodes have unit demands and supplies; ref. 50). Residual tracklets are then greedily added back to the newly stitched tracks at locations that guarantee time continuity and, when there are multiple candidates, minimize the distances to the neighboring tracklets. Note that residuals are typically very short, making these assignment decisions error-prone. To improve robustness and simultaneously reduce complexity, association hypotheses between temporally close residual tracklets are stored in the form of small directed acyclic graphs during a preliminary forward screening pass. A hypothesis's likelihood is then scored based on pairwise tracklet spatial overlap, and weighted longest paths are ultimately kept to locally grow longer, more confident residuals. This tracklet stitching process is implemented in DeepLabCut and automatically carried out after assembly and tracking. The tracks can then also be manually refined in a dedicated GUI (Extended Data Fig. 1).

### Transformer for unsupervised ID tracking

To track unidentified animals we turn to metric learning4 with transformers, which are state-of-the-art for reID of humans and vehicles51. However, in contrast to ref. 51, we created a tracking approach and wanted to make use of the task-trained CNNs, and thus require less training data. Specifically, we use the predicted coordinates of each tracklet (an individual with temporal continuity) and extract 2,048-dimensional features from the last layer of our (multi-task-trained) backbone network to form a so-called 'keypoint embedding', which contains an embedding of each detected keypoint for every individual (and encodes high-level visual features around the keypoint). We then feed this keypoint embedding to a transformer that processes the embeddings and aggregates information globally. The transformer layers have four heads and four blocks with a dimension of 768, and residual connections between blocks. The output of the transformer layers is then fed to a multi-layer perceptron that outputs a vector of dimension 128 (more layers, as in ref. 51, actually gave worse performance). We then use the output of the multi-layer perceptron to minimize a triplet loss, where embeddings within a tracklet are treated as anchor-positive pairs, while embeddings from tracklets of different individuals form anchor-negative pairs. For each test video, we extracted 10,000 triplets from the local-tracking approach (ellipse, to evaluate the capacity based on tracklets) and from the ground truth data (to evaluate the capacity of the approach, as triplets from ground truth tracks are already split into the correct number of animals). We then trained the transformer on 90% of the triplets, and evaluated it on the rest (Fig. 4).
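The triplet objective just described can be written compactly; the following sketch uses PyTorch with an illustrative margin (the framework choice, the L2 normalization and the margin value are our assumptions, not necessarily those of the implementation):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull embeddings from the same tracklet (anchor-positive pairs)
    together and push embeddings from tracklets of different individuals
    (anchor-negative pairs) apart, on L2-normalized 128-d vectors."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negative, dim=-1)
    d_ap = (a - p).pow(2).sum(dim=-1)
    d_an = (a - n).pow(2).sum(dim=-1)
    return F.relu(d_ap - d_an + margin).mean()
```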
Thus, the transformer learns to recognize the identity of each tracklet, and we then use the cosine similarity as an additional cost in our graph. For this purpose, we used the transformer to extract 128-dimensional feature vectors (appearance embeddings) per keypoint embedding, which we then used for tracking (below).

### Tracking performance evaluation

Tracking performance was assessed with the field-standard MOTA metric52. Namely, we used https://github.com/cheind/py-motmetrics to compute MOTA, which evaluates a tracker's overall performance at detecting and tracking individuals (all possible sources of errors considered: the numbers of misses, of false positives and of mismatches (switches), respectively), independently of its ability to predict an individual's location. MOTA is thereby one minus the sum of three error ratios: the ratio of misses in the sequence, computed over the total number of objects present in all frames; the ratio of false positives; and the ratio of mismatches52. The number of misses counts ground truth detections for which there is no matching tracker. The number of fragments indicates the number of times tracking was interrupted. The number of switches counts identity swaps, which occur most often when two animals pass very close to one another or when tracking resumes with a different ID after an occlusion. In our software, remaining ID swaps are automatically flagged in the Refine Tracklets GUI (Extended Data Fig. 1) by identifying instants at which the x and y coordinates of a pair of keypoints simultaneously intersect each other53.

### Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

For this study, we established four differently challenging multi-animal datasets from ecology and neuroscience. Data collection was institutionally approved: tri-mouse, parenting behavior and fish schooling by the Harvard University IACUC, and marmosets by the MIT and Broad Institute IACUC. They are available to download, minus a small amount (30%) held out as benchmark competition data, at https://benchmark.deeplabcut.org/, and on Zenodo54,55,56,57. Findings in this paper can be replicated using the downloadable data and supplied code.

## Code availability

Code to use this package is found at https://github.com/DeepLabCut/DeepLabCut with an LGPL-3.0 License. Code to reproduce the figures from this work is found at https://github.com/DeepLabCut/maDLC_NatureMethods2022.

## References

1. Kays, R., Crofoot, M. C., Jetz, W. & Wikelski, M. Terrestrial animal tracking as an eye on life and planet. Science 348, aaa2478 (2015).
2. Schofield, D. et al. Chimpanzee face recognition from videos in the wild using deep learning. Sci. Adv. 5, eaaw0736 (2019).
3. Norouzzadeh, M. S. et al. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc. Natl Acad. Sci. USA 115, E5716–E5725 (2018).
4. Vidal, M., Wolf, N., Rosenberg, B., Harris, B. P. & Mathis, A. Perspectives on individual animal identification from biology and computer vision. Integr. Comp. Biol. 61, 900–916 (2021).
5. Datta, S. R., Anderson, D. J., Branson, K., Perona, P. & Leifer, A. Computational neuroethology: a call to action. Neuron 104, 11–24 (2019).
6. Mathis, M. W. & Mathis, A. Deep learning tools for the measurement of animal behavior in neuroscience. Curr. Opin. Neurobiol. 60, 1–11 (2020).
7. Mathis, A., Schneider, S., Lauer, J. & Mathis, M. W.
A primer on motion capture with deep learning: principles, pitfalls, and perspectives. Neuron 108, 44–65 (2020). 8. Pereira, T. D., Shaevitz, J. W. & Murthy, M. Quantifying behavior to understand the brain. Nat. Neurosci. 23, 1537–1549 (2020). 9. Cao, Z., Simon, T., Wei, S.-E. & Sheikh, Y. Realtime multi-person 2D pose estimation using part affinity fields. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 7291–7299 (IEEE, 2017). 10. Newell, A., Huang, Z. & Deng, J. Associative embedding: end-to-end learning for joint detection and grouping. In Proc. 31st Conference on Neural Information Processing Systems (eds Guyon, I. et al.) 2277–2287 (NIPS, 2017). 11. Cheng, B. et al. Higherhrnet: scale-aware representation learning for bottom-up human pose estimation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 5386–5395 (IEEE, 2020). 12. Stoffl, L., Vidal, M. & Mathis, A. End-to-end trainable multi-instance pose estimation with transformers. Preprint at https://arxiv.org/abs/2103.12115 (2021). 13. Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M. & Schiele, B. DeeperCut: a deeper, stronger, and faster multi-person pose estimation model. In Proc. European Conference on Computer Vision 34–50 (Springer, 2016). 14. Kreiss, S., Bertoni, L. & Alahi, A. Pifpaf: composite fields for human pose estimation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 11977–11986 (IEEE, 2019). 15. Segalin, C. et al. The mouse action recognition system (MARS) software pipeline for automated analysis of social behaviors in mice. eLife 10, e63720 (2021). 16. Pereira, T. D. et al. SLEAP: multi-animal pose tracking. Preprint at bioRxiv https://doi.org/10.1101/2020.08.31.276246 (2020). 17. Chen, Z. et al. AlphaTracker: a multi-animal tracking and behavioral analysis tool. Preprint at bioRxiv https://doi.org/10.1101/2020.12.04.405159 (2020). 18. Lin, T.-Y. et al. Microsoft COCO: common objects in context. In Proc. European Conference on Computer Vision 740–755 (Springer, 2014). 19. Mathis, A. et al. Deeplabcut: markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289 (2018). 20. Nath, T. et al. Using deeplabcut for 3D markerless pose estimation across species and behaviors. Nat. Protoc. 14, 2152–2176 (2019). 21. Mathis, A. et al. Pretraining boosts out-of-domain robustness for pose estimation. In Proc. IEEE/CVF Winter Conference on Applications of Computer Vision 1859–1868 (IEEE, 2021). 22. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016). 23. Tan, M. & Le, Q. EfficientNet: rethinking model scaling for convolutional neural networks. In Proc. International Conference on Machine Learning 6105–6114 (PMLR, 2019). 24. Ghosh, K. K. et al. Miniaturized integration of a fluorescence microscope. Nat. Methods 8, 871–878 (2011). 25. Bewley, A., Ge, Z., Ott, L., Ramos, F. & Upcroft, B. Simple online and realtime tracking. In Proc. 2016 IEEE International Conference on Image Processing (ICIP) 3464–3468 (IEEE, 2016). 26. Bertozzi, M. et al. Pedestrian localization and tracking system with Kalman filtering. In Proc. IEEE Intelligent Vehicles Symposium, 2004 584–589 (IEEE, 2004). 27. Romero-Ferrero, F., Bergomi, M. G., Hinz, R. C., Heras, F. J. & de Polavieja, G. G. idtracker.ai: tracking all individuals in small or large collectives of unmarked animals. Nat. Methods 16, 179–182 (2019). 28. Kane, G. 
A., Lopes, G., Saunders, J. L., Mathis, A. & Mathis, M. W. Real-time, low-latency closed-loop feedback using markerless posture tracking. eLife 9, e61909 (2020). 29. Claudi, F. Mouse top detailed. Zenodo https://doi.org/10.5281/zenodo.3925997 (2020). 30. Wu, Z., Autry, A. E., Bergan, J. F., Watabe-Uchida, M. & Dulac, C. G. Galanin neurons in the medial preoptic area govern parental behaviour. Nature 509, 325–330 (2014). 31. Kohl, J. et al. Functional circuit architecture underlying parental behaviour. Nature 556, 326–331 (2018). 32. Di Santo, V., Blevins, E. L. & Lauder, G. V. Batoid locomotion: effects of speed on pectoral fin deformation in the little skate, Eucoraja erinacea. J. Exp. Biol. 220, 705–712 (2017). 33. Li, J. et al. CrowdPose: efficient crowded scenes pose estimation and a new benchmark. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 10863–10872 (IEEE, 2019). 34. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. & Chen, L.-C. MobileNetV2: inverted residuals and linear bottlenecks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4510–4520 (IEEE, 2018). 35. Lauer, J. et al. Multi-animal pose estimation and tracking with DeepLabCut. Preprint at bioRxiv https://doi.org/10.1101/2021.04.30.442096 (2021). 36. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014). 37. Huang, J., Zhu, Z., Guo, F. & Huang, G. The devil is in the details: delving into unbiased data processing for human pose estimation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 5700–5709 (IEEE, 2020). 38. Insafutdinov, E. et al. ArtTrack: articulated multi-person tracking in the wild. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 6457–6465 (IEEE, 2017). 39. Biggs, B., Roddick, T., Fitzgibbon, A. & Cipolla, R. Creatures great and small: recovering the shape and motion of animals from video. In Proc. Asian Conference on Computer Vision 3–19 (Springer, 2018). 40. Yang, Y. & Ramanan, D. Articulated human detection with flexible mixtures of parts. IEEE Trans. Pattern Anal. Mach. Intell. 35, 2878–2890 (2012). 41. Huang, A. Similarity measures for text document clustering. In Proc. Sixth New Zealand Computer Science Research Student Conference (NZCSRSC2008) Vol. 4, 9–56 (2008). 42. Vallat, R. Pingouin: statistics in Python. J. Open Source Softw. 3, 1026 (2018). 43. Sun, K., Xiao, B., Liu, D. & Wang, J. Deep high-resolution representation learning for human pose estimation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 5693–5703 (IEEE, 2019). 44. Girdhar, R., Gkioxari, G., Torresani, L., Paluri, M. & Tran, D. Detect-and-track: efficient pose estimation in videos. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 350–359 (IEEE, 2018). 45. Emami, P., Pardalos, P. M., Elefteriadou, L. & Ranka, S. Machine learning methods for data association in multi-object tracking. Preprint at https://arxiv.org/abs/1802.06897 (2018). 46. Zhang, L., Li, Y. & Nevatia, R. Global data association for multi-object tracking using network flows. In Proc. 2008 IEEE Conference on Computer Vision and Pattern Recognition 1–8 (IEEE, 2008). 47. Hagberg, A. A., Schult, D. A. & Swart, P. J. Exploring network structure, dynamics, and function using NetworkX. In Proc. 7th Python in Science Conference (eds Varoquaux, G. et al.) 11–15 (2008). 48. Dicle, C., Camps, O. I. & Sznaier, M. The way they move: tracking multiple targets with similar appearance. In Proc. 
IEEE International Conference on Computer Vision 2304–2311 (IEEE, 2013). 49. Yin, H., Zhu, Z. & Ding, F. Model order determination using the Hankel matrix of impulse responses. Appl. Math. Lett. 24, 797–802 (2011). 50. Ahuja, R. K., Magnanti, T. L. & Orlin, J. B. Network Flows: Theory, Algorithms, and Applications (Prentice-Hall, 1993). 51. He, S. et al. TransReID: transformer-based object re-identification. In Proc. IEEE/CVF International Conference on Computer Vision (ICCV) 15013–15022 (IEEE, 2021). 52. Bernardin, K. & Stiefelhagen, R. Evaluating multiple object tracking performance: the clear mot metrics. EURASIP J. Image Video Proc. 2008, 1–10 (2008). 53. Tenenbaum, J. B., De Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000). 54. Lauer, J. et al. madlc marmoset benchmark dataset—training. Zenodo https://doi.org/10.5281/zenodo.5849371 (2022). 55. Lauer, J. et al. madlc fish benchmark dataset—training. Zenodo https://doi.org/10.5281/zenodo.5849286 (2022). 56. Lauer, J. et al. madlc parenting benchmark dataset—training. Zenodo https://doi.org/10.5281/zenodo.5851109 (2022). 57. Lauer, J. et al. madlc tri-mouse benchmark dataset—training. Zenodo https://doi.org/10.5281/zenodo.5851157 (2022). ## Acknowledgements Funding was provided by the Rowland Institute at Harvard University (M.W.M., T.N., A.M. and J.L.), the Chan Zuckerberg Initiative DAF (M.W.M., A.M. and J.L.), SNSF (M.W.M.) and EPFL (M.W.M., A.M., S.Y., S.S. and M.Z.). Dataset collection was funded by: Office of Naval Research grant nos. N000141410533 and N00014-15-1-2234 (G.L.), HHMI and NIH grant no. 2R01HD082131 (M.M.R. and C.D.); NIH grant no. 1R01NS116593-01 (M.M.R., C.D. and V.N.M.). We are grateful to M. Vidal for converting datasets. We thank the DLC community for feedback and testing. M.W.M. is the Bertarelli Foundation Chair of Integrative Neuroscience. The funders had no role in the conceptualization, design, data collection, analysis, decision to publish or preparation of the manuscript. ## Author information Authors ### Contributions Conceptualization was done by A.M. and M.W.M. Formal analysis and code were done by J.L., A.M. and M.W.M. New deep architectures were designed by M.Z., S.Y. and A.M. GUIs were done by J.L., M.W.M. and T.N. Benchmark was set by S.S., M.W.M., A.M. and J.L. Marmoset data were gathered by W.M. and G.F. Marmoset behavioral analysis was carried out by W.M. Parenting data were gathered by M.M.R., A.M. and C.D. Tri-mouse data were gathered by D.S., A.M. and V.N.M. Fish data were gathered by V.D.S. and G.L. The article was written by A.M., M.W.M. and J.L. with input from all authors. M.W.M. and A.M. co-supervised the project. ### Corresponding authors Correspondence to Mackenzie Weygandt Mathis or Alexander Mathis. ## Ethics declarations ### Competing interests The authors declare no competing interests. ## Peer review ### Peer review information Nature Methods thanks the anonymous reviewers for their contribution to the peer review of this work. Nina Vogt was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Extended data ### Extended Data Fig. 1 DeepLabCut 2.2 workflow. (a) Multi-animal DeepLabCut2.2+ workflow. (b) An example screenshot of the Refine Tracklet GUI. 
We show the ellipse similarity score (black line), hand-annotated ground truth switches in ID (blue) and, in orange, additional frames where the selected keypoint requires further examination. (c) Body part keypoint diagrams with names on the animal skeletons (see also Extended Data Fig. 2).

### Extended Data Fig. 2 Dataset characteristics and statistics.

For each dataset, normalized animal poses were clustered using K-means adapted for missing elements, and embedded non-linearly in 2D space via isometric mapping (Tenenbaum et al., 2000). Embeddings as well as representative poses are shown for the tri-mouse dataset (a). Counts of labeled keypoints (b) and the distribution of bounding box diagonal lengths (c). (d-l) show the same for the other three datasets. The Proximity Index (m) reflects the crowdedness of the various dataset scenes. Statistics were computed from the ground truth test video annotations. The mice and fish datasets are more cluttered on average than the pups and marmosets.

### Extended Data Fig. 3 Performance of various DeepLabCut network architectures.

(a) Overall keypoint prediction errors of the ResNet-50 and EfficientNet (B0/B7) backbones, and of DLCRNet at strides 4 and 8. Distributions of train and test errors are displayed as light and dark box plots, respectively. Box plots show the median, first and third quartiles, with whiskers extending past the low and high quartiles to ± 1.5 times the interquartile range. All models were trained for 60k iterations. n = independent image samples; train/test per dataset: 112/49 (tri-mouse); 379/163 (pups); 5,316/2,278 (marmosets); 70/30 (fish). (b) Images on held-out test data, where a plus indicates the human ground truth and a circle indicates the model prediction (shown for ResNet-50 with stride 8). (c) Marmoset identification train-test accuracy for various backbones.

### Extended Data Fig. 4 Discriminability of part affinity fields.

Within- (pink) and between-animal (blue) affinity cost distributions for all edges of the mouse skeleton with DLCRNet_ms5. The saturated subplots highlight the 11 edges kept to form the smallest, optimal part affinity graph (see Fig. 2b). This is based on the separability power of an edge, that is, its ability to discriminate a connection between two keypoints effectively belonging to the same animal from the wrong ones, as reflected by the corresponding AUC scores (listed at the top of the subplots).

### Extended Data Fig. 5 Average animal assembly speed in frames per second as a function of graph size.

Assembly rates versus graph size for the four datasets. Improving the assembly robustness via calibration with labeled data in large graphs incurs no extra computational cost at best, and a slowdown of 25% at worst; remarkably, it is found to accelerate assembly in small graphs. Relying exclusively on keypoint identity prediction results in average speeds of around 5,600 frames per second, independent of graph size. Three timing experiments were run per graph size (lighter colored dots) and averages are shown. Note that assembly rates exclude CNN processing times. The speed benchmark was run on a workstation with an Intel(R) Core(TM) i9-10900X CPU at 3.70 GHz.

### Extended Data Fig. 6 Performance on out-of-domain marmoset data.

(a) Example images from the original dataset, and example generalization test images. (b) Median RMSE and PCK (gray numbers) for the data and network (DLCRNet) as shown in Fig. 2a.
(c) Same, but on the generalization test images (n = 300). (d) Same, but per cage as shown in (a) (n = 30 test images per marmoset). Box plots show the median, first and third quartiles, with whiskers extending past the low and high quartiles to ± 1.5 times the interquartile range.

### Extended Data Fig. 7 Comparison of top-down methods with and without assembly.

(a) Schematic of the top-down method, with example images from the pup dataset; the method consists of first detecting individuals and then performing pose prediction on each bounding box (a plus is the human ground truth, and a circle is the bottom-up model (DLCRNet, stride 8, data-driven) prediction). (b) Performance (mAP) computed for the top-down method with and without PAFs and for the bottom-up method (baseline, data-driven), as also shown in Fig. 2d (PAF versus w/o PAF one-way ANOVA p-values: tri-mouse: 4.656e-11, pups: 3.62e-12, marmosets: 1.33e-28, fish: 1.645e-06). There were significant model effects across all datasets (one-way ANOVA p-values: tri-mouse: 4.13e-11, pups: 4.59e-25, marmosets: 3.04e-40, fish: 1.18e-14). (c) Example predictions within the smaller images (that is, bounded crops) from the top-down model (that is, w/PAF), and bottom-up predictions (full images, as noted).

### Extended Data Fig. 8 Performance of idtracker.ai.

(a) Segmented regions (red) overlaid on an example image in the idtracker.ai GUI, illustrating how idtracker.ai fails to segment only the mice in the full data frame for the tri-mouse dataset, and (b) for the marmoset dataset. (c) Using the ROI selection feature, we could find mostly just the mice. However, due to the inhomogeneous lighting, the segmentation is not error-free. (d) Result of a grid search to find optimal parameters for idtracker.ai, with MOTA scores on the same videos as shown in Fig. 3a,e; one-sided, one-sample t-tests indicated that idtracker.ai performed significantly worse than DeepLabCut on both datasets (tri-mouse: T=-11.03, p=0.0008, d=5.52; marmosets: T=-8.43, p=0.0018, d=4.22).

### Extended Data Fig. 9 Parameter sensitivity: evaluation of the number of body parts, frames and PAF sizes.

(a) The number of keypoints affects mAP; evaluated with ResNet-50 stride 8 on the two datasets with the most keypoints originally labeled, by subsampling the keypoints. [Mouse: snout/tailbase (2) + leftear/rightear (4) + shoulder/spine1/spine2/spine3 (8) vs. full (12); Marmoset: Middle/Body2 (2) + FL1/BL1/FR1/BR1/Left/Right (8) + front/body1/body3 (11) vs. full (15)] (b) Identity prediction is not strongly affected by the number of keypoints used (same experiments as in (a), but for identity). (c) Impact of graph size, and of randomly dropping edges, on performance. (d) Test performance on 30% of the data versus training set size (as a fraction of 70%) for all four datasets.

## Supplementary information

### Supplementary Information

Supplementary Note 1 and Tables 1-8.

### Supplementary Video 1

Predictions with DLCRNet MS5_ss-s4 for the tri-mouse video clip.

### Supplementary Video 2

Predictions with DLCRNet MS5_ss-s4 for the parenting video clip.

### Supplementary Video 3

Predictions with DLCRNet MS5_ss-s4 for the marmoset video clip.

### Supplementary Video 4

Predictions with DLCRNet MS5_ss-s4 for the fish video clip.

### Supplementary Video 5

Predictions with idtracker.ai on tri-mouse and marmoset data.

## Rights and permissions

Reprints and Permissions

Lauer, J., Zhou, M., Ye, S. et al. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nat Methods 19, 496-504 (2022).
https://doi.org/10.1038/s41592-022-01443-0
2022-05-16 10:00:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37758076190948486, "perplexity": 3702.7418577567328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00205.warc.gz"}
https://math.stackexchange.com/questions/948042/maximum-product-for-multisets-with-same-sum/948048
# Maximum product for multisets with same sum

Given a positive number N, among all multisets (containing only positive numbers) with sum N, is there a reliable method for determining the set with the maximum product? For example, for N = 5, the following are some possible sets, with their products (i.e. all their elements multiplied together).

{ 5 } : product = 5^1 = 5
{ 1, 1, 1, 1, 1 } : product = 1^5 = 1
{ 0.5, 1, 1.5, 2 } : product = 1.5
{ 2.5, 2.5 } : product = 6.25

I'm looking for a reliable method, for any given N, to find the multiset which produces the maximum product.

## 1 Answer

Based on the comments, you want to maximise $f(n)=\left(\dfrac{N}n\right)^n$ over $n \in \mathbb N$: for a fixed number of elements $n$, the product is maximized when all elements are equal to $N/n$ (by the AM-GM inequality). You may then note that the corresponding continuous function is unimodal with a maximum at $x = N/e$. Hence, look for the maximum among $\lfloor N/e\rfloor$ and $\lceil N/e \rceil$.

• $N/n$ where $n$ is the number of elements. – Macavity Sep 27 '14 at 11:41
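A short sketch of this recipe in Python (our illustration):

```python
import math

def best_multiset(N):
    """Multiset of n equal parts N/n maximizing the product (N/n)**n;
    the optimum lies at floor(N/e) or ceil(N/e)."""
    candidates = [n for n in {math.floor(N / math.e), math.ceil(N / math.e)} if n >= 1]
    n = max(candidates, key=lambda n: (N / n) ** n)
    return [N / n] * n, (N / n) ** n

print(best_multiset(5))  # ([2.5, 2.5], 6.25), matching the example above
```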
2019-10-14 15:26:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7372333407402039, "perplexity": 138.62550612041187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00258.warc.gz"}
https://www.omnicalculator.com/physics/angle-of-incidence
# Angle of Incidence Calculator

Created by Rahul Dhari. Last updated: Feb 02, 2023.

The angle of incidence calculator uses Snell's law to determine the incident angle for a light ray. The accompanying article explains what the incident angle is and how to calculate it.

## What is angle of incidence?

The angle of incidence refers to the angle at which a light ray strikes the surface of a medium. As per Fermat's principle, a light ray always travels the path that takes the least time. The speed of light varies with the medium, so light changes direction when moving from one medium to another. This phenomenon is refraction. Not just light: other types of waves, like sound and water waves, also exhibit refraction. Snell's law, or the law of refraction, helps us determine the change in direction by relating the angle of incidence to the angle of refraction. The angle of refraction is the angle by which the light ray has changed direction upon entering the medium. Consider a light ray incident upon a medium at an angle $\theta_1$; the angle of refraction $\theta_2$ satisfies:

$\sin{\theta_1} = \sin{\theta_2} \frac{n_2}{n_1}$

where:
• $n_1$ - Refractive index of medium 1; and
• $n_2$ - Refractive index of medium 2.

In other words, the ratio of the sines of the incident and refraction angles equals the inverse ratio of the refractive indices.

## Using the angle of incidence calculator

Let's calculate the angle of incidence for a light ray refracting out of a water body at 45° to understand how to calculate the angle of incidence.

1. Select air as medium 1.
2. Select water as medium 2.
3. Enter the angle of refraction as 45°.
4. The angle of incidence is:

\qquad \scriptsize \begin{align*} \theta_1 &= \sin^{-1} \left ( \sin{ \theta_2} \frac{n_2}{n_1} \right ) \\ \theta_1 &= \sin^{-1} \left ( \sin{45^\circ} \times \frac{1.333}{1.000273} \right ) \\ \theta_1 &= 70.4442^\circ \end{align*}

## FAQ

### How do I calculate angle of incidence?

To calculate the angle of incidence:

1. Find the refractive indices of the two media involved.
2. Divide the refractive index of the second medium by the refractive index of the first medium.
3. Multiply the quotient by the sine of the angle of refraction, then take the inverse sine of the result to obtain the incident angle.

### What is the refractive index of air?

The refractive index of air is 1.000273 at standard temperature and pressure. This value is slightly higher than the refractive index of vacuum, which is set to 1 by definition.

### What is the angle of incidence for light passing through glass and refracted to 30°?

The angle of incidence for the light ray, in this case, is 49.4459°, taking the refractive indices of air and glass as 1.000273 and 1.52, respectively: sin(θ1) = sin(θ2) × n2 / n1 = sin(30°) × 1.52 / 1.000273 ≈ 0.7598, so θ1 = arcsin(0.7598) ≈ 49.4459°.
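The worked examples above reduce to a single application of Snell's law; a minimal sketch (the function name and the total-internal-reflection guard are our additions):

```python
import math

def angle_of_incidence(theta2_deg, n1=1.000273, n2=1.333):
    """Incident angle from the refraction angle via Snell's law:
    sin(theta1) = sin(theta2) * n2 / n1. Defaults model air -> water."""
    s = math.sin(math.radians(theta2_deg)) * n2 / n1
    if s > 1:
        raise ValueError("no real incident angle for these inputs")
    return math.degrees(math.asin(s))

print(round(angle_of_incidence(45), 4))           # 70.4442 (water example)
print(round(angle_of_incidence(30, n2=1.52), 4))  # 49.4459 (glass example)
```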
2023-02-03 07:57:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.955230712890625, "perplexity": 867.4490000706289}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00330.warc.gz"}
http://tex.stackexchange.com/questions/32751/placing-equation-numbers-on-the-right/32753
# Placing equation numbers on the right

I am new to TeX and am using ScribTeX to write a maths paper. When I use the following code, the equation number is automatically displayed on the left instead of on the right, as is usually done. What is the problem, and how do I fix it? Thanks for your time.

```latex
\documentclass[12pt,oneside]{amsart}
\usepackage{amssymb}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\begin{document}
Let us consider a spherically symmetric metric in curvature coordinates:
\begin{equation}
\label{metric}
ds^2=-e^\lambda dr^2-r^2(d\theta^2+\sin^2\theta\, d\phi^2)+e^\nu dt^2
\end{equation}
\end{document}
```

The output I get is: *(screenshot showing the equation number on the left)*

## 1 Answer

Add the reqno document class option to your current file:

```latex
\documentclass[12pt,oneside,reqno]{amsart}
```

This will override the default left equation numbering scheme of the amsart document class. The following, taken from the AMS document class documentation, provides some background (p 7):

> Equation numbering on the left or right. The option leqno - equation numbers on the left - is the default in AMS styles. Therefore we provide also a reqno option.

Comments:

- leqno and reqno are options of all document classes. I think your answer is too localized to amsart. – Marco Daniel, Oct 26 '11 at 11:31
- @MarcoDaniel: I don't read the answer as implying that leqno applies only to the amsart package; the answer was merely taking the OP's specific example and showing how to add the required option. – Mico, Oct 26 '11 at 12:04
- @Mico: The answer is correct. My sentence was based on the part "default left equation numbering scheme of the amsart". – Marco Daniel, Oct 26 '11 at 12:34
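Putting the question and answer together, a minimal complete document that should produce right-hand equation numbers looks like this (an untested sketch assembled from the thread, not a post from it):

```latex
\documentclass[12pt,oneside,reqno]{amsart} % reqno moves equation numbers to the right
\usepackage{amssymb}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\begin{document}
Let us consider a spherically symmetric metric in curvature coordinates:
\begin{equation}
\label{metric}
ds^2=-e^\lambda dr^2-r^2(d\theta^2+\sin^2\theta\, d\phi^2)+e^\nu dt^2
\end{equation}
\end{document}
```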
2014-07-12 12:39:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.935869038105011, "perplexity": 1721.0117237290474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432978.12/warc/CC-MAIN-20140707234032-00010-ip-10-180-212-248.ec2.internal.warc.gz"}
https://starbeamrainbowlabs.com/blog/?tags=Review
## Orange Pi 3 in review

I recently bought an Orange Pi 3 (based on the Allwinner H6 chipset) to perform a graphics-based task, and I've had an interesting enough time with it that I thought I'd share my experiences in a sort of review post here.

The first problem when it arrived was to find an operating system that supports it. My initial thought was to use Devuan, but I quickly realised that practically the only operating system that supports it at the moment is Armbian. Not to be deterred, after a few false starts I got Armbian based on Ubuntu 18.04 Bionic Beaver installed.

The next order of business was to install the software I wanted to use. For the most part, I didn't have too much trouble with this - though it was definitely obvious that the arm64 (specifically sunxi64) architecture isn't a build target that's often supported by apt repository owners. This wasn't helped by the fact that apt has a habit of throwing really weird error messages when you try to install something that exists in an apt repository, but for a different architecture.

After I got Kodi installed, the next order of business was to get it to display on the screen. I managed this (eventually) with the help of a lot of tutorials and troubleshooting, but the experience was really rather unpleasant. I kept getting odd errors, like `failed to load driver sun4i-drm` when trying to start Kodi via an X11 server, and other strangeness. The trick in the end was to force X11 to use the `fbdev` driver, but I'm not entirely sure what that means or why it fixed the issue.

Moving on, I then started to explore the other capabilities of the device. Here, too, I discovered a number of shortcomings in the software support provided by Linux, such as a lack of support for audio via HDMI and Bluetooth. I found the status matrix of the SunXI project, which is the community working to add support for the Allwinner H6 chipset to the Linux kernel.
They do note that support for the H6 chipset is currently under development and is incomplete at the moment - and I wish I'd checked on software support before choosing a device to purchase.

The other big problem I encountered was a lack of kernel headers provided by Armbian. Normally, you can install the headers for your kernel by installing the linux-headers-XXXXXX package with your favourite package manager, where XXXXXX matches the string in the linux-image-XXXXXX package that contains the kernel itself (on many Debian-based systems, something like `sudo apt install linux-headers-$(uname -r)` does the trick). Since Armbian doesn't provide these packages, you can't compile any software that calls kernel functions yourself without the associated header files - preventing you from installing various dkms-based kernel modules that automatically recompile against the kernel you've got installed. I ended up finding a forum thread about this, but the response from someone who I assume is an Armbian developer was less than stellar - they basically said that if you want kernel headers, you need to compile the kernel yourself! That's a significant undertaking, for those not in the know, and certainly not something that should be undertaken lightly.

While I've encountered a number of awkward issues that I haven't seen before, the device does have some good things worth noting. For one, it actually packs a pretty significant punch: it's much more powerful than a Raspberry Pi 3B+ (of which I have one; I bought this device before the Raspberry Pi 4 was released). This makes it an ideal choice for more demanding workloads which a Raspberry Pi wouldn't quite be suitable for.

In conclusion, while it's a nice device, I can't recommend it to people just yet. Software support is definitely only half-baked at this point, with some glaring holes (HDMI audio is one of them, and it doesn't look like it's coming any time soon). I think part of the problem is that Xunlong (the company that makes the device and others in its family) don't appear to be interested in supporting the community at all, choosing instead to dump custom low-quality firmware for people to use as blobs of binary code (which apparently doesn't work) - which causes the SunXI community a lot of extra work to reverse-engineer it all and figure out how it works before they can start implementing support in the Linux kernel.

If you're interested in buying a similar embedded board, I can recommend using HackerBoards to find one that suits your needs. Don't forget to check for operating system support!

Found this interesting? Thinking of buying a board yourself? Had a different experience? Comment below!

## Solo hardware security key review

Sometime last year (I forget when), I backed a Kickstarter that promised the first open-source hardware security key that supports FIDO2. Since the people behind the Kickstarter have done this before for an older standard, I decided to back it. Last week the keys finally arrived, and the wait was totally worth it! I got one with a USB Type-C connector (in yellow below), and one with a regular Type-A connector that also supports NFC (in red, for using with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it. In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it correctly, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, collectively known as FIDO2. There are two important protocols here: WebAuthn, which the browser provides to web pages so that they can interact with hardware devices, and CTAP2, which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).
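To give a flavour of the WebAuthn half of that chain, here is a heavily trimmed JavaScript sketch of what a web page does to ask a key like the Solo to sign a login challenge; in a real deployment the challenge and credential id come from the server, and error handling is omitted:

```js
// Ask the browser (which talks CTAP2 to the key) to have a connected
// authenticator sign a server-provided challenge.
async function signInWithSecurityKey(challengeBytes, credentialIdBytes) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeBytes,       // Uint8Array from the server
      allowCredentials: [{
        type: "public-key",
        id: credentialIdBytes,         // credential id registered at sign-up
        transports: ["usb", "nfc"],    // the Solo supports both
      }],
      userVerification: "preferred",
      timeout: 60000,
    },
  });
  // assertion.response.signature is then sent to the server for verification
  return assertion;
}
```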
FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. In those that do, it isn't always enabled by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, but I think these will be set by default in a coming update), or they incorrectly 'detect' support by sniffing the user-agent string (cough, I'm looking at you, GitHub and Facebook, cough).

Despite this, when it is supported, it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source, you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into WebAuthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.

## ASP.NET: First Impressions

Admittedly, I haven't really got too far into ASP.NET (Core). I've only gone through the first few tutorials or so, but based on what I've found so far, I've decided that it warrants a full first-impressions blog post.

ASP.NET is fascinating, because it takes design goals centred around developer efficiency and combines them with the likes of PHP to provide a framework with which one can write a web server. Such a combination makes for a promising start - providing developers with everything they need to rapidly create a web-based application that's backed by any one of a number of different types of database.

Part-and-parcel with the ASP.NET library comes Entity Framework. Its purpose is to provide an easy mechanism by which developers can both create and query a database. I haven't really explored it much, but it appears to perform this task well. If I were to criticise it, I'd probably say that the existing tutorials on how to use it are far too Windows- and Visual Studio-oriented. Being a Linux user, I found it somewhat of a challenge to wade through the large amount of Visual Studio-specific parts of the tutorial and piece together how it actually works - independently of the automatic code generators built into Visual Studio.

This criticism, I've found, is a running theme throughout ASP.NET and ASP.NET Core.
Even the official tutorials (which, although they say you can use Visual Studio Code on macOS and Linux, don't actually make any accommodations for users of anything other than Visual Studio) lean heavily on the inbuilt code and template generators - choosing to instruct you on how to make the absolute minimum number of changes to the templates provided in order to achieve the goal of the tutorial. This, unfortunately, leaves the reader wondering precisely how ASP.NET Core works under the hood. For example, what does services.AddDefaultIdentity<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>(); do? Or what's an IdentityUser, and how do I customise it? Why isn't ASP.NET just a NuGet package I can import? None of these things are explained.

Being the kind of person who works from the ground up, I'm increasingly finding the "all that matters is that it works" approach taken by ASP.NET - ironically enough, to ease the experience for developers new to the library - rather frustrating. For me, it's hard to work with something if I don't understand what it does and how it works, so a tutorial that leans heavily on templates and scaffolding (don't even get me started on that) is confusing and unhelpful.

To an extent, I can compare my experience starting out with ASP.NET to my experience starting out with Android development in Java. Both experiences were rather painful, and both were unpleasant because of the large amount of pre-generated template code. Having said this, in Java's case there was the additional pain of learning a new language (even if it is similar to C♯), and the irritation of keeping a constant balance between syntax errors from not catching an exception, and being unable to find a bug because it's actually an exception that's been eaten somewhere I can't see.

Although ASP.NET doesn't have terrible exception handling rules, it does have its fair share of issues. Its equivalent, I guess, would be the number of undocumented and difficult-to-search bugs and issues one encounters when setting it up for the first time - both on Windows (with Microsoft's own Visual Studio!) and on Linux (though, to be fair, it's only .NET Core that has issues here). The complexity of the system and the lack of decent tutorials and documentation result in a confusing and irritating experience when trying to get it to work (especially on Windows).

In conclusion, I'm finding ASP.NET to be a failed attempt at bringing the amazing developer efficiency of .NET to web development, though I suspect that this is largely down to me being inexperienced with it. Hampered by unhelpful tutorials, opaque black-boxed frameworks with blurred lines between library and template (despite the fact that it's apparently open-source), and heavy tie-ins with Visual Studio, I think I'll be using other technologies such as Node.js to develop web-based projects in the future.

## Redis in Review: First Impressions

(Above: The Redis logo backed by a voronoi diagram generated by my specially-upgraded voronoi diagram generator :D)

I've known about Redis for a while now - but I've never gotten around to looking into it until now. A bit of background: I'm building a user management panel thingy for this project (yes, again. I'll be blogging about it for a bit more yet before it's ready).
Since I want it to be a separate, and optional, component that you don't have to run in order to use the main program, I decided to make it a separate program in a different language (Node.js is rather good at doing this sort of thing, I've found).

Anyway, whilst writing the login system, I found that I needed a place to store session tokens such that they persist. I considered a few different options here:

1. A JSON file on disk - This wouldn't be particularly scalable if I found myself with lots of users. It also means I've got to read in and decode a bunch of JSON when my program starts up, which can get mighty fiddly. Furthermore, persisting it back to disk would also be rather awkward and inefficient.
2. An SQL database of some description - Better, but still not great. The setup for a database is complicated, and requires a complex and resource-hungry process running at all times. Even if I used SQLite, I wouldn't be able to escape the complexities associated with database schemas - which I'd likely end up spending more time fiddling with than actually writing the program itself!

Then I remembered Redis. At first, I thought it was an in-memory data storage solution. While I was right, I discovered that there's a lot more to Redis than I first thought. For one, it does actually allow you to persist things to disk - and it's very transparent about it too, allowing a great deal of control over how data is persisted (also, this post is a great read on the subject, even if it's a bit old now - it goes into a great deal of depth about how the operating system handles and caches writes to disk).

When it comes to actually storing your data, Redis provides just a handful of delightfully simple data structures for you to model your data with. Unlike a traditional SQL-based database, storing data in Redis requires a different way of thinking. Rather than considering how data relates to one another and trying to represent it as a series of interconnected tables, in Redis you store things in a manner that's much closer to that which you might find in the program that you're writing.

The core of Redis is a key-value store - rather like localStorage in the browser. Unlike localStorage though, the following data types are provided:

- A simple string value
- A hash-table
- A list of values (which can also be treated as a queue)
- A set of unique strings (which can be optionally sorted)
- Binary blobs (called bit arrays)
- A HyperLogLog (apparently a probabilistic data structure of some kind)

The official Redis tutorial on the subject does a much better job of explaining this than I could (I have only been playing with it for about a week, after all!), so I'd recommend you read it if you're curious.

All this has some interesting effects. Firstly, the data storage system can be more closely representative of the data structures it is storing, allowing for clever data-specific optimisations to be made in ways that simply aren't possible with a traditional SQL-based system. To this end, a custom index can be created in Redis via simple values to accelerate the process of finding a particular hash-table given a particular piece of identifying information.

Secondly, it makes Redis very versatile. You could use it as a caching layer to store the results of queries made against a traditional SQL-based database. In that scenario, no persistence to disk would be required, to make it as fast as possible.
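To make that concrete, here's a minimal sketch of the expiring-keys pattern (which the next paragraph picks up for session tokens), assuming the classic node_redis v3 callback API; the key names, TTL, and payload are illustrative only:

```js
const redis = require("redis");
const client = redis.createClient(); // assumes Redis on localhost:6379

const token = "abc123"; // illustrative session token

// SETEX = SET with an EXpiry: Redis deletes the key itself after 3600s,
// so expired sessions never need manual cleanup.
client.setex(`session:${token}`, 3600, JSON.stringify({ userId: 42 }), (err) => {
  if (err) throw err;
});

// Later: GET returns null once the key has expired
client.get(`session:${token}`, (err, json) => {
  if (err) throw err;
  console.log(json === null ? "expired" : JSON.parse(json));
});
```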
You could use it for storing session tokens and short-lived chunks of data, utilising the inbuilt EXPIRE command to tell Redis to automatically delete certain keys after a specified amount of time. You could use it to store all your data in and do away with an SQL server altogether. You could even do all of these things at the same time!

When storing data in Redis, it's important to make sure you've got the right mindset. I've found that despite learning about the different commands and data structures Redis has to offer, it's taken me a few goes to figure out how I'm going to store my program's data in a way that both makes sense and makes it easy to get back out again. I'm certain that it'll take quite a bit of practice (and trial and error!) to figure out what works and what doesn't.

It's not the kind of thing that would make you want to drop your database system like a hot potato and move everything to Redis before the week is out. But it is a tool to keep in mind that can be used to solve a class of problems that SQL-based database servers have traditionally been very bad at handling - in particular, high-throughput systems that need to store lots of mainly short-lived data (e.g. session tokens).

## #movingtogitlab: What's up, Thoughts, and First Impressions

You've probably heard by now that GitHub has been bought by Microsoft. It was certainly huge news at the time! While I did tweet about it at the time (also here too), I've been waiting until I've gotten all of the repositories I've decided to move over to GitLab settled in before blogging about the experience and sharing my thoughts. While I've got some of them settled in (you can tell which ones I've done because they are public on GitLab and I've deleted the GitHub repository in most cases), I've still got a fair few to go - there should be 13 in total on GitLab once I'm finished. Since it's taking me longer than I anticipated to properly update them after the transfer (which in and of itself was actually really quick and painless - thanks, GitLab!), I thought I'd blog about the experience so far - as it's probably going to be a little while before I've got all my repositories sorted :P

### What's up?

Firstly, Microsoft. Buying GitHub. Who'd have thought it? I know that I was certainly surprised. I even had to check multiple websites to triple-check that I wasn't seeing things.... GitHub have certainly done an excellent job of hiding the fact that they were looking for a buyer.

Upon further inspection, it appears that the issues GitHub have been facing are twofold. Firstly, they've been without a boss of the company for quite a while, and from the way things are looking, they are having a little bit of trouble finding a new one. Secondly, there's been talk that they haven't been making enough money.

Let's back up a second. How does GitHub actually make money? Good question. The answer is surprisingly simple: GitHub Enterprise - a version of GitHub for businesses that they can host on their own servers for a significant fee. From what I can tell, it's targeted at medium-to-large companies. If they aren't making enough money to sustain themselves, then something obviously needs to be done about that, or the github.com service that open-source developers the world over rely on could suddenly vanish overnight O.o!

This presents GitHub with a very awkward problem. In this case, they chose to seek a buyer, who could then perhaps help them to sell GitHub Enterprise better.
Looking at the alternative companies, I think that GitHub could have done a lot worse than Microsoft. Alternatives include Apple (who operate a gigantic walled garden in macOS and iOS), IBM (International Business Machines, who sell big and powerful mainframes to other big and powerful businesses - most of the time we, or I at least, don't hear from them unless they are showing off some research or other that they've just completed), and Facebook (who don't have a particularly great reputation right now). The problem is that GitHub need a big company with lots of money, because they are also a big company (so they'll be worth a lot of money, like it or not - and in order to sort out the money-making issue, they'll need cash in the meantime).

Microsoft have been quite active in the open-source scene of late, and seem to really have changed their tune in recent years, with their open-sourcing of large parts of the .NET framework - and their code editor, Visual Studio Code. While they aren't the best at naming things (I'm looking at you, .NET Core / .NET Standard / .....), they do appear to be more in line with GitHub's goals and ethics than the alternative companies discussed above.

### What's the problem?

So why do I have a problem with this deal? Clearly, Microsoft are the best of the bunch to have bought GitHub. It could be a lot worse. Microsoft have even announced that GitHub can continue to operate as a separate entity. The new boss of GitHub did an Ask Me Anything on Reddit, and he seems like a really cool guy who genuinely wants the best for GitHub.

Well, it's complicated. For me, it all boils down to independence. GitHub is (or was) an independent company. To that end, they can make their own decisions, and aren't being told what to do - or at risk of being told what to do - by someone else. They aren't being influenced by anyone else. That's not to say that GitHub will now be influenced by Microsoft (though they probably will be), and it's not to say that that's necessarily a bad thing. Coupled with some 85 million open-source repositories, 28 million developers using the service, and 1.8 million businesses utilising github.com, it's a huge responsibility.

In order to effectively serve the community that has grown around github.com, I feel that GitHub has to remain impartial. It's absolutely essential. The number of different workflows, tools, programs, operating systems, and more that those 28 million developers use will be staggering - and I feel that only a completely independent GitHub will truly be able to meet the needs of that community. Microsoft is great for some things, but taking on GitHub is asking them to actively support users of one of their services who use their competitors' software, such as Linux - or devices running macOS.

### What's the alternative then?

That's a huge ask, and I guess only time will tell whether they are able to pull it off. I find myself asking, though: what's the alternative? If they are losing money as the rumours say, then what could they do about it? Obviously, I don't and can't know everything that's going on inside GitHub - I can only read marketing-y web articles, theorise, and make guesses. I'm sure they've tried it, but I don't see why they couldn't enter a partnership with Microsoft instead. How about a partnership in which Microsoft helps sell GitHub Enterprise, and they get a cut of the profits in return?
Microsoft have a much bigger base of enterprisey companies, so they'd be much better placed to do that part of things, and GitHub would be able to retain its independence.

### Where does GitLab fit into this puzzle?

All of this brings me to GitLab. Similar to GitHub, GitLab offers free code hosting on gitlab.com, along with free continuous integration. Unlike GitHub though, GitLab's source code is open - and you can download and run your own instance if you like! They sell support packages to businesses, thereby making enough money to support the continued development of GitLab. I think that they sell a few additional features with GitLab Enterprise too - but they aren't anything that the average user would want, only businesses. They also do a free package for students and open-source developers - all whilst staying an independent company. Very cool.

As of today, I'm moving a total of 13 of my smaller repositories to GitLab. My reasoning here is that I really don't want to keep all my eggs in one basket. By moving some repositories to GitLab, I can ensure that if one or the other goes in a direction I seriously object to, I've got an alternative that I'm familiar with that I can move all my repositories to in a hurry.

I've used GitLab before. Last academic year (2016 / 2017) I interned at a local company, where I helped move them over to Git as their primary version-control system. To that end, they chose GitLab as the server they'd use for the job - and so I got quite a bit of experience with it! I'd use it for my personal Git server, but unfortunately it uses far too many resources - which I can't currently dedicate to it full-time - so I use Gitea, a lighter-weight alternative.

### How does GitLab compare?

Back to the matter at hand. The experience with gitlab.com and my open-source repositories has been a hugely positive one so far! The migration process itself was completely painless - it just took a while, since there were thousands of other developers doing the exact same thing I was :P Updating my repositories and adjusting all the links & references has been a tedious process so far - but that's nothing that GitLab can really help me with - I've gotta do it myself :-)

GitLab's documentation has been really helpful for those times when I've needed some assistance to figure something out, with plenty of GitLab CI examples available to get me started.

If there's one thing I'd change, it's the releases (or tags) system. The system itself is fine, but the way the tags are presented is horrible. The text doesn't even wrap properly! A /latest shortcut URL would be welcome too. You can't attach release binaries to tags either, which is also rather annoying. There is a workaround though - you can link directly to the GitLab CI (continuous integration) artifacts instead, which seems to work just fine for my purposes so far. If it doesn't work so well in the future, I'm probably going to have to find an alternative solution for hosting release binaries - at least until they implement the feature (if at all).

Other than that, the interface, while very different to GitHub's, does feel appropriately polished and suitably easy to use. If anything, I feel as though it's rather difficult to explore the existing repositories on GitLab - I hadn't realised until using GitLab just how useful the new repository tags GitHub have implemented are in exploring others' repositories.
I'm sure that there's a third-party website that enables one to explore the repositories on gitlab.com much more effectively - I just haven't found it yet. The sidebar when viewing a repository is quite handy - but a little unwieldy at times. Similarly, the 2-3 layers of navigation directly below the repository tagline are also rather unwieldy, and it's difficult to tell their differing functions apart. Perhaps a rethink here would bring these two different parts of the user interface together in a more cohesive and usable fashion?

All in all, gitlab.com is a great service for hosting my open-source repositories. The continuous integration services provided are superb - and completely unparalleled in my (admittedly limited) experience. The interface is functional, for the most part, and gets the job done - there are just a few rough edges in places that could do with a slight rethink.

Enjoyed this? Have your own opinion about what's happened to GitHub? Found a better service? Comment below!

## Markdown editors compared

If you didn't know already, I write all my blog posts here in Markdown. I've used several markdown editors over the years (wow, it's strange to write that), and I thought I'd talk a little bit about the ones I've used, what I liked about them (and the things I didn't), and what my current preference is.

Firstly though, why would you want one? Couldn't you just use a regular text editor like Notepad++? Well, yes - but a dedicated editor has several benefits: proper spell-checking for one, a live preview for another, and other nice features that make the experience just that little bit better (I'm even writing one of my reports for University in Markdown, and I have to say that the experience is much more pleasurable than using Microsoft Word :P).

I like Markdown itself rather a lot too. Invented by John Gruber over on daringfireball.net, Markdown is a simple markup language inspired by the things that people already do in instant messaging and other text-based mediums. It's designed to be both easy to read and understand on its own, and easy to write in - such that it doesn't break your flow as a writer by making you look up how to apply that particular bit of formatting (I find myself having to do that with LaTeX and others a lot).

(Above: A screenshot of StackEdit.)

The first contender up is StackEdit. It's an in-browser offering which saves its data to your local machine (or the cloud). It comes with a number of nice features - apart from not having to install it, of course - such as synchronised scrolling in the live preview, and a 'publish' button to send your document to a number of different destinations automatically. Since I used it last (which was quite a while ago, actually), it appears to have received a sizeable update, making the user interface more polished and aesthetically pleasing, and adding a toggleable folder structure on the left-hand side, amongst other things. If you can't install anything or run portable programs from a flash drive, StackEdit would be my recommendation.

(Above: A screenshot of Classeur.)

Next up on my list is Classeur. It's another browser-based offering, with many of the same features, just with a different UI. When I discovered it I was using StackEdit, and at the time the interface of Classeur was vastly superior. The main thing I don't like about it is that it's 'freemium' - meaning that you get to keep about 100 documents on it, and then you either have to delete something or pay.
While Markdown files are just text documents I can keep on my computer, if I'm going to use a browser-based solution I would prefer to keep them all in the same place (though I never did hit this limit :P).

More recently, now that I've got a travel laptop running Linux (and not Chrome OS - as nice as that was, I ended up outgrowing it), I've been using ghostwriter. It's a desktop application for both Windows and Linux. While it doesn't have synchronised scrolling for the live preview as StackEdit does, it allows you to save your text files to your local disk (or other mounted partition!) and open them as you would a spreadsheet or other file - in a way that you can't with a browser-based tool. The interface is also highly customisable - if you don't like the built-in themes, you can write your own. You can write your own stylesheet for exported documents too. In addition, it automatically detects the multiple different markdown renderers that you may or may not have installed, allowing you to switch between them (and the inbuilt sundown processor) at will to get the exported document (e.g. HTML, PDF, EPUB, etc.) looking just the way you want it to.

For me, you can't beat the feeling of a native desktop application, so currently ghostwriter is my markdown editor of choice. If I can't use ghostwriter, I'll probably use StackEdit, with Classeur coming at the bottom of the pile.

If you're thinking of doing some writing, I'd highly suggest considering a proper markdown editor such as the ones I've mentioned here. If you're not familiar with Markdown, fear not! It's easy to learn, and all three of the editors featured here include a quick-reference guide sidebar (or floating window) that you can enable to help you along.

Found this useful? Got a different editor of choice? Comment below!

## Java: First Impressions

(Above: The Android, Android Studio, and Java logos. I don't own any of these - nor is this post endorsed by any of the entities represented here - they are just for illustrative purposes.)

I've been using Java pretty extensively recently, as I've been doing a module on Android development at University. It's a pretty interesting language, so I thought I'd share my first impressions here. Later on, in a separate post, I'll also talk a little bit about Kotlin, the new language Google are championing for development on the Android platform.

Firstly, Android Studio has made it really easy to get started. The code hinting / autocompletion is fairly intelligent, and provides enough support that it's not too much of a bother programming in an environment that you've never seen before - lessening the burden of learning a new language.

It seems to me that the whole build process for Java applications has been greatly overcomplicated, though. It's slow, and keeps throwing random errors - especially when I've only just opened Android Studio. This non-determinism proves especially challenging for beginners such as myself, as sometimes there's no real way to know what's gone wrong (the error messages are not particularly helpful - I've seen several languages with much more helpful ones). There also seem to be a bunch of assumptions that the developers have made about the user's setup and programming style, leading to confusing situations in which it doesn't work - with no real way to know why, as there aren't any obvious error messages.

Despite this, Java as a language has some interesting features.
As a whole, I can definitely see where Microsoft got their inspiration for C♯ from, as it's very similar - just without a lot of the syntactical sugar I'm used to in C♯ that makes expressing complex data structures and algorithms much easier, such as getters and setters.

Particularly of note is the exception system. In Java, if you want to throw a (checked) exception, you have to add throws ExceptionName to the method signature. Since your main activity in Android contains overridden methods at the top level, this means that you have to use lots of try..catch blocks to trap exceptions and deal with them before they bubble up to higher levels - otherwise it's a compilation error! While this can be helpful, I've found that it can lead to awkward bugs in which an exception is eaten higher up, and the default value returned by the method that eats the exception causes strange things to happen that aren't immediately obvious - and it's only when you check the log that you realise what happened.....

The other bothersome thing I've found is the deeply-nested folder structure that a Java project appears to generate for even the simplest of projects. This makes it a rather difficult and involved process to find any code outside of the IDE - which I often do, because Android Studio is far too slow and bulky just to check on or reference something quickly.

Finally, the last issue that concerns me is the licensing trouble that has plagued Java in recent years. If you haven't heard, Google and Oracle (the company that owns Java) have been in disagreement over licensing fees which Oracle claims Google should pay them because they used Java in the making of Android (which is an open-source project). If Oracle are going after Google over licensing fees for just using a language, then what does that say about any projects I do? It's not exactly confidence-inspiring, that's for sure. I, for one, will be keeping as much of my code library out of the Java ecosystem as possible.

Java seems to be the kind of language with a lot of history. While some of this has led to innovations that have ultimately improved the language, I feel that it's being bogged down by lots of bloat and unnecessary garbage that it could really do without. C♯ has done a brilliant job of cutting through this clutter and rubbish, creating a language that both works with you and is easy to understand (except .NET Standard and .NET Core, but that's a story for another time :P).

## Change the way you think: Languages and Compilers in Review

Change the way you think. I haven't seen it used recently, but when I first arrived at Hull University to start my degree, that's the phrase I saw on a number of posters about the place. I've been thinking about it a lot over the course of my degree - and I've found that it has rung true more than once. My understanding of programming has been transformed three times that I can count, and before I get to the Languages and Compilers review that this post is supposed to be about, I'd like to talk a little bit about that first.

The first time my understanding transformed was when I arrived in my first year. Up until that point, I'd been entirely self-taught - learning languages such as _GML_ and later Javascript. All of a sudden, I was introduced to the concept of _Object-Oriented Programming_, and suddenly I understood that by representing things as classes and objects, it was possible to build larger programs without having them fall apart because they were a nightmare to maintain.
Then, when I was finishing up my year in industry, my understanding was transformed again. I realised the value of the experiences I'd had while I was out on my year in industry - they not only showed me mechanisms by which a project can be effectively managed, but also gave me a bit more of an idea of what I'd like to do when I finish my degree.

Finally, I think that Languages and Compilers is the third time I've changed the way I think about programming. It's transformed my understanding of how programming languages are built, how their compilers and interpreters work, and why things are the way they are. It's also shown me that there's no best programming language - only the best one for the task at hand. With this in hand, it gives me the tools I need to pick up and understand a new language much more easily than I could before, by comparing its features to the ones that I already know about. I've found a totally new way of looking at programming languages: looking at them not on their own, but at how their lexical style and paradigms compare to those employed by other languages.

If you're considering whether a degree in Computer Science is worth it, I'd say that if you're serious about programming for a living, then the answer is a resounding yes.

## Virtual Reality: A Review

(Above: A considerable number of stereo 3D glasses technologies. Can you name all the techniques shown here? Comment below!)

Yesterday I spent a morning experimenting with my University's latest stereo equipment as part of the Virtual Environments module I've been taking this semester. With all that I've seen, I wanted to write something about my experiences on here.

Virtual reality and 3D are things that I haven't really had the chance to experience very often. In fact, the last time I was truly able to experience 3D was also through my University - probably at an open day (I can't remember). I've also never had the experience of using a controller before - which I'll talk about later. With this in mind, it was all a rather new experience for me.

The first tech we looked at was a stereo projector with active Nvidia shutter glasses. They work by using a variant of the LCD to block out each eye while the image for the other eye is being shown. To this end, they need to sync with the PC - hence their active nature - and this is the reason cinemas usually use clever circular polarising filters instead (especially since the screen must be running at a minimum of 120Hz to avoid sickness and provide a reasonable experience). Even so, the experience was quite amazing - even after seeing it once or twice before. With the additional knowledge about the way stereoscopic images are put together (using techniques such as parallax, and concepts such as depth cues and depth budget), I found that I could appreciate what was going on much more than I could previously.

The head tracking that was paired with the shutter glasses was absolutely fascinating. If you were sitting in the seats in front of the stage, you got a bunch of window violations and a pair of hurting eyes; when you were on the stage with the tracked glasses, it was a whole different story. It was literally like a window into another world - made all the more real by the projection onto the floor!

We also took a look at the cave, as it's colloquially known - a variant built from 4 panels of a cube, with pairs of projectors back-projecting onto each of the sides - with the same infrared-based head tracking technology.
This, too, was similarly cool - it has the ability to make you feel unsteady when looking down from the crow's nest of a large naval ship....

Though this is probably old news to most readers of this post, I found the idea of using an Xbox controller to move the user around quite a clever solution to the awkward issue that you can't walk around much yourself - unless you like walking into invisible boxes wireframed in black. It certainly felt more natural than using a keyboard, which would have felt bulky and out of place. I'll be paying more attention to controllers and other forms of alternative input when designing applications in future - as I've seen first-hand what a difference the appropriate form of input can make to the overall experience.

Until today, I'd also been rather skeptical of Microsoft's HoloLens. Sorting through all the Microsoft-speak and buzzwords is somewhat challenging - but the lectures we've had over the last 5 weeks helped with that :D The headset itself is actually plenty comfortable (especially compared to the Oculus Rift), and the head-tracking is astonishing - especially considering that it's all inside-out (as opposed to outside-in). The holograms really look like they're hovering in the environment around you - apart from the fact that they're clearly computer generated, of course - and the gestures are actually pretty intuitive, given how different the experience is to anything else I've tried before.

The biggest problem though, as you're probably aware, is the small field of view. It's offset slightly by the fact that you can see around the hologram-enabled area, but it still causes frequent window violations and only covers a fraction of your effective vision - which they don't appear to take any notice of in their marketing material (see the image below - the pair of people in the image can probably only see the very centre quarter of that thundercloud). If they can fix that, then I think they may have something truly world-changing. It could be used for all sorts of applications - especially in engineering, I think.

The sound system built into it was cool too - I didn't manage to check, but I'm pretty sure only I could hear it, and it sure didn't sound like it! In the tutorial it really sounded like there was a voice coming from all around me - which leads me to think it might be programmable such that the sound appears to come from anywhere in the room - so you might even be able to have a conversation with a holographic projection of someone standing on the table in front of you (like Microsoft's holoportation demo).

Finally, we took a look at some of the things that the department have been doing with the Oculus Rift. VR is an experience on a whole 'nother level - and best experienced for one's self (it's really important to remember to clean the lenses in the headset thoroughly, and spend some time aligning them precisely to your eyes, I found - otherwise everything will be blurry). I found the latter half of the (rather extensive) setup tutorial I went through later that day to test my ACW particularly immersive - to the point where you had to consciously remember where you were in the real world - personally, I had my leg just touching the edge of my chair to remind me! Though the audio wasn't as good as the HoloLens (see above), it was still adequate for the task at hand.
While I was running through the first-use setup tutorial, it was evident that it's quite clearly a Facebook product - in that you had to create an account (or sign in with Facebook), set privacy settings, and a few other things it hinted at during the setup (I was interested in testing the code I'd written, so I didn't explore the consumer side of the device). So if you're concerned about privacy, then the Oculus Rift is certainly not for you. Thankfully there are lots of other virtual reality headsets around to investigate instead :-)

The controllers made for an interesting experience too - they were a clever solution to the awkward problem that they couldn't track your hand as well as they'd need to in order to display it fully in virtual reality (Microsoft had it easy with the gestures for their HoloLens, apparently) - and they didn't end up breaking immersion too badly in the tutorial, roughly simulating your hand position based on which buttons and triggers you had pressed down. Definitely much better than a keyboard in this instance, since you couldn't even feel where the keyboard was in virtual reality - let alone find the keys to press - and that's not even mentioning the loss of movement and rotation you'd experience.

In conclusion, my views on stereo 3D, VR, and input methods have all been changed in a single day - which I think is pretty good going! Stereo 3D and virtual reality are never going to go away - the potential behind them is just far too tempting not to play around with. Designing applications for VR is going to be a challenge for many developers, I think - since an understanding of depth cues and immersion is essential to designing effective experiences that don't make you feel sick. We can't leave the real world behind with VR yet (walking into a chair or table is an unpleasant experience), but what we've got right now is absolutely astonishing.

## The other side of the fence: A Manjaro review

(Above: One of the default Manjaro wallpapers.)

Sorry for the delay! I've had rather a lot to do recently - including setting up the machine I'm using to write this blog post.

For a while now, I've been running Ubuntu on my main laptop. After making the switch from Windows 7, I haven't looked back. Recently though, a friend of mine suggested I check out Manjaro - another distribution of Linux, based on Arch Linux. After setting it up on a secondary machine and playing around with it, I rather like it, actually - and I've decided to write a post about my experiences coming from Ubuntu.

Like most things, I've got multiple different reasons for playing around with Manjaro - not least of which is to experience a different ecosystem and a different way of doing things, namely the Arch Linux ecosystem. To that end, I've selected the OpenRC init system - since I've got experience with systemd already, I feel it's essential to gain experience with other technologies.

With my preferences selected, I fired up manjaro-architect (available on the Manjaro website, which is linked above) and began the installation. I quickly found that the installation was not a simple process - requiring several reboots to get the options just right. In particular, the partitioning tools available are somewhat limited - such that I had to boot into a live Ubuntu environment to sort the partitions out and get a dual-boot setup working correctly.
On the other side, the installer allows the configuration of so many more options, like the mount options of the partitions, the kernel to use and its associated modules, the init system, and the desktop environment you want to use (I've picked XFCE). During the install process I learnt about a bunch of different things that I had no idea about before.

After installation, I then started on the long task of configuring it to my liking. I'm still working on that, but I'm constantly amazed at the level of flexibility it offers. Nearly everything can be customised - including all the title bar graphics, and the ordering and position of everything on the task bar (called a panel in XFCE).

I've found OpenRC an interesting learning experience too. It's very similar to upstart - another init system I used before Ubuntu switched to systemd. As a result, it's so much simpler to get my head around. It feels a lot more.... transparent than systemd, which is a good thing, I think. I do miss a few of the features that systemd offers, however. In time, though, I'm sure that I'll find alternative ways of doing things - different projects do have different ways of thinking, after all!

The concept of the AUR (the Arch User Repository) is possibly one of my favourite things out of all the things I've encountered so far. It's a community-driven archive of packages, but instead of containing the package binaries themselves, each package contains instructions to fetch, build, and install said package. This approach requires much less maintenance, I suspect, and makes it much easier to stay up to date with things. The install process for a package from the AUR is a little complex, sure, but so much easier and more automated than doing it all by hand. It's like taking the benefits of downloading an installer manually from a program's website, like you have to on Windows, and combining them with the ease of use and automation that comes with package managers like apt (Debian-based distributions) and pacman / yaourt (Arch Linux-based distributions).

In short, Manjaro is a breath of fresh air, and very different to what I've tried before. While it's certainly not for the Linux beginner (try Ubuntu or Linux Mint if you're a beginner!) - especially the installer - I think it fulfils a different purpose for me, at least: a platform from which to explore the Arch Linux ecosystem in relative comfort, and dive deeper into the way that all the different parts of a Linux system interact with each other.
2019-08-23 01:38:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1958264410495758, "perplexity": 1329.0748464784488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317688.48/warc/CC-MAIN-20190822235908-20190823021908-00217.warc.gz"}
https://mukeshthawani.com/my-terminal
I have been using the terminal for the last few years - first on Linux, now on macOS - but without playing much with colors, themes, or plugins. It has always been simple: the background color black or white, and a similar arrangement for the text color. It's not that I didn't want more features like autocomplete, better fonts, or better colors; I just never gave much attention to updating my terminal with better tools.

I use Git daily, so I always wanted a tool that would show a few details about the current repository - for example, whether I have changes pending to commit, or the number of commits I haven't pushed yet. You can see these details through git status, but I had to type the command every time to get them in the terminal. Now that's not the case.

This is my current terminal view :rocket:

With Solarized-Dark color scheme.

With Tango-Dark color scheme.

This is some information related to my current terminal setup:
2019-03-19 15:07:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18179303407669067, "perplexity": 1398.8517094284005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201996.61/warc/CC-MAIN-20190319143502-20190319165502-00108.warc.gz"}
https://kaesve.nl/projects/mold/summary.html
# Simulating slime mold with WebGL

A short while ago I attended the Revision demo party, a giant LAN party for graphics programmers and other creative nerds. These events are a great source of inspiration and motivation for me, and I always get a lot of energy to work on something new. This time I decided I wanted to write a 'slime mold' simulation in WebGL, based on Sage Jenson's physarum project. Their work is based on a paper that models real organisms, but I was mostly interested in the beautiful visual qualities of their results. The animations they tweet consistently take my breath away. Besides, this project also seemed like the perfect challenge to push my knowledge of using WebGL for simulations.

This page summarises how I approached this project, some of the problems I encountered, and how I solved them. For this article, I'll assume that you have a background in WebGL or OpenGL, and ideally in using it for simulations as well. However, if you don't, I plan to write a more in-depth article in the future that explains these fundamentals - so keep an eye out (and bug me on twitter to actually do it).

Before I explain my solution, I recommend looking at Sage's project page, which includes an excellent diagram explaining the algorithm on a conceptual level. In short: we simulate $$N$$ agents, each having an $$x$$ and $$y$$ coordinate and a $$heading$$. We also simulate a 'trail map' with a grid of $$m \times m$$ cells, each containing a value that represents the density of the agent trail at that location. The grid spans the area in which the agents move, and we simulate diffusion and decay of the trail left by the agents on this grid.

The simulation algorithm can be summarised as follows.

For each agent:

1. Use the trail map to determine if the agent should turn left or right, or go straight ahead
2. Update the heading accordingly
3. Update the coordinates, moving the agent in its heading direction at a fixed speed
4. Increase the value in the trail map at the new location of the agent

For each grid cell:

1. Set its value to a weighted average between it and its neighbors, to disperse the trail density
2. Multiply by the decay constant

## Implementation summary

For my WebGL implementation, I encode the state of the simulation in 4 textures: x coordinates, y coordinates, headings, and densities. I create two of each, to ping-pong between. I chose to make all the textures the same size. This means that the number of agents is directly tied to the trail map size, and that on average there's one agent per grid cell. The upside of this constraint is that I can do the diffusion, decay, and moving of the agents in the same step.

Most of the code for this project is WebGL configuration and setup: creating the state textures and initialising them with random data, the programs, and a frame buffer. I also create two array buffers:

- A buffer with $$3$$ points, describing a screen-covering triangle
- A buffer with $$N$$ points, encoding the UVs of each agent

I split the code up into three WebGL programs:

- Update the agents and diffuse and decay the trail map
- Deposit a trail
- Render to the screen

I gratefully use Javascript's fairly new template string syntax to include the shader code directly in the rest of my code, and to reuse snippets between shaders. The render program is straightforward, just displaying the trail map. I outline how I use the other two programs below.

### Program 1: Update agents and trail map

1. Bind the offscreen frame buffer
## Implementation summary

For my WebGL implementation, I encode the state of the simulation in 4 textures: x coordinates, y coordinates, headings and densities. I create two of each, to ping-pong between. I chose to make all the textures the same size. This means that the number of agents is directly tied to the trail map size, and that on average there's one agent per grid cell. The upside of this constraint is that I can do the diffusion, the decay and the moving of the agents in the same step.

Most of the code for this project is WebGL configuration and setup: creating the state textures and initialising them with random data, compiling the programs, and creating a frame buffer. I also create two array buffers:

- A buffer with $3$ points, describing a screen-covering triangle
- A buffer with $N$ points, encoding the UVs of each agent

I split the code up into three WebGL programs:

- Updating the agents, and diffusing and decaying the trail map
- Depositing a trail
- Rendering to the screen

I gratefully use JavaScript's fairly new template-string syntax to include the shader code directly in the rest of my code, and to reuse snippets between shaders. The render program is straightforward, just displaying the trail map. I outline how I use the other two programs below.

### Program 1: Update agents and trail map

1. Bind the offscreen frame buffer
2. Bind the ping textures to texture units 0-3, to read from
3. Bind the pong textures to color attachments 0-3, to write to
4. Set the blend function to overwrite
5. Draw the screen-covering triangle. This results in running a fragment shader for each pixel in the frame buffer. For each pixel:
   - Read the agent's position and heading from the textures at gl_FragCoord
   - Use these to read from the density texture
   - Update and output the heading and position
   - Read the density at gl_FragCoord, and the densities in the fragments around it
   - Set the output to a weighted average of these densities, multiplied by the decay rate

This fragment shader (minus some constants and setup) contains most of the simulation logic and looks like this:

```glsl
uniform sampler2D xTex;
uniform sampler2D yTex;
uniform sampler2D headingTex; // implied by the setup above; omitted in the original listing
uniform sampler2D densityTex;

layout(location = 0) out vec4 xBuf;
layout(location = 1) out vec4 yBuf;
layout(location = 2) out vec4 headingBuf;
layout(location = 3) out vec4 densityBuf;

void main() {
  // Convert from fragment coordinates to texture coordinates
  vec2 uv = gl_FragCoord.xy / SIM_DIM;

  // Read this agent's state
  float x = unpack(texture(xTex, uv));
  float y = unpack(texture(yTex, uv));
  float heading = unpack(texture(headingTex, uv)); // implied; omitted in the original listing
  vec2 p = vec2(x, y);

  // Take samples in the trail map near the agent's location
  // (a2v converts an angle into a unit vector; it is part of the elided setup)
  vec2 leftSampleP = p + SENSOR_DISTANCE*a2v(heading - SENSOR_ANGLE);
  float leftSample = unpack(texture(densityTex, leftSampleP));
  vec2 forwardSampleP = p + SENSOR_DISTANCE*a2v(heading);
  float forwardSample = unpack(texture(densityTex, forwardSampleP));
  vec2 rightSampleP = p + SENSOR_DISTANCE*a2v(heading + SENSOR_ANGLE);
  float rightSample = unpack(texture(densityTex, rightSampleP));

  // Update the heading based on the samples.
  // The branch bodies were elided in the original listing; the comments
  // show the intended behaviour.
  if (leftSample < LOW_DENSITY && forwardSample < HIGH_DENSITY && HIGH_DENSITY <= rightSample) {
    // only the right sensor sees a dense trail: turn right
  } else if (rightSample < LOW_DENSITY && forwardSample < HIGH_DENSITY && HIGH_DENSITY <= leftSample) {
    // only the left sensor sees a dense trail: turn left
  } else if (HIGH_DENSITY <= leftSample && forwardSample < HIGH_DENSITY && HIGH_DENSITY <= rightSample) {
    // dense trails on both sides: turn to a side
    // TODO: Use a more random value
  }

  // Update the location by moving the agent along its heading
  // (also elided in the original listing)

  // Calculate the diffused density at the current fragment
  vec3 spacing = vec3(1.0/SIM_DIM, 0.0);
  float diffuse = (
    unpack(texture(densityTex, uv)) +
    (
      // neighbour weights sum to one: 4/5 + 4/20 = (16 + 4)/20 = 1
      0.2*(
        unpack(texture(densityTex, uv + spacing.zy)) + // north
        unpack(texture(densityTex, uv + spacing.xz)) + // east
        unpack(texture(densityTex, uv - spacing.zy)) + // south
        unpack(texture(densityTex, uv - spacing.xz))   // west
      ) +
      0.05*(
        unpack(texture(densityTex, uv + spacing.zy + spacing.xz)) + // ne
        unpack(texture(densityTex, uv - spacing.zy + spacing.xz)) + // se
        unpack(texture(densityTex, uv - spacing.zy - spacing.xz)) + // sw
        unpack(texture(densityTex, uv + spacing.zy - spacing.xz))   // nw
      )
    )
  )/2.0;

  // Write to textures
  xBuf = pack(p.x);
  yBuf = pack(p.y);
  headingBuf = pack(heading);
  densityBuf = pack(DECAY_RATE*diffuse);
}
```

### Program 2: Deposit

1. Bind the offscreen frame buffer
2. Bind the pong $x$ and $y$ textures to texture units 0 and 1
3. Bind the pong density texture to color attachment 3
4. Set the blend function to GL.FUNC_ADD
5. Draw $N$ GL.POINTs with the agent UV buffer. In the vertex shader, which runs for each vertex in the buffer:
   - Use the UV to read the agent's position from the textures
   - Translate this position to a screen-space coordinate
   - Set gl_PointSize = 1.0
6. The fragment shader outputs a constant value based on the deposit rate, which is added to the current density value at the agent's position
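For concreteness, the host-side half of this pass might look roughly like the sketch below. This is my illustration, not code from the project; names such as `offscreenFbo`, `depositProgram`, `agentUvBuffer` and `AGENT_COUNT` are invented placeholders.

```javascript
// Sketch of driving the deposit pass from JavaScript (illustrative names).
gl.bindFramebuffer(gl.FRAMEBUFFER, offscreenFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT3,
                        gl.TEXTURE_2D, pongDensityTexture, 0);
gl.useProgram(depositProgram);

// Additive blending lets the GPU accumulate deposits without the shader
// ever reading the density texture it is writing to.
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);
gl.blendFunc(gl.ONE, gl.ONE);

gl.bindBuffer(gl.ARRAY_BUFFER, agentUvBuffer);
gl.enableVertexAttribArray(uvAttributeLocation);
gl.vertexAttribPointer(uvAttributeLocation, 2, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.POINTS, 0, AGENT_COUNT); // one GL.POINT per agent
gl.disable(gl.BLEND);
```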
Aside from working through the usual WebGL struggles, I wanted to highlight two of the problems that took me the most effort to solve.

## Encoding and decoding the state

When you’re using WebGL for anything other than rendering triangle meshes, the first problem you want to tackle is how you encode program state. In this simulation, the state consists of the agent positions and headings, and the trail densities. Since we want our WebGL program to update these values, we need to encode the state into textures.

The first approach that comes to mind is using a single texture, where every pixel stores one agent and one trail density value. This seems to work out great, because a pixel (normally) consists of four channels: rgba. We could store the three properties of the agent and the density each in a channel and be done! Unfortunately, there’s a small snag: these channels are stored as 8-bit numbers. This means that each channel can take on only 256 different values, which is maybe enough to encode the trail density and heading, but way too coarse to encode the position.

One solution that I tried to make work is using different texture formats. The documentation for texImage2D shows you can choose the data representation and format for textures. The option I wanted to use is RGBA32F, which would store each channel as a 32-bit float. Unfortunately, I was not able to get this to work. I’m not sure if this is a limitation of my laptop, if it is not possible to write to floating-point textures using WebGL2, or if I am just missing steps or misconfiguring something. (In WebGL2, rendering to floating-point textures requires the EXT_color_buffer_float extension to be enabled.) After trying for an hour or two, I gave up.

In the end, I chose to use separate textures instead of channels to store the state: one RGBA8 texture per property. We can use the same layout for each texture, e.g. the same pixel in each texture encodes a property of one agent/cell. With WebGL2 (or using an extension in WebGL1) it is possible to write to several buffers or textures from your fragment shader, so this solution can still update all the properties in the same program.

There is still one more hurdle to clear though. One pixel in an RGBA8 texture can store 4*8 = 32 bits of data, which is more than enough precision. However, there is no convenient way to read a pixel as a single value. Instead, when we read a property from a texture in our fragment shader, we get a vec4 with a value between 0 and 1 for each channel; WebGL normalizes the channel values for us by dividing them by 255. To work around this, we can ‘unpack’ the vector after reading it, and then ‘pack’ it again before we write it out. I happened to find exactly these operations while I was digging through the Three.js shader source code, so that’s what I used:

```glsl
// Taken from https://github.com/mrdoob/three.js/blob/2ad018c7c7ad103c9da9d7f22b3154537384ab26/src/renderers/shaders/ShaderChunk/packing.glsl.js
const float PackUpscale = 256. / 255.; // fraction -> 0..1 (including 1)
const float UnpackDownscale = 255. / 256.; // 0..1 -> fraction (excluding 1)
const vec3 PackFactors = vec3( 256. * 256. * 256., 256. * 256., 256. );
const vec4 UnpackFactors = UnpackDownscale / vec4( PackFactors, 1. );
const float ShiftRight8 = 1. / 256.;

vec4 pack( const in float v ) {
  vec4 r = vec4( fract( v * PackFactors ), v );
  r.yzw -= r.xyz * ShiftRight8; // tidy overflow
  return r * PackUpscale;
}

float unpack( const in vec4 v ) {
  return dot( v, UnpackFactors );
}
```
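The same fixed-point scheme can be mirrored on the CPU, which is handy when filling the state textures with initial data. Below is a direct JavaScript transcription of the two functions above (my own cross-check, not code from the project). Note that the float-space round trip is exact; the real precision loss only happens when the four components are quantised to 8 bits in the texture.

```javascript
// JavaScript transcription of the GLSL pack()/unpack() pair above.
const PackUpscale = 256 / 255;
const UnpackDownscale = 255 / 256;
const PackFactors = [256 * 256 * 256, 256 * 256, 256];
const ShiftRight8 = 1 / 256;

const fract = (v) => v - Math.floor(v);

function pack(v) {
  // r = vec4(fract(v * PackFactors), v)
  const x = fract(v * PackFactors[0]);
  const y = fract(v * PackFactors[1]);
  const z = fract(v * PackFactors[2]);
  const w = v;
  // r.yzw -= r.xyz * ShiftRight8 (a vector op: it uses the original xyz values)
  return [x, y - x * ShiftRight8, z - y * ShiftRight8, w - z * ShiftRight8]
    .map((c) => c * PackUpscale);
}

function unpack(r) {
  // dot(r, UnpackDownscale / vec4(PackFactors, 1))
  return r[0] * (UnpackDownscale / PackFactors[0])
       + r[1] * (UnpackDownscale / PackFactors[1])
       + r[2] * (UnpackDownscale / PackFactors[2])
       + r[3] * UnpackDownscale;
}

console.log(unpack(pack(0.123456789))); // prints approximately 0.123456789
```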
## Deposit

In my implementation, I split updating the agents into two programs: updating the location and heading, and depositing onto the trail map. If you were writing this in normal JavaScript, this would seem like a weird and somewhat arbitrary separation, but it is necessary to implement the algorithm in WebGL.

For the first part of the update, I run a fragment shader for all $N$ pixels in the state textures. Say we are running the fragment shader for some pixel at $p_{pixel}$. We get the agent position $p_{agent}$, and can read from the trail map using that position. The fragment shader can then output the updated values for this agent, as outlined before. The problem is, there is no way to write to a different pixel from the fragment shader. This means that even though we can read the density at $p_{agent}$, we can’t increase it; we need a different solution.

The deposit program uses a different render strategy. Instead of ‘drawing’ a giant screen-covering triangle and putting all the logic in the fragment shader, I draw a GL.POINT for each agent, passing the vertex shader the agent’s UV coordinates as an attribute (it might have been better to generate these on the fly, in the vertex shader itself). The vertex shader uses the UV to output the agent’s position, together with gl_PointSize = 1.0. This results in the fragment shader running for exactly the one pixel where we want to increase the trail value.

I used another clever trick at this point, but while writing this article I realized it creates problems, and that I probably didn’t need it in the first place. I was trying to avoid reading from and writing to the same trail map texture. This step runs after the diffusion and decay, so I need to read from the resulting texture. The way I set up the code, this was the ‘output’ texture, and I wanted to also write the result of the deposit step to it. As a workaround, I came up with the idea of using blend modes instead of reading the current value and increasing it in the fragment shader. By setting the blend mode to GL.FUNC_ADD, the fragment shader could just output a constant value, which WebGL would then add to the current value for us.

This sounds fine, and even seems to work in practice, but because of the way I encode the values in the textures, the blend function will produce wrong results when a channel overflows. I could have solved this problem with better use of my ping-pong textures, but a similar issue would still persist when an agent isn’t exactly at a pixel coordinate. WebGL will bilinearly interpolate values when you read or write between fragments. This interpolation is per channel, which means it will produce wrong results with our encoding scheme. Getting RGBA32F textures to work should make these problems disappear.
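A standard mitigation here (my note, not from the article) is to turn off filtering for the packed state textures entirely, so that samples always land on exact texel values:

```javascript
// Packed RGBA8 state must be read back bit-exactly; per-channel bilinear
// filtering would blend unrelated bytes into nonsense values.
gl.bindTexture(gl.TEXTURE_2D, stateTexture); // one of the x/y/heading/density textures
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
```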
## Results

In the end, I was able to write a (mostly) working simulation during my time at Revision, almost completely from scratch. I’d like to thank Sage Jenson again for their enlightening diagram, which made it possible to implement my own version so quickly. My laptop can run the simulation at ~30 fps while simulating and rendering 4M particles.

There is definitely still room for improvement: fixing the bugs I mentioned, porting it to WebGL1 for wider support, decoupling the trail map size from the number of agents and optimizing performance, to name a few options. Besides these technical improvements, I’m also inspired to modify the simulation rules to create different behaviors. I already tried controlling the simulation with bitmaps, leading to some cool results. Some other ideas I definitely have to try out are using it in combination with reaction-diffusion or fluid simulations, and adding user interaction. Besides Sage Jenson, @ojelibalon is also working on these kinds of animations and is definitely worth checking out.

If you enjoyed reading this, or have questions, suggestions or improvements, please let me know on twitter or through email.
2020-12-01 18:51:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2339347004890442, "perplexity": 2284.120223852162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681209.60/warc/CC-MAIN-20201201170219-20201201200219-00169.warc.gz"}
http://clay6.com/qa/14436/the-value-of-tan-bigg-cos-big-large-frac-big-tan-big-large-frac-big-bigg-is
# The value of $\tan\left[\cos^{-1}\left(\dfrac{4}{5}\right)+\tan^{-1}\left(\dfrac{2}{3}\right)\right]$ is

(a) $\dfrac{6}{17}$ $\qquad$ (b) $\dfrac{7}{16}$ $\qquad$ (c) $\dfrac{16}{7}$ $\qquad$ (d) None of these

$\tan\left[\cos^{-1}\left(\dfrac{4}{5}\right)+\tan^{-1}\left(\dfrac{2}{3}\right)\right]$

$\Rightarrow \tan\left[\tan^{-1}\dfrac{3}{4}+\tan^{-1}\dfrac{2}{3}\right]$, since $\cos^{-1}\dfrac{4}{5}=\tan^{-1}\dfrac{3}{4}$ (from a 3-4-5 right triangle)

$\Rightarrow \tan\left[\tan^{-1}\left(\dfrac{3/4+2/3}{1-\frac{3}{4}\times\frac{2}{3}}\right)\right]$

$\Rightarrow \dfrac{17}{12}\times \dfrac{12}{6}$

$\Rightarrow \dfrac{17}{6}$

Hence (d) is the correct answer.
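For reference, the combination step above uses the standard inverse-tangent addition identity (an addition for clarity, not part of the original solution), which applies here since $\frac{3}{4}\cdot\frac{2}{3}=\frac{1}{2}<1$:

$$\tan^{-1}a+\tan^{-1}b=\tan^{-1}\left(\frac{a+b}{1-ab}\right),\qquad ab<1$$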
2017-04-28 12:17:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999767541885376, "perplexity": 10141.637735368697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122955.76/warc/CC-MAIN-20170423031202-00522-ip-10-145-167-34.ec2.internal.warc.gz"}
https://help.antisoftware.club/support/discussions/topics/62000183128
## LaTeX Support

Hello! I'll be posting quite a bit of math stuff, so LaTeX support in posts and comments would be very helpful. For instance, GitHub supports embedded LaTeX in Markdown. For rendering LaTeX, GitHub uses MathJax, but I've heard good things about KaTeX as well.
2023-02-03 16:28:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058380722999573, "perplexity": 2567.2002705562845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00393.warc.gz"}
http://openstudy.com/updates/51dec665e4b076f7da41456c
## anonymous 3 years ago

A system that consists of tributaries, watersheds and divides is called

1. thomaster: Welcome to OpenStudy! I think it's called a drainage system.
2. anonymous: That might be a drainage system; the pattern can be either trellis, dendritic, radial or parallel drainage
3. anonymous: I forgot to add "system" at the end.
2016-08-24 18:04:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6264493465423584, "perplexity": 12757.244286381785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292607.17/warc/CC-MAIN-20160823195812-00009-ip-10-153-172-175.ec2.internal.warc.gz"}
https://rpg.stackexchange.com/questions/53328/game-that-teaches-rpgs-effectively-by-itself
# Game that Teaches RPGs effectively “by itself” [closed]

I am looking to give someone who has never played an RPG a gift of a game and some dice, but this isn't someone who I can necessarily just RUN a game for, so the book needs to do as good a job as possible of teaching the "fundamentals" out of the box. I'm expecting the individual in question to scrounge up a couple of friends to play with, so the game doesn't need to specifically cater to solo or extremely small group play, though working with 3-ish people would be a plus; I think most RPGs probably manage that okay.

The individual in question is a big Marvel fan, so the original thought was Marvel Heroic Roleplay, but upon examination, it doesn't seem to be very good for beginners, and it also appears to assume a very large number of assorted dice, which is not a plus in this situation either. ("Here's this game! And the 50 dice you need to play it! Have fun!") So while superhero games are a plus, the most important thing is that the book teach effectively.

As this is a game-recommendation question, please adhere to the FAQ, the rules for subjective questions as outlined in Good Subjective, Bad Subjective and our rules for game recommendations. All responses must cite actual experience or reference others' experiences!

## closed as off-topic by doppelgreener♦, user17995, Purple Monkey, SevenSidedDie Apr 4 '16 at 3:43

This question appears to be off-topic. The users who voted to close gave this specific reason: If this question can be reworded to fit the rules in the help center, please edit the question.

## locked by SevenSidedDie Apr 4 '16 at 4:06

This question exists because it has historical significance, but it is not considered a good, on-topic question for this site, so please do not use it as evidence that you can ask similar questions here. This question and its answers are frozen and cannot be changed. More info: help center.

- Edited to add, but I expect them to scrounge up some friends. – Airk Dec 18 '14 at 22:08
- Stack's requirement that game-rec answers come from positions of experience rather than speculation means only a select few citizens will have useful answers (as many gamers are brought into RPGs under the intimate tutelage of their peers), but it's an awesome, challenging question and I'm excited to see the answers it attracts. (If it's lacking good answers in a couple days, somebody ping me to put a bounty on this to get more attention on it.) – BESW Dec 19 '14 at 0:35
- Stack doesn't always give fast turn-around on questions like this one, so if you need recommendations fast but sloppy to get something before Christmas, the Role-playing Games Chat could probably help puzzle something out (though it likely won't be as solid or vetted an answer as the main site can provide). – BESW Dec 19 '14 at 0:43
- It's not super urgent; I can either fake it for Christmas, or send this late, or whathaveyou. – Airk Dec 19 '14 at 3:20
- @Airk - How old school are you willing to go? Tunnels & Trolls is a simple (and cheap) fantasy RPG originally published in 1975, with many later editions, which offered a number of solo adventures - a great way to learn by doing. – RobertF Dec 19 '14 at 15:05

# Mutants & Masterminds

Suggesting this for a few reasons, which I'll sum up below. Mutants & Masterminds is a very open game focused on modern-day superheroes fighting supervillains. I have some experience running the game in both its traditional superhero style and a homebrew fantasy mode.
It's one of my favorite games to run, both with highly creative and with very new players.

### Handling small groups

While M&M suggests some kind of team to play with, it handles pretty well even with only 1 or 2 players. The mechanics are very forgiving of bad rolls, high-power players vs. low-power mooks work quite well (which means even a single hero can take on a squad of goons), and the setting itself generally says that heroes don't die, they are merely defeated. So even if everything goes to hell, the players will just wake up in the villain's deathtrap, ready for a daring escape, rather than dead.

### Setting

The setting is modern-day superheroes, like the Justice League and such. Each player runs a superhero, and they can be built any way the player wants. The game comes with a whole bunch of example characters which are (very poorly) disguised well-known heroes, so if the player wants to "be Superman" he'll have a ready-made character in the book.

This means it's a setting everyone knows well, too. You can play it in your own city, which makes it easy for the DM. While in normal games questions like "Is there a big library in this city?" require lots of making stuff up, here both players and DM will have a pretty clear idea of what is and is not available. (The first time I played, I was in college and new to the city, and actually learned about a few new buildings that I wasn't aware the city had. But it also saved me from having to make up any new locations, and poor descriptions don't matter when everyone in the group has been inside the building a scene takes place in. It lowers the bar by a lot for the DM.)

### For the DM

This game has very low bookkeeping, which makes it easy to run for DMs. Making monsters is easy, as they just follow the rules for players, and there's very little bookkeeping (the game doesn't use hit points, and most powers have unlimited uses or very real roleplaying limitations on their use). It comes with an example adventure, you can use any of the listed player characters as villains, and there's a chapter on using normals in the game. Plus, because you're using regular people half the time, things don't even need stats. You can just assume that the hero can beat up the mook unless there are like a dozen of them, and they're armed.

In my experience, if the DM can come up with a fun story (or sometimes even a fun starting point for a story), the rest will follow naturally, with very little need for heavy prep work or huge amounts of mechanical material. The last game I ran had little more prep than "The characters will be inside a maglev train, which is also carrying something valuable in a safe, and some guys will try to steal that thing." From there on out, I decided one of the villains would have laser-beam powers to cut open the safe. I made a grand total of 4 characters to fill out the team of bad guys. With these 4 character sheets, I ran a game session spanning a full day. The story went completely differently from my expectations when the players startled laser-beam-guy while he was cutting the safe and he swung his arms around, accidentally cutting the train in half and stranding it in the middle of an uninhabited area... but it worked out really well.

Due to the superpowered nature of the game, most of the things that require lots of little rules in a grittier or more realistic game can be easily hand-waved here. Also, the game lets you throw together new mechanical creatures very easily.
One of my games required there to be a Troll for the players to fight in the middle of the game. Making one really was no more complicated than "Hmm, it should be big and strong, have claws, and regenerate damage" and then quickly assigning it some appropriate ranks in the Size, Strike, Super-Strength and Regeneration powers. You can set these up in 5 minutes and use them for a scene, and if they survive and have to return later you can always flesh them out more afterwards. Since there's no spell selection, complicated balance rules or attributes that affect dozens of things in the creature, there's little chance of accidentally ruining something.

### For the players

One of the main things I dislike about certain systems is that the first time you introduce new players and they ask if they can play X, you must tell them "no, because that's too powerful/difficult/I don't know how". Mutants & Masterminds allows EVERYTHING from the start, with reasonable balance still in the game. All the powers scale exponentially, but you can pick whatever you want. Even things like manipulating time itself are available (albeit with a "please ask your DM because this can ruin stories" notice).

Here are some of the characters I've seen played, to give you an idea of the kind of things that are possible. Most of these were played (and played well) by first-time players:

- A modern blind samurai
- A guy who could turn into a lightning storm and teleport through electricity masts
- A shapeshifting druid who would turn into crazy beasts
- An inventor who would build crazy weapons in the middle of a fight
- A juggernaut who could not be moved unless he wanted to be
- A dwarf who controlled the earth and made earthquakes and the like
- A regular guy who was so lucky he could operate inside a super team (he had no real superpowers, just the ability to change die rolls to a 20)

All of these were quite balanced; some were crazy superheroes, some were regular people. Due to the power-point mechanic, all of them could work very well together. None of these had any complicated mechanics; most characters only have about 5 active powers to worry about. Even some of my players who were turned off by the complexity of games like Dungeons & Dragons had no problem running these.

### Difficult situations

Another big problem with running a game as a new DM is dealing with players who come up with difficult plans. M&M has a built-in mechanic for when players do something that the DM does not know how to deal with (i.e. plans that would circumvent the entire rest of the story): he can throw up a complication or even just say "we can't do that because it would break things" and award the players with special story points that they can later use to do cool stuff, to take away the sting. This allows a bit of railroading around really complicated or story-breaking things without the players feeling bitter about it. The points can also be awarded when players do really cool things, to reward coming up with over-the-top ideas.

As an example, during one of my first games as a player, the storyteller had two villains in a scene: one an active threat and one merely observing, undetected. Unfortunately for him, I had a character who had radar vision and superpowered tracking skills, and I would have had no problem detecting and following the second, more powerful villain.
Upon stating I wanted to follow the sneaky guy while the rest of the team took off after the main troublemaker, he realised that I would either be able to circumvent half the story (by easily tracking what should be a hidden villain) or get myself captured/killed (by confronting someone we wouldn't be able to deal with even as a complete team).

Instead, he told me the enemy villain fled down a drain and managed to "somehow evade me in the sewers". In most games this would be a real let-down, but because you get a special token which is only awarded for being awesome (normally by doing amazing in-character things, but also when you outwit the storyteller), it doesn't feel like that. It just makes you feel cool and lets you jump back into the rest of the story. It's the combination of mechanics, setting and consensus around the table that you're running a superhero game that makes it work.

### Level progression

Level progression is simply spending more points, just like character creation. This means no picking classes or having to match certain things together; you just upgrade whatever powers you find coolest or select new ones as you will. Plus, you get points at every session, so you CAN upgrade something at every session, or wait a bit to get enough points for something really cool. The game's scope runs from ordinary people (Power Level 1-3 or so) to creatures that make even Superman poop his pants (at the max Power Level 20 you can do just about anything you want).

The crazy weapon builder I mentioned above, for example, was originally supposed to play like a guy who carried around a bag of infinite weapons. But when the story began, the player realised it was more fun if he made them himself. From then on out, over the course of 2 or 3 sessions, he invested all his points into powers that would let him build new things in the middle of a fight. The switch from weapons-guy to crazy inventor came very naturally that way, and let him completely re-envision his character. That's something not a lot of systems seem to allow.

### Extra bonus

The game uses only one type of die (of the 20-sided variety) and all of this goodness comes in a single book. You can optionally buy a settings book for more information, samples, etc., but I've never used any. Plus, you can get the open rules for the game for free here: http://www.d20herosrd.com/

- This is a generally good answer, and I agree that M&M is probably a good way to go, but you don't ever reference actual experience with learning to play with this system, which is what the querent is asking about. Game-rec requires a reference to actual experience, and you don't mention how this system works for people who are learning on their own. – DuckTapeAl Dec 19 '14 at 9:19
- Do you have a good way to format such experience? I have learned to play this game from the book by myself (though not as my first) and have taught a number of new players, but other than listing out all the ways in which it's great I'm not sure how to reference actual experience. – Erik Dec 19 '14 at 9:30
- The RPG.SE guidelines for game recommendations are linked just under the question itself, as is the more general Stack Exchange guidance on providing good subjective answers. – BESW Dec 19 '14 at 9:32
- To clarify: all you have to add is why the game's features were great for your experience teaching yourself about RPGs with this system. That takes it from "These features are great!" to "I know these features are great because they worked for me."
– BESW Dec 19 '14 at 10:12
- Alright, how's this? I added in examples from the first times when I DMed and played the game. – Erik Dec 19 '14 at 11:12

The only superhero game I know of that effectively teaches beginners how to roleplay is out of print. It is the basic version of TSR's Marvel Super Heroes. While I am a fan of Hero Games' Champions and Mayfair's DC Heroes RPG, TSR's Basic MSH RPG is the gold standard of accessibility for novices.

While there are websites with old Marvel PDFs that are tolerated by Hasbro and Wizards of the Coast, your best bets are the following:

- Ebay
- Amazon
- Noble Knight Games

It looks like you will pay between $18 and $35 as of December 2014 for a vintage boxed set. Note these are links to the searches I used, not the products themselves.

There is a modern Marvel-less incarnation of the system behind the MSH RPG called the 4C System. It takes its name from the four colors used in the action chart of the original. However, I strongly recommend trying to get hold of a vintage MSH RPG boxed set. The writing is just infused with the love of comics, superheroes and all things Marvel, something that will greatly appeal to a fan of Marvel Super Heroes.

The two books of the Basic Set are strongly oriented toward people who know nothing about roleplaying games. The set is almost laid out like a comic, with copious illustrations tying every game mechanic back to something in the Marvel Universe. For example, the section on abilities doesn't just have the usual table showing the different scores and their effects; next to the column naming the Marvel hero that best represents each score, it has a head shot of that hero as well. Combat mechanics are not only described, but a mini-comic illustrates each combat round in the example. The whole basic book is packed with stuff like this.

- Please remember this is a game-rec question with its attendant guidelines. – BESW Dec 19 '14 at 13:30
- I read and played MSH RPGs. – RS Conley Dec 19 '14 at 13:32
- Then it'd be great if you use that experience to explicitly inform your answer. Talk about why and how teaching yourself the system was such a good experience that you're recommending it for others. – BESW Dec 19 '14 at 21:36

## Fiasco

The rules system and book are both fairly compact for an RPG. The book explains everything you need to know in order to play, in friendly terms. Because it's centered around one-session games, it is more approachable for a first-time player who may not yet feel ready to commit to a campaign. The only preparation required is to read the book and have it handy. The components needed to play are a bunch of d6s (a less intimidating die for the non-RPG player), index cards, and a pen or two.

I've introduced this game to many veteran RPGers, and to people completely new to the hobby. Everyone has loved it. It has a fast, fun, lightweight feel.

Fiasco is setting-agnostic. It relies upon rolling on a group of tables that guide you in setting up the story and characters. These tables are called Playsets, and there are dozens available as free PDF downloads from the publisher's website and elsewhere on the internet. The Playset will provide a setting, and help shape the tone and characters.
A big list of official and fan-created playsets: http://www.bullypulpitgames.com/wiki/index.php?title=Fiasco_Playsets

A superhero playset for your Marvel-loving fan: http://www.bullypulpitgames.com/wiki/index.php?title=Heroes_Of_Pinnacle_City

If you give this as a gift, I'd recommend printing out two or three playsets your friend would enjoy and bundling them with the book. The core Fiasco book does come with a couple inside, but more tailored choices will help them see how flexible their new toy is.

- Have you had anyone teach themselves on this system? – BESW Dec 19 '14 at 21:36
2019-06-24 18:01:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21541061997413635, "perplexity": 1634.1963470235883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00002.warc.gz"}
https://byjus.com/maths/important-questions-class-11-maths-chapter-13-limits-derivatives/
# Important Questions for Class 11 Maths Chapter 13 - Limits and Derivatives

Get the important questions for Class 11 Maths Chapter 13 – Limits and Derivatives here. The important questions given here will help you prepare for the annual examination. Go through the problems and clear your doubts while solving them. Students can get an idea about the type of questions asked in the examination after solving these questions. The questions are based on the NCERT textbook and follow the syllabus of the CBSE board. The problems include all types of questions, such as 1-mark, 2-mark, 4-mark and 6-mark questions. Practice the problems provided here to achieve a good score in the Class 11 Maths final examination. Also, get the important questions for all the chapters of Maths here.

Class 11 Maths Chapter 13 – Limits and Derivatives includes the following important concepts:

- Limits
- Derivatives
- Limits of the trigonometric functions
- Algebra of the derivative of a function

## Class 11 Chapter 13 – Limits and Derivatives Important Questions with Solutions

Practice the following important questions on Class 11 Maths Limits and Derivatives; they should help you to solve the problems faster and with accuracy.

Question 1: Find the derivative of the function $x^2\cos x$.

Solution: Let $y = x^2\cos x$. Differentiating both sides with respect to $x$ and using the product rule:

$\frac{dy}{dx} = x^2\,\frac{d}{dx}(\cos x) + \cos x\,\frac{d}{dx}(x^2) = x^2(-\sin x) + \cos x\,(2x)$

Rearranging the terms, we get:

$\frac{dy}{dx} = 2x\cos x - x^2\sin x$

Question 2: Find the positive integer $n$ so that $\lim_{x \to 3}\frac{x^n - 3^n}{x - 3} = 108$.

Solution: Using the standard limit $\lim_{x \to a}\frac{x^n - a^n}{x - a} = n\,a^{n-1}$, we have

$\lim_{x \to 3}\frac{x^n - 3^n}{x - 3} = n\,(3)^{n-1}$, so $n\,(3)^{n-1} = 108 = 4\times 27 = 4\,(3)^{4-1}$.

Comparing the exponents and coefficients on both sides gives $n = 4$. Therefore, the value of the positive integer $n$ is 4.

Question 3: Find the derivative of $f(x) = x^3$ using the first principle.

Solution: By definition, $f'(x) = \lim_{h \to 0}\frac{f(x+h)-f(x)}{h}$. Substituting $f(x) = x^3$:

$f'(x) = \lim_{h \to 0}\frac{(x+h)^3 - x^3}{h} = \lim_{h \to 0}\frac{h^3 + 3xh(x+h)}{h} = \lim_{h \to 0}\left(h^2 + 3x(x+h)\right)$

Setting $h = 0$ gives $f'(x) = 3x^2$. Therefore, the derivative of $f(x) = x^3$ is $3x^2$.

Question 4: Determine the derivative of $\frac{\cos x}{1+\sin x}$.

Solution: Let $y = \frac{\cos x}{1+\sin x}$. By the quotient rule,

$\frac{dy}{dx} = \frac{(1+\sin x)(-\sin x) - (\cos x)(\cos x)}{(1+\sin x)^2} = \frac{-\sin x - \sin^2 x - \cos^2 x}{(1+\sin x)^2}$

Since $\sin^2 x + \cos^2 x = 1$, this becomes

$\frac{dy}{dx} = \frac{-(1+\sin x)}{(1+\sin x)^2} = \frac{-1}{1+\sin x}$

Therefore, the derivative of $\frac{\cos x}{1+\sin x}$ is $\frac{-1}{1+\sin x}$.
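For quick reference (an addition, not part of the original page), the two differentiation rules used in Questions 1 and 4 are the product and quotient rules:

$$(uv)' = u'v + uv', \qquad \left(\frac{u}{v}\right)' = \frac{u'v - uv'}{v^2}$$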
Question 5: $\lim_{x \to 0}\frac{|x|}{x}$ is equal to:

(a) 1 $\quad$ (b) -1 $\quad$ (c) 0 $\quad$ (d) does not exist

Solution: The correct answer is option (d).

Explanation: The limit $x \to 0$ has two one-sided cases.

Case 1: $x \to 0^{+}$. Here $|x| = x$, so $\lim_{x \to 0^{+}}\frac{|x|}{x} = \frac{x}{x} = 1$.

Case 2: $x \to 0^{-}$. Here $|x| = -x$, so $\lim_{x \to 0^{-}}\frac{|x|}{x} = \frac{-x}{x} = -1$.

Since the two one-sided limits differ, the limit does not exist; hence the answer is option (d).

Question 6: Evaluate the derivative of $f(x) = \sin^2 x$ using the Leibnitz product rule.

Solution: Let $y = \sin^2 x = (\sin x)(\sin x)$. By the Leibnitz product rule,

$\frac{dy}{dx} = (\sin x)'(\sin x) + (\sin x)(\sin x)' = \cos x\sin x + \sin x\cos x = 2\sin x\cos x = \sin 2x$

Therefore, the derivative of $\sin^2 x$ is $\sin 2x$.

### Practice Problems for Class 11 Maths Chapter 13 – Limits and Derivatives

Solve the Chapter 13 Limits and Derivatives important problems given below:

1. Evaluate: $\lim_{x \to 0}\frac{\sin^2 2x}{\sin^2 4x}$
2. Differentiate the function with respect to $x$: $(ax^2 + \cot x)(p + q\cos x)$
3. Show that $\lim_{x \to 4}\frac{|x-4|}{x-4}$ does not exist
4. Evaluate the following: $\lim_{y \to 0}\frac{(x+y)\sec(x+y) - x\sec x}{y}$
5. Differentiate $\frac{1}{ax^2+bx+c}$ with respect to $x$.
6. Evaluate the derivative of $99x$ at $x = 100$.
7. Find the derivatives of the following trigonometric functions: (i) $2\tan x - 7\sec x$ (ii) $\sin x\cos x$ (iii) $5\sec x + 4\cos x$
8. Differentiate the function $\cos(x^2+1)$.
9. Differentiate $x^2\sin x + \cos 2x$.
10. Differentiate $(2x-7)^2(3x+5)^3$.

To practice more problems in Class 11 Maths, register with BYJU’S – The Learning App and download the app to learn with ease.
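As a quick worked check of practice problem 1 above (an addition, not from the original page), using $\lim_{\theta \to 0}\frac{\sin\theta}{\theta} = 1$:

$$\lim_{x \to 0}\frac{\sin^2 2x}{\sin^2 4x} = \lim_{x \to 0}\left(\frac{\sin 2x}{2x}\cdot\frac{4x}{\sin 4x}\cdot\frac{2x}{4x}\right)^{2} = \left(1\cdot 1\cdot\frac{1}{2}\right)^{2} = \frac{1}{4}$$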
2022-06-25 04:03:20
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8739186525344849, "perplexity": 5811.057230104807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034170.1/warc/CC-MAIN-20220625034751-20220625064751-00445.warc.gz"}
http://tex.stackexchange.com/questions/126172/alignment-of-table-header-in-column-with-decimal-numbers-using-siunitx
# Alignment of table header in column with decimal numbers using siunitx

I have a table with decimal numbers aligned at the decimal point. In the column header, I have a % sign. In the code snippet given below, the percent sign is aligned to the left of the column, but I would like it either to align with the decimal points or to sit on the right. How can I achieve this?

Current result: (screenshot omitted)

```latex
\documentclass[ngerman]{scrbook}
\usepackage{siunitx}
\begin{document}
\begin{tabular}{l S[table-format=3.2]}
\textbf{fruit} & \textbf{\%} \\
apple & 12,34 \\
banana & ,1 \\
cherry & 1,2345 \\
coconut & 100 \\
\end{tabular}
\end{document}
```

-

You can align to the right by using multicolumn:

```latex
\documentclass[ngerman]{scrbook}
\usepackage{siunitx}
\begin{document}
\begin{tabular}{l S[table-format=3.4]}
\textbf{fruit} & \multicolumn{1}{r}{\textbf{\%}} \\ %
apple & 12,34 \\
banana & ,1 \\
cherry & 1,2345 \\
coconut & 100 \\
\end{tabular}
\end{document}
```

I have changed S[table-format=3.2] to S[table-format=3.4] so as to accommodate the entry for cherry.

-

To center the % symbol in the header row of the second column, you could encase the instruction \textbf{\%} in curly braces, like this: {\textbf{\%}}

Using \multicolumn{1}{c}{\textbf{\%}} works too...

- This would only center the symbol over the whole column width, not align it at the decimal point. – Martin Jul 31 '13 at 9:51
- @Martin - In the present case, i.e., given the OP's data for the various percentages, centering the % symbols works out to being pretty much the same as aligning it directly on the decimal point. :-) – Mico Jul 31 '13 at 10:00
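For completeness, the brace approach from the second answer dropped into the original example looks like this (my assembly; this exact file was not posted in the answer). The braces tell siunitx to treat the cell as text rather than as a number, so it is centered over the column:

```latex
\documentclass[ngerman]{scrbook}
\usepackage{siunitx}
\begin{document}
\begin{tabular}{l S[table-format=3.4]}
\textbf{fruit} & {\textbf{\%}} \\
apple   & 12,34  \\
banana  & ,1     \\
cherry  & 1,2345 \\
coconut & 100    \\
\end{tabular}
\end{document}
```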
2015-01-25 16:33:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9686824083328247, "perplexity": 2312.5738448850943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115900160.86/warc/CC-MAIN-20150124161140-00203-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.eng-tips.com/viewthread.cfm?qid=295249
# Definition for station blackout

## Definition for station blackout

(OP) Does anybody know the definition for station blackout for a multi-unit station? Thanks

### RE: Definition for station blackout

Besides the obvious, what are you driving at? rmw

### RE: Definition for station blackout

It is one of those AW! things and it's not shucks. A short definition I've seen is that it is the loss of all distributed power within the battery limits of the plant. I've been involved in 3 total blackouts, not in a nuclear plant but in a chemical plant that makes synthetic fiber. From the amount of chaos this caused in our plant, I can imagine what it would be like in a nuclear plant. In our plant we also have a time constraint to get polymer processes back on line. If this doesn't happen within 45 minutes, a unit has to be overhauled at considerable cost.

### RE: Definition for station blackout

(OP) http://definitions.uslegal.com/s/station-blackout-energy/

The above link is what I found on the net. Now consider a multi-unit station that has non-environmentally-qualified standby generators (SGs) and 2 environmentally qualified emergency power generators (EPGs). A blackout would mean losing all reactors and the ability to transfer power between them; the SGs would be down, the EPGs would be unavailable, and the outside grid would be unavailable. That would leave only the power from the batteries and the AC power supplied by the inverters from the batteries. Is that right? By the way, this is for a CANDU plant and with regard to WANO. Thanks

### RE: Definition for station blackout

You've described the "function" of a station blackout adequately in a legal sense. But, big "but", this is a nuclear station, where the "regulation" terms used by the nation's agencies define policy and requirements. Then designs and procedures must be made based on the "definition" chosen, not on reality or the effects of three or four mutual "unforeseen" common catastrophes.

A station blackout happens when there is no electrical power available inside the transformer yard. A station brownout happens when there is insufficient reliable power of the right voltage, at the right capacity, capable of running for the right amount of time inside the fence. Make sense?

You can write elaborate definitions of a station blackout all you want, but if you didn't anticipate an earthquake wiping out the regional power, a tsunami wiping out the plant's backup power, backup fuel, and backup power distribution network to pumps and equipment, and the plant itself is automatically being shut down by a Cat 9 earthquake, then your plant still has no power. If the pump you need to run needs 4160 volts, but you have only a generator trailer putting out 440 volts, you still have a station power problem - regardless of whether you call it a station blackout or a station screwup or a station problem.
### RE: Definition for station blackout

As racookpe1978 said: if the 4160-volt motors of the pumps you need to run were flooded out by tsunami seawater right up to the top bearings, it is irrelevant whatever auxiliary power generators are brought in - the motors are effectively unserviceable until removed and replaced totally. Each weighs in at several tons. That was the fate of the megawatt-class seawater pumps on the outdoor platform at Fukushima, affecting all the reactors simultaneously. Call it a blackout if you like; it was the seawater flooding of the electrical auxiliary plant, including the diesel generators (each several MW), that sealed the fate at Fukushima. rasevskii

### RE: Definition for station blackout

frankiee, I can't answer your question for CANDU and WANO, but here's the definition from Title 10 of the US Code of Federal Regulations, Section 50.2:

> Station blackout means the complete loss of alternating current (ac) electric power to the essential and nonessential switchgear buses in a nuclear power plant (i.e., loss of offsite electric power system concurrent with turbine trip and unavailability of the onsite emergency ac power system). Station blackout does not include the loss of available ac power to buses fed by station batteries through inverters or by alternate ac sources as defined in this section, nor does it assume a concurrent single failure or design basis accident. At single unit sites, any emergency ac power source(s) in excess of the number required to meet minimum redundancy requirements (i.e., single failure) for safe shutdown (non-DBA) is assumed to be available and may be designated as an alternate power source(s) provided the applicable requirements are met. At multi-unit sites, where the combination of emergency ac power sources exceeds the minimum redundancy requirements for safe shutdown (non-DBA) of all units, the remaining emergency ac power sources may be used as alternate ac power sources provided they meet the applicable requirements. If these criteria are not met, station blackout must be assumed on all the units.

You might find something similar at the Canadian Nuclear Safety Commission's website: http://nuclearsafety.gc.ca/eng/

Patricia Lougheed

****** Please see FAQ731-376: Eng-Tips.com Forum Policies for tips on how to make the best use of the Eng-Tips Forums.
2020-10-29 20:27:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33651047945022583, "perplexity": 5444.380246669774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00384.warc.gz"}
https://math.stackexchange.com/questions/2300606/combinatorics-and-new-bose-einstein-statistics-problem
# Combinatorics and new Bose-Einstein Statistics Problem

Can you check my solution? Is this answer correct? As there are four indistinguishable particles whose energies must add up to $4E_0$, I am considering the following partitions:

- (4,0,0,0) implies C(17,1) = 17
- (3,1,0,0) implies C(10,1)*C(2,1) = 20
- (2,2,0,0) implies C(5,1)*C(5,1) = 25
- (2,1,1,0) implies C(5,1)*C(2,1)*C(2,1) = 20
- (1,1,1,1) implies C(2,1)*C(2,1)*C(2,1)*C(2,1) = 16

Finally, 17 + 20 + 25 + 20 + 16 = 98.

Winther, who is a postdoc in theoretical astrophysics at the ICG in Portsmouth, gave me a "translation" of the problem:

Translating the problem to a more real-world scenario might help: there are four wallets in a bag and each wallet contains from zero to four $1 notes. The total money in the wallets combined is $4. Your questions in terms of this analogy: "Is the energy of each particle the same?" = "Do we have the same money in all wallets?" Not necessarily. "Some of the energy of the particles can cancel" = "some of the notes cancel each other". No, they can't. "Are we applying Pauli's principle?" No, bosons do not satisfy this principle.

Here is the problem:

Bose Statistics Problem 5) A physical system consists of four identical particles. The total energy of the system is $4E_0>0$. Each of the particles can have an energy level equal to $kE_0$ for $k\in\{0,1,2,3,4\}$. A particle with energy $kE_0$ can occupy any one of the $k^2+1$ distinct energy states at that energy level. How many distinct energy configurations can the system have? [In statistical mechanics the particles are said to obey Bose-Einstein statistics.]

After my "research" on many physics websites, I found this:

Consider an energy level $\varepsilon_i$ with degeneracy $g_i$, containing $n_i$ bosons. The states may be represented by $g_i-1$ lines, and the bosons by $n_i$ circles; distinguishable microstates correspond to different orderings of the lines and circles. For example, with $9$ particles in the $8$ states corresponding to a particular energy, a particular microstate might be: (illustration omitted)

The number of distinct orderings of lines and circles is: $$t_i = \frac{(n_i+g_i-1)!}{n_i!(g_i-1)!}.$$

A particular distribution has a specified number of particles $n_i$ within each of the possible energy levels $\varepsilon_i$. The total number of microstates for a given distribution is therefore: $$t(\{n_i\}) = \prod_i \frac{(n_i+g_i-1)!}{n_i!(g_i-1)!}.\tag 8$$

I also found a couple of Bose-Einstein statistics problems:

- A combinatorics problem related to Bose-Einstein statistics
- Indistinguishable particles: obeying "Bose-Einstein Statistics"

- Which part are you lost with? Which parts do you understand? – Mark May 28 '17 at 21:15
- This is the stars and bars problem. Have a look here: en.wikipedia.org/wiki/Stars_and_bars_(combinatorics) – jvdhooft May 28 '17 at 21:15
- Please type out images and use MathJax. – Em. May 28 '17 at 21:39
Consider now that the letters are not all different, and in fact that you have only two types of letters, say $n_A$ $A$'s and $n_b$ $B$'s such that $n_A+n_B=N$. Then $N!$ is an overcounting of all the possible distinct words that you can form (try this with, say $3$ $A$'s and $2$ $B$'s so that you can convince yourself of this fact). By the division principle, the total number of words of length $N$ that you can form with $n_A$ $A$'s and $n_b$ $B$'s is $$\frac{(n_A+n_B)!}{n_A!n_B!}.$$ Now it is easy to say that there are $n_A$ bosons and $n_B+1$ boxes (you need essentially $n_B$ "bars" to form $n_b+1$ boxes). • First one comment, the total energy must be $4E_0>0$. This means that the particles cannot have $k=0$ (there must be at least one with $k=1$). It doesn't mean that the energy must add up to $4$. Where did you get this problem? Was it from a book? – user2820579 May 30 '17 at 17:21 • I believe that the problem says that the the total energy must add 4, this I think is incorrect. It is different to say that the energy is $4E_0$, as you state it, than to say that it must be $4E_0>0$, or $E_0>0$. Suppose each of the particles has energy $E_0$. The total energy is thus $E_{total}=E_0+E_0+E_0+E_0=4E_0$, thus $E_{total}=4E>0$ (one of the cases you stated it). Now suppose one particle has enegy $2E_0$, and all the others has energy zero, therefor $E_{total}=2E_0>0$, a valid option also. – user2820579 May 30 '17 at 18:05 • Perhaps you can ask if $E_{total}=4E_0$, or that $E_{total}>0$. These are different conditions. Can you notice the difference? – user2820579 May 30 '17 at 18:10 • If you keep $E_{total}=4E_0$, then it's ok what you are doing. – user2820579 May 30 '17 at 18:16 • Sorry, I had to correct the last comment. – user2820579 May 30 '17 at 18:17 The reasoning is almost correct, however it seems that somewhere in between occurred a confusion, which spoiled the whole calculation. Let's investigate the problem systematically. 1. Every particle, which resides on some particular energy level, has the energy of that level. 2. The energy of every $i-$th energy level is: $i \cdot E_0$. 3. The total energy of the system is a sum of energies of all particles. 4. Let's consider different distributions of particles between different energy levels (let's call such distributions "macrostates"). In the context of the given problem each macrostate can be defined by a tuple of five numbers: $(k_0, k_1, k_2, k_3, k_4)$, where $k_i$ represents an amount of particles on the $i-$th energy level. All macrostates have to satisfy the following conditions $\label{eq:1}$: $$\sum_{i=0}^{4} k_i = 4 \\ \sum_{i=0}^{4} k_i \cdot (i \cdot E_0) = 4E_0$$ 5. Every energy level contains some amount of distinct energy states (so-called "degeneration"). It means, that for every $i-$th energy level there exists a certain amount of different placements of $k_i$ particles within $g_i = (i^2 + 1)$ distinct energy states. Such variations within the macrostates are called "microstates". Let's denote $\Omega_{k_0 k_1 k_2 k_3 k_4}$ as an amount of all microstates of the macrostate $(k_0, k_1, k_2, k_3, k_4)$. 6. An amount of distinct energy configurations of the system is equal to the sum of amounts of all microstates over all macrostates: $$\sum_{(k_0, k_1, k_2, k_3, k_4)} \Omega_{k_0 k_1 k_2 k_3 k_4}$$ Where $k_0, k_1, k_2, k_3, k_4$ satisfy conditions of the problem (see equations from the paragraph 4). 7. 
Since all particles obey Bose-Einstein statistics, the total number of microstates of the macrostate $(k_0, k_1, k_2, k_3, k_4)$ can be calculated as follows (as already discussed in the question): $$\Omega_{k_0 k_1 k_2 k_3 k_4} = \prod_{i = 0}^{4} {k_i + g_i - 1 \choose g_i - 1} = \prod_{i = 0}^{4} {k_i + (i^2 + 1) - 1 \choose (i^2 + 1) - 1} = \prod_{i = 0}^{4} {k_i + i^2 \choose i^2}$$ There are 5 different macrostates $(k_0, k_1, k_2, k_3, k_4)$, which satisfy the conditions of the problem: $$(3, 0, 0, 0, 1), \\ (2, 1, 0, 1, 0), \\ (2, 0, 2, 0, 0), \\ (1, 2, 1, 0, 0), \\ (0, 4, 0, 0, 0)$$ The numbers of microstates for each macrostate are as follows: $$\Omega_{30001} = {3 + 0^2 \choose 0^2} \cdot {1 + 4^2 \choose 4^2} = 17 \\ \Omega_{21010} = {2 + 0^2 \choose 0^2} \cdot {1 + 1^2 \choose 1^2} \cdot {1 + 3^2 \choose 3^2} = 20 \\ \Omega_{20200} = {2 + 0^2 \choose 0^2} \cdot {2 + 2^2 \choose 2^2} = 15 \\ \Omega_{12100} = {1 + 0^2 \choose 0^2} \cdot {2 + 1^2 \choose 1^2} \cdot {1 + 2^2 \choose 2^2} = 15 \\ \Omega_{04000} = {4 + 1^2 \choose 1^2} = 5 \\$$ Hence the total number of distinct energy configurations is: $$Z = \Omega_{30001} + \Omega_{21010} + \Omega_{20200} + \Omega_{12100} + \Omega_{04000} = 17 + 20 + 15 + 15 + 5 = 72$$ Also, with the presented calculations in mind, it turns out that the most probable macrostate is $(2,1,0,1,0)$, with probability $P_{21010} = {\Omega_{21010} \over Z} = {20 \over 72}$.
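As a quick independent check of this count (my own addition, not part of the original thread), the following brute-force Python script enumerates every multiset of four single-particle states, where a state is a pair (level $k$, one of its $k^2+1$ substates), and counts those whose levels sum to $4$. It should print 72, matching the answer above.

# Brute-force verification of Z = 72 (added check, not from the thread).
from itertools import combinations_with_replacement

# All single-particle states: (energy level k, substate index),
# with k^2 + 1 substates at level k, for k = 0..4.
states = [(k, s) for k in range(5) for s in range(k * k + 1)]

# Bosons are indistinguishable, so a system configuration is a
# multiset of 4 states; keep those with total energy 4 * E0.
count = sum(1 for combo in combinations_with_replacement(states, 4)
            if sum(k for k, _ in combo) == 4)
print(count)  # prints 72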
2019-11-12 18:18:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9084606766700745, "perplexity": 335.1779780491986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665726.39/warc/CC-MAIN-20191112175604-20191112203604-00277.warc.gz"}
https://mathsmadeeasy.co.uk/gcse-maths-revision/multiplying-fractions-revision/
# Multiplying Fractions Worksheets, Questions and Revision

## What you need to know

To multiply two fractions together, simply multiply the numerators together, and then multiply the denominators together.

Example: Evaluate $\dfrac{3}{4}\times \dfrac{2}{11}$. Write your answer in its simplest form.

The result of multiplying two fractions together will be a fraction. The numerator will be the product of the two numerators, and the denominator will be the product of the two denominators. $\dfrac{3}{4}\times \dfrac{2}{11}=\dfrac{3\times 2}{4\times 11}=\dfrac{6}{44}$ Cancelling out a factor of two from top and bottom, we get the answer (in its simplest form) to be $\dfrac{6}{44}=\dfrac{3}{22}$

Example: Evaluate $4\times \dfrac{3}{7}$

To multiply a whole number by a fraction, remember that any number divided by 1 is just itself. So, we can write $4=\dfrac{4}{1}$ Therefore, the multiplication becomes $\dfrac{4}{1}\times \dfrac{3}{7}=\dfrac{4\times 3}{1\times 7}=\dfrac{12}{7}$ You may notice that when you multiply a whole number by a fraction, the whole number is multiplied only by the numerator. If you feel confident, you can just go right ahead and multiply the number by the numerator and skip the intermediate steps.

Example: Evaluate $3\frac{1}{4}\times\dfrac{2}{5}$.

To multiply a mixed number by a fraction, first convert the mixed number to an improper fraction. $3\frac{1}{4}=\dfrac{(3\times4)+1}{4}=\dfrac{13}{4}$ Now we can do the multiplication as usual. \begin{aligned}3\frac{1}{4}\times\dfrac{2}{5}&=\dfrac{13}{4}\times\dfrac{2}{5} \\ &=\dfrac{13 \times 2}{4\times 5}=\dfrac{26}{20}\end{aligned} This fraction could be simplified, but as the question doesn't ask us to, there's no need.

Note: if negative numbers are involved, the normal rules for multiplying signs apply:

1. If you multiply one positive fraction with one negative fraction, the answer will be negative.
2. If you multiply two negative fractions together, the answer will be positive.

### Example Questions

We have to multiply across the top and multiply across the bottom. $\dfrac{3}{8}\times\dfrac{3}{10}=\dfrac{3\times 3}{8\times 10}=\dfrac{9}{80}$

Writing $9$ as $\frac{9}{1}$, we get \begin{aligned}9\times\dfrac{5}{12}&=\dfrac{9}{1}\times\dfrac{5}{12} \\ &=\dfrac{9 \times 5}{1\times 12}=\dfrac{45}{12}\end{aligned} Now, we must simplify it. Cancelling out a factor of 3 from top and bottom, we get $\dfrac{45}{12}=\dfrac{15}{4}$ This cannot be simplified further, so we are done.

We are multiplying a positive number by a negative number, so the answer should be negative. Multiplying the numerators and denominators as usual, we get $-\dfrac{8}{9}\times \dfrac{5}{4}=-\dfrac{8\times 5}{9\times 4}=-\dfrac{40}{36}$ We could simplify this fraction but the question does not require us to, so we are done.
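If you want to check answers like the ones above programmatically, Python's built-in fractions module reproduces the worked examples (this snippet is an illustration added here, not part of the revision notes; note that Fraction always reduces to simplest form automatically).

from fractions import Fraction

print(Fraction(3, 4) * Fraction(2, 11))   # 3/22
print(4 * Fraction(3, 7))                 # 12/7
# The mixed number 3 1/4 is (3*4 + 1)/4 = 13/4:
print(Fraction(13, 4) * Fraction(2, 5))   # 13/10, i.e. 26/20 simplified
print(Fraction(-8, 9) * Fraction(5, 4))   # -10/9, i.e. -40/36 simplified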
2020-03-28 20:12:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 18, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980619490146637, "perplexity": 706.3625835772663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00355.warc.gz"}
https://www.kybernetika.cz/content/1994/4/387
Abstract: In this paper we consider the feedback action for reachable singular systems and its limit phenomena. First we introduce the invariants which parametrize the feedback orbits. These results were published in part a few years ago independently by different authors (see [GluL90, KaHe89, MKZ90]). Then we characterize the orbit closures in the space of all reachable systems in terms of these invariants. This contributes to answering the question: what might a system look like if it occurs as the limit of a sequence of feedback transformations applied to a given system?

Classification: 93C05, 93B52, 34H05, 93C35, 93B05
2023-03-31 00:55:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8403673768043518, "perplexity": 380.2242533160313}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00775.warc.gz"}
https://tug.org/pipermail/texhax/2010-December/016353.html
# [texhax] includegraphics and image scaling

Peter Davis pfd at pfdstudio.com
Sat Dec 11 04:28:21 CET 2010

Anyway, thank you all for the information and opinions. I wound up writing some code to check the size and resolution of the JPEG and compute the appropriate size to specify in the \includegraphics command.

Thanks!

-pd
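The post does not include the code itself, but the idea is straightforward. Here is a hypothetical sketch of the approach (not the author's actual code; it uses Pillow and assumes the JPEG carries DPI information, falling back to 72 dpi otherwise):

# Hypothetical sketch: compute a physical width for \includegraphics
# from a JPEG's pixel size and resolution, using Pillow.
from PIL import Image

def includegraphics_line(path):
    with Image.open(path) as im:
        width_px, _ = im.size
        dpi_x, _ = im.info.get("dpi", (72, 72))  # fall back to 72 dpi
    width_in = width_px / float(dpi_x)
    return "\\includegraphics[width=%.2fin]{%s}" % (width_in, path)

print(includegraphics_line("figure.jpg"))  # "figure.jpg" is a placeholder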
2021-03-02 05:37:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9687235951423645, "perplexity": 13994.004022346757}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363217.42/warc/CC-MAIN-20210302034236-20210302064236-00195.warc.gz"}
https://gamedev.stackexchange.com/questions/87302/random-object-in-array
# Random Object in Array

I am using Unity. I have created eight different types of objects, stored in an array, and I am displaying them on the scene in a grid pattern with random order. The first four objects in the array are red. The player needs to destroy all four red objects to go to the next level. The code I've written so far uses the Random.Range method, which chooses objects completely randomly. Sometimes it generates a grid that doesn't contain four red objects, so my player can't destroy them and proceed to the next level. How do I randomly generate this grid so that it is guaranteed to have exactly four red objects? Here is my code.

public int gridWidth = 0;
public int gridHeight = 0;
public GameObject[] facePrefab = new GameObject[8];

void Awake() {
    gridHeight = Random.Range (3, 5);
    CreateGrid(gridWidth, gridHeight);
}

void CreateGrid(int numX, int numY) {
    for (int x = 0; x < gridHeight; x++) {
        for (int y = 0; y < gridWidth; y++) {
            GameObject go = Instantiate (facePrefab[Random.Range(0, facePrefab.Length)]) as GameObject;
        }
    }
}

• I think I understand what you mean... Why not first create the red objects and place them at random positions in your grid and then populate the rest with random objects? – Savlon Nov 15 '14 at 13:32
• I've edited the question in an attempt to make it more readable. Please fix any errors I made in interpreting your original question. – Gregory Avery-Weir Dec 2 '14 at 16:39

## 1 Answer

As Savlon suggested in comments, create the red objects first. You probably want to split your array into red prefabs and non-red prefabs. Your CreateGrid function will look like the following pseudocode:

1. Repeat four times: place a random red object at a random point on the grid.
2. Repeat for each grid square: if the grid square is empty (does not contain a red object), place a random non-red object.

This will guarantee that you have exactly four red objects in the grid, placed randomly, and that the rest of the grid is filled with a random assortment of non-red objects.
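For concreteness, here is a runnable sketch of that algorithm in Python rather than Unity C# (an illustrative addition; the strings stand in for prefabs): red objects go onto distinct random cells first, and every remaining cell is then filled with a random non-red object.

# Sketch of the two-phase placement: exactly n_red red objects, rest random.
import random

def create_grid(width, height, red_prefabs, other_prefabs, n_red=4):
    grid = [[None] * width for _ in range(height)]
    # Phase 1: put n_red random red objects on distinct random cells.
    cells = [(r, c) for r in range(height) for c in range(width)]
    for r, c in random.sample(cells, n_red):
        grid[r][c] = random.choice(red_prefabs)
    # Phase 2: fill every empty cell with a random non-red object.
    for r in range(height):
        for c in range(width):
            if grid[r][c] is None:
                grid[r][c] = random.choice(other_prefabs)
    return grid

for row in create_grid(4, 4, ["red1", "red2"], ["blue", "green", "yellow"]):
    print(row)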
2019-02-20 21:53:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2294725775718689, "perplexity": 1533.3598452321037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496694.82/warc/CC-MAIN-20190220210649-20190220232649-00636.warc.gz"}
https://blog.xlk.me/2017/08/22/understanding-general-recursion-in-coq/
# Understanding Fix_eq in Coq

## The problem

When we try to prove that a general recursion defined in Coq is actually correct, we want to show that the fixpoint gives exactly the same result if we unfold it once. In words of Coq, this is the statement of Fix_eq, roughly forall x : A, Fix Rwf P F x = F x (fun (y : A) (p : R y x) => Fix Rwf P F y), where (well_founded R) gives a function that returns the proof that elements of type A are accessible (in terms of R). When proving the theorem, we would encounter someplace where we need to prove that the function F will always give the same result, even if the proof supplied to it for forall y, R y x -> P y is different in the sense of Coq but identical in the sense of functional extensionality. However, it's strange that we need to prove such a thing when we haven't touched it when defining the fixpoint through Fix. The reason lies behind the definition of Fix in the standard library.

## Analysis

Internally, Fix is given with the help of Fix_F, whose type is roughly Fix_F : forall x : A, Acc R x -> P x. The only difference is that the well_founded R argument is omitted, and instead a proof of Acc R x is required for each x : A. I've stated that well_founded R is a function that gives the proof of Acc R x for any x : A, which means these two terms are the same. Then comes the clue: the accessibility property adds to the complexity. After analyzing the standard library, I found the reason: in the definition of Fix_F the proof is obtained through the lemma Acc_inv which, as the name suggests, obtains the proof of Acc for y from the proof of Acc for x, while Fix is simply defined by (fun (x : A) => Fix_F (Rwf x)), which does not itself perform any recursion. However, the theorem we want to prove is about the recursion of Fix. Obviously, the proof will fail without some additional hypothesis. On the left-hand side of the goal, the hypothesis Rwf is used only once; afterward the proofs of Acc are obtained by Acc_inv. On the right-hand side the proof for y is obtained from Rwf. This difference is the very reason for the introduction of functional extensionality. However, this usually requires little effort, as functional extensionality is a quite straightforward property for most functions.
2020-02-21 10:19:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7828851342201233, "perplexity": 553.3008439555814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145500.90/warc/CC-MAIN-20200221080411-20200221110411-00106.warc.gz"}
http://thevirtuosi.blogspot.com/2010/04/cell-phone-brain-damage-or-not.html
## Tuesday, April 27, 2010

### Cell Phone Brain Damage... or not.

For better or worse, cell phones are a part of our lives. I say this because they can be convenient when needed, but there's nothing more annoying than a dropped call or an inconveniently timed ring tone. Since many people carry their phones with them daily, there have been a number of studies asking what the long-term health effects of cell phone usage are. While the controversy rages among medical researchers, I decided to find my own answers by doing a calculation based on the power output of a simple hand set.

The key physical idea in this calculation is Faraday's law. You can check Wikipedia for a lengthy article describing it, but I'm more interested in getting down to business. Now, from the antenna of every cell phone, an electromagnetic wave is emitted. This EM wave has magnetic and electric components which oscillate back and forth across space and through time. To quantify the effects of this EM wave on a human being, I'm going to focus on just the brain. In the simplest case, I would imagine it as a homogeneous blob. Unfortunately, this model is too simplistic and loses all the interesting physiology. Instead, I'm going to treat the brain as a collection of loops made by neurons. In this model, I imagine that every neuron is connected to many other neurons by electrically conducting dendrites and axons. Some neurons will be connected to each other in two places, thus creating a small continuous loop. This loop has some area $A$ and as the EM wave from the cell phone passes by, it induces an EMF a la Faraday's law: $\epsilon = \frac{ d\phi}{dt}.$ Well, since the flux $\phi = \textbf{B} \cdot \textbf{A} \approx \pi r^2 E / c,$ and the electric field is an oscillatory function $E = E_0 \cos \omega t,$ we have $\epsilon = \frac{\pi r^2 \omega E_0}{c} \sin \omega t,$ where $r$ is the radius of the neuron loop, $\omega = 2 \pi f$ is the angular frequency of the cell phone carrier waves and $c$ is the speed of light. Taking the root mean square gets rid of the sine factor and gives $\epsilon_{rms} = \frac{ \pi r^2 \omega}{c} E_{rms}.$ So, if I know the RMS amplitude of the EM waves coming from my cell phone, then I will be able to calculate the induced voltage in a given neuron loop. Fortunately, there's a convenient set of formulae which relates power output to the RMS amplitude: $\frac{ \langle P \rangle}{A} = I = \langle u_E \rangle c = \epsilon_0 c E^2_{RMS} \quad \rightarrow \quad E_{RMS} = \left( \frac{ \langle P \rangle}{\epsilon_0 c A} \right)^{1/2}.$ This line of gibberish relates the intensity $I$ to the power per unit area as well as the average energy density, which in turn is expressed in terms of $E_{RMS}.$ The final expression can now be substituted to find $\epsilon_{rms} = \frac{2 l^2 f}{c} \left( \frac{ \langle P \rangle}{\epsilon_0 c 4 \pi R^2} \right)^{1/2}.$ Here, I've made a few additional substitutions to get the formula in terms of real numbers. In particular, I've converted from $r,$ the radius of a neuron loop, to $l,$ the length of an individual neuron when two of them form a loop. Also, I'm using $R$ to denote the distance from the cell phone antenna to the neuron loop in your head. I think a reasonable number for this should be about 10 cm, yeah? Okay, now here comes the fun part. I asked around to look at people's cell phones, and I found that roughly, they all run at 1 W of power.
Furthermore, according to an introductory physics text, cell phones operate at around $10^9 Hz$ and, from a little web browsing, $l = 10^{-3} \sim 10^1$ meters. Since the length of neurons seems to vary quite a bit, I've written the expression as $\epsilon = l^2 \times (360 V/m^2).$ So for two neurons about a mm long in the same vicinity, there's a pretty decent chance they'll make contact with each other at two points. Thus, the induced voltage is about 0.36 mV. On the off chance two longer neurons form a loop (taking $l \sim 10^{-2}$ meters), the induced voltage is about 36 mV. Finally, if two REALLY long neurons ($l \sim 1$ meter) make a loop (mind you, the chances of this are zero since the circular area they would form is bigger than your body!), then the induced voltage is 360 V. Electrifying!

To compare these numbers to something useful, consider that the brain is constantly sending electrical signals at around 100 mV (this is called the Action Potential). Since the most probable induced voltage (0.36 mV) is significantly smaller than the action potential (less than 1%), we can conclude that the cell phone EM waves are an essentially negligible effect. Like I said earlier, this is an extremely simple model for the brain, but it does illustrate a particular mechanism by which common technology can interfere with our biology. And yes, I trust that the calculation is correct, but since the experiments are showing a higher incidence of tumors coinciding with cell phone use, well, I'll just limit my talk time.

#### 1 comment:

1. Three thoughts: 1) Microwaves are nonionizing so the damage is not at the electron level. 2) Microwaves cause dielectric heating which can yield protein denaturation. 3) Neurons are not made of metal, nor do they conduct through electrons but rather movement of Na+ and K+. I don't see how EM waves cause the induced voltage 360 uV since I'm not sure what charges are being separated to cause the voltage. Can EM push cations across a cell membrane? Fun math though.
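To make the arithmetic in the post easy to reproduce, here is a short script (an added check, not part of the original post) that evaluates the final formula for the three neuron-loop sizes considered:

# epsilon_rms = (2 l^2 f / c) * sqrt(P / (eps0 * c * 4 * pi * R^2))
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
c = 3.0e8          # speed of light, m/s
f = 1.0e9          # carrier frequency, Hz
P = 1.0            # handset power, W
R = 0.10           # antenna-to-loop distance, m

E_rms = math.sqrt(P / (eps0 * c * 4 * math.pi * R**2))  # about 55 V/m

for l in (1e-3, 1e-2, 1.0):   # neuron lengths, m
    emf = 2 * l**2 * f / c * E_rms
    print("l = %.0e m  ->  emf = %.1e V" % (l, emf))
# prints roughly 3.6e-04 V, 3.6e-02 V, 3.6e+02 V, matching the post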
2017-04-29 09:14:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6518940329551697, "perplexity": 487.8808840355513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00014-ip-10-145-167-34.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?p=816726
# Bernoulli formula for integrals...

by eljose
Tags: bernoulli, formula, integrals

P: 501
Let the Bernoulli formula for calculating an integral be of the form: $$\int{f(x)dx}=C+\sum_{n=1}^{\infty}(-1)^{n}x^{n}\frac{d^{n}f}{dx^{n}}\frac{1}{\Gamma(n)}$$ My question is: could we calculate the integral from this series? Thanks.

Sci Advisor HW Helper P: 11,863
It's actually a series for the antiderivative. I don't see why not...

Daniel.

P.S. It's kinda mysterious that this formula involves Euler's gamma function and not Bernoulli's numbers.
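As a side note (an addition here, not part of the thread): the classical antiderivative series obtained by repeated integration by parts is $$\int f(x)\,dx = C + \sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^{n}}{n!}\frac{d^{n-1}f}{dx^{n-1}},$$ and its partial sums are easy to test numerically, for instance with SymPy:

# Numerical check of the classical integration-by-parts series for
# f(x) = exp(x); at x = 1 the partial sums should approach e - 1 = 1.71828...
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)

def partial_sum(N):
    return sum((-1)**(n - 1) * x**n * sp.diff(f, x, n - 1) / sp.factorial(n)
               for n in range(1, N + 1))

for N in (3, 6, 12):
    print(N, partial_sum(N).subs(x, 1).evalf())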
2014-04-20 01:01:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909102201461792, "perplexity": 3339.467586907786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
https://jwst-pipeline.readthedocs.io/en/latest/jwst/ami_average/description.html
# Description

The ami_average step is one of the AMI-specific steps in the ami sub-package and is part of Stage 3 calwebb_ami3 processing. It averages the results of LG processing from the ami_analyze step for multiple exposures of a given target. It computes a simple average for all 8 components of the "_ami" product files from all input exposures.

# Arguments

The ami_average step does not have any step-specific arguments.

# Inputs

## LG model parameters

Data model: AmiLgModel
File suffix: _ami

The only input to the ami_average step is a list of one or more "_ami" files to be processed. These should be output files from the ami_analyze step. The input to the step must be in the form of a list of "_ami" file names. Passing data models or ASN files is not supported at this time. Use the calwebb_ami3 pipeline to conveniently process multiple inputs.

# Outputs

## Average LG model parameters

Data model: AmiLgModel
File suffix: _amiavg or _psf-amiavg

The ami_average step produces a single output file, having the same format as the input files, where the data for the 8 file components are the averages from the list of input files. If the inputs in the ASN file are designated as "science", the output product type will be "_amiavg", whereas if the inputs are designated as "psf", the output product type will be "_psf-amiavg". The output file name syntax is source-based, using the product name specified in the input ASN file, e.g. "jw87600-a3001_t001_niriss_f480m-nrm_amiavg.fits".

# Reference Files

The ami_average step does not use any reference files.
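For orientation, a typical invocation from Python might look like the sketch below. This assumes the standard jwst Step.call interface and the AmiAverageStep class exported by the ami sub-package; the file names are placeholders, and exact availability depends on the pipeline version.

# Sketch of running the step directly (file names are placeholders).
from jwst.ami import AmiAverageStep

ami_files = [
    "jw87600001001_02101_00001_nis_ami.fits",
    "jw87600001001_02101_00002_nis_ami.fits",
]

# The step takes a list of "_ami" file names and returns the averaged model.
averaged = AmiAverageStep.call(ami_files)
averaged.save("jw87600-a3001_t001_niriss_f480m-nrm_amiavg.fits")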
2020-08-08 21:19:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33755022287368774, "perplexity": 3524.009618084997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738351.71/warc/CC-MAIN-20200808194923-20200808224923-00252.warc.gz"}
http://www.maa.org/sites/default/files/images/upload_library/4/vol6/Pollanen/enVision.html
# Journal of Online Mathematics and Its Applications

Volume 6. October 2006. Article ID 1305

# Interactive Web-Based Mathematics Communication

## 1. Introduction

While the Internet offers phenomenal opportunities for communication, most of the available tools for personal interaction, such as e-mail, chatrooms, instant messaging, and bulletin boards, are "text-based." Many subjects, in particular the mathematical sciences, rely heavily on symbols, visual aids and other non-textual communication, with the result that for mathematics much of the interactive potential of the Internet has thus far not been realized.

For mathematicians communicating with colleagues over the Internet, this is a minor annoyance, as PDF files can be exchanged, and the vast majority of mathematicians can read raw TeX notation, for example \int_0^1 x^2 dx, embedded in e-mail. However, mathematical communication over the Internet with students using text, for example via e-mail, can be a slow and frustrating process. First-year students often have enough trouble with the proper use of parentheses in communicating mathematical concepts. In e-mail format, where expressions must be inline, they tend to write statements like x2+1, 1/2x and sin x + y, all of which are ambiguous. Included in integrands or fractions, and combined with Greek letters and subscripts, these expressions can become incomprehensible. The problem becomes particularly acute for live interactive communication, where students must produce unambiguous equations and diagrams quickly.

Over the past decade the Internet has changed the way mathematics is taught. Many introductory math courses are delivered completely online, and even more are available in hybrid format. Since most theories of education maintain that personal interaction -- whether teacher to student, student to teacher or student to student -- is a cornerstone of learning, this presents a serious challenge for mathematics educators. Many online courses have little live interaction, and allow students to access only course outlines, notes and assignments, post to message boards and send e-mail. The more interactive online courses available today still provide only a text-based chatroom or a whiteboard, which can be inadequate for communication in a visual subject such as mathematics. Furthermore, because these technologies are cumbersome for communicating mathematical content, they tend not to be utilized by a significant portion of students.

In this article I will introduce and discuss my experiences with incorporating into teaching a new technology I have created, called enVision, that aids in communicating mathematical content. While there is significant potential for this software in online courses, in this article I will focus on my experiences with using this technology to hold anonymous online office hours in a traditional course. While office hours are a valuable and unique teaching tool, it is a common observation that students in introductory mathematics courses do not attend them. Many feel confused and intimidated by mathematics and do not want to reveal how little they think they know. This in itself is a communication barrier that impedes learning in mathematics and presents a missed opportunity for both student and instructor. In my experience, enVision has helped make me, as a math educator, more accessible to students, and has helped students reduce or overcome any shyness or math phobia they may have.

## 2. The enVision Platform
### Overview

The enVision platform consists of two packages, a front end -- the enVision Web interface -- and a back-end server, called MathServer. The enVision interface is what a typical user will see and interact with. It loads into a standard Web browser with no installation, and helps users input and communicate their ideas. It has two communication components: a shared workspace, where any number of participants interact with each other in a mathematical discussion, and a text chat window, where most of the other discussion occurs. The shared work space is essentially a whiteboard drawing program that has special drag-and-drop resizeable math symbols, as well as a panel of special math characters that can be placed with ease onto the whiteboard. Everything that is written on the whiteboard and the chat area is visible to all participants. MathServer is the application that relays the information between different users to ensure that their enVision interfaces are synchronized, thus creating and maintaining a shared workspace.

Figure 1. Student Mode view of enVision interface (Internet Explorer browser)

The core philosophy of enVision is simplicity and ease of use, i.e., allowing for the communication of mathematical content as easily and quickly as possible with a minimal learning curve, for both student and instructor, and minimal hardware and software requirements. The emphasis is on communication in a quick and intuitive fashion, rather than having typeset letter-perfect documents. In order to keep the program platform independent and accessible to low-bandwidth users, enVision supports a core set of functionalities. It makes it possible to load JPEG and GIF images, which could easily be extracted from applications like Maple, or from PDF documents, with freely available programs like ImageMagick. Likewise, it is assumed that if video or audio are desired features, appropriate packages could be loaded simultaneously. Although enVision is easy to install and maintain, it also has advanced features that allow for integration and customization into existing environments such as Blackboard and WebCT. In short, the program can be easily installed and running in minutes as is, but it is also flexible and can be customized by those who are inclined to do so.

### enVision Web Interface

Most of the interface options are accessed through the orange/pink button panel on the left-hand side of the interface or the tabbed panels on top of the applet. For more information, please read the detailed description of the enVision interface.

Figure 2. Instructor Mode view of enVision interface displaying PDF (Linux with Firefox browser)

### MathServer

MathServer is a simple program that runs on the web server from which the enVision applet is downloaded. Only the system administrator installing MathServer would have to be concerned with it, as it is invisible to other users. It has two functions: to receive and distribute any content input by enVision users which is then displayed for all users on the whiteboard, and to load and save sessions. MathServer must be running in order for different enVision users to communicate with each other.

## 3. Other Web Technologies to Display Mathematics

While the goal of enVision is the easy input and live communication of mathematics over the Internet, it is important to discuss how it relates to the many existing technologies to display mathematics on the Web.
The most widespread solution to displaying mathematics on the Web is undoubtedly LaTeX2HTML and variants such as those used by the online encyclopedia Wikipedia and the interactive mathematics server WIMS. With this solution, mathematical symbols and entire equations are displayed as individual images. The strengths of LaTeX2HTML are that TeX-style notation is the standard in mathematical typesetting and is well-known in the mathematical community, that TeX is fairly easily human-readable, and that existing documents can easily be put on the Web and displayed in older browsers without plug-ins. Of course, there are many serious disadvantages: having many images on the page can make loading slow, and images are difficult to edit and are not scalable, and so do not render well on high-resolution devices, such as printers. There are other programs based on TeX notation, such as Java applet-based solutions like HotEQN and JavaScript-based solutions like jsMath.

The emerging standard for displaying mathematics on the Web, however, is MathML, the Mathematics Markup Language. MathML is a standard endorsed by the W3 Consortium (the official standards body of the Web), and represents the future of mathematics on the Web. It is XML-based and allows browsers to render mathematics natively in much the same way browsers render HTML. While TeX does an excellent job of encoding the presentation of an expression, the goal of MathML is to encode structural and semantic information as well. MathML is intended to:

1. Allow documents to be easily displayed in browsers and applications with different capabilities (for example, documents can be formatted for displays of different dimensions and resolutions).
2. Allow equations to be easily imported and exported between different applications (for example, between a browser and Maple).
3. Make mathematics on the Web more accessible to the visually impaired.

However, while human-legible, MathML is fairly verbose. For the purposes of comparison, the expression $x^2 + 3x + 1$ can be written in MathML as

<mrow>
  <msup><mi>x</mi><mn>2</mn></msup>
  <mo>+</mo>
  <mrow><mn>3</mn><mo>&InvisibleTimes;</mo><mi>x</mi></mrow>
  <mo>+</mo>
  <mn>1</mn>
</mrow>

while the equivalent TeX expression is simply $x^2+3x+1$. There are, however, both free and commercial tools that aid in the easier input of MathML. For example, ASCIIMathML is a free JavaScript solution that allows simple calculator-type expressions, such as x^2+3x+1, entered in HTML pages to be dynamically displayed using MathML. A variant, LaTeXMathML, allows the same for LaTeX-type expressions. WebEQ from Design Science, meanwhile, is a commercial product that includes a MathML equation editor and a server to create MathML-based Web applications such as message boards.

As MathML does not specify its own "input method" for users to easily input mathematics, it will not by itself replace the need for enVision. It will be important for future versions of enVision to attempt to incorporate some support for MathML to allow the import and export of content and help enVision be accessible to the visually impaired. However, as the complexity of MathML dwarfs that of enVision, this remains a distant goal.

One problem in adopting MathML standards for rendering equations in enVision is that we actually want enVision to display equations "exactly" the same way, regardless of the platform.
Otherwise, it would be difficult for users to overlay position-dependent information like that which comes from a tablet pen, for example when drawing arrows between equations by hand. How enVision is currently designed to work is closer to the way PDF documents are displayed: they are rendered very consistently between operating systems and devices such as printers. Variability in rendering between systems has already been a challenge in the design of the current version of enVision. Browser display technologies such as Java, Flash, and SVG rely heavily on the system fonts of the user, which may not be the same from system to system. This is why enVision uses the fixed-spaced Courier font on the whiteboard (this font is actually slightly different in the Windows, Linux, and Mac operating systems, but some other system fonts are different enough to be unusable).

## 4. Software Specifications

### enVision Interface

The enVision Web interface is a Java applet which the instructor and the student load by visiting the appropriate pages with their browsers. The applet is designed to be very small, approximately 58KB (about the size of an average web page), so that it can be loaded without delay by large numbers of students simultaneously and by users with dial-up connections. It is designed to be as platform-independent as possible, so it will run on any browser that supports at least Java 1.1, including the outdated Java Virtual Machine shipped with a default Internet Explorer installation. Thus, virtually any student with access to the Internet would be able to use the enVision interface. However, if you download the latest Java plug-in, it will run in virtually any other browser. To date, it has been tested on numerous configurations of Internet Explorer, Netscape, Mozilla Firefox, Safari and Opera, as well as various Windows, Mac, and Linux operating systems, without any major issues arising.

If MathServer is running, the instructor must ensure that the enVision.jar and HTML page(s) that load it are accessible on the Web server. Then, all the instructor has to do to log on is load the appropriate Web page into his/her browser. For students, all that is required is to load the appropriate Web page into their browsers. Note, however, that the instructor can make a number of customizations by changing the applet parameters in the HTML page that loads enVision.

#### Optional Applet Parameters:

<param name=host value=192.0.2.5>
<!-- This sets the IP address that enVision will connect to (it is a good idea to set it). -->
<param name=port value=8000>
<!-- This sets the port to the given value. The default is 8080. -->
<param name=instructor value=true>
<!-- This sets the applet to instructor mode. The default is student mode. -->
<!-- This sets the username (useful if you have your own authentication script). -->
<param name=announce value=false>
<!-- This turns off the log in/out announcements in the Chat Window (especially useful in large classes). -->

Instructor and student modes are similar, although they must be accessed from separate HTML pages. There are, however, several noteworthy differences. Students cannot load and save sessions and images, but the instructor could let students access this mode to load and save sessions after class. Instructors also have the ability to lock the whiteboard, using the <F12> key, so that no student can write on the board. If this is done, other instructors may still write on the board.
In this mode, both students and instructors can still write to the chat window. The locked status is displayed on the title bar. Using the <F12> key when in locked mode would then unlock the whiteboard.

The images that are used in the More Symbols panel are located in the enVision.jar Java Archive as files 0.gif to 119.gif, numbered in the order in which they appear on the panel. The instructor could customize them, so as to use different symbols instead, by replacing the existing GIF file with another.

### MathServer

MathServer is a platform-independent Java application. It must be running on the Web server from which the user's browser loads the enVision applet. To start MathServer, the instructor must have permission to run processes on the Web server or have the system administrator start it. Provided Java is installed on the server, MathServer is started by issuing the command "java -jar MathServer.jar" followed by an optional port number. (The default port is 8080.) Any firewall software running on the Web server must be set to allow access to the port that MathServer is set to listen to. When the enVision applet is loaded and the server is running, "** Connected to MathServer **" should appear in the Chat Window. If it does not appear, there is either a problem with the firewall or the IP/port settings in the applet tags.

If a subdirectory called "files" exists, the server will load and save sessions in that directory using the .mth extension. Note that if you wish to load images (GIF and JPEG formats are supported), they must be located in the directory (or subdirectory) from which the enVision applet was loaded (not the "files" subdirectory).

### External Applications

As enVision only allows for the loading of images in GIF and JPEG format, one could also consider installing an image conversion tool. As long as the instructor can transfer the resulting images to the Web server, these tools can run on any machine that the instructor has access to. ImageMagick is a free cross-platform utility useful for obtaining and manipulating images for use in enVision from other programs, such as Maple. For example, the command "convert file.pdf[8] file.gif" would extract page 9 (as it assumes pages start from the number 0) of a PDF file and save it as a GIF file, and the command "import file.gif" can be used to create a GIF image from a region of the screen, and thus a screenshot of virtually any application can be loaded into enVision.

## 5. Availability

enVision v1.1 is freeware. You can download a copy, try a demo version, and find documentation and updates at the enVision web site.

Figure 3. Overview of enVision basic installation

## 6. Learning Outcomes

I have used enVision successfully in large and small class settings, at both large and small universities. I created the first version of enVision in 2000, while I was a teaching assistant at the University of Toronto. I tried to make myself as accessible to students as possible. However, at the time, I taught multiple sections of introductory calculus tutorials, including some in the evenings (students in these sections were often employed full-time during the day) and some at a distant suburban campus (students would have difficulty meeting me downtown at my office). A live online office hour was the best practical option in order for my students and myself to be able to communicate.
More recently, I have been teaching at Trent University, a smaller liberal arts university, and have been using enVision in a second-year discrete mathematics class of about 25 students. As the course is primarily for computer science majors, students generally feel apprehensive about mathematics. Since enVision was originally conceived as a tool for large or multi-section classes, I have only recently started to experiment with it in this course. I currently supplement my regular office hours with one extra online office hour, during which students can log on anonymously (i.e., using any name they wish). The results have been impressive. Communication has improved; at times, up to half the students in the class attend the online office hour simultaneously. Based on my experiences, anonymous online office hours are advantageous for addressing the following issues:

• Overcoming Math Phobia: Math phobia is a widespread problem, especially among non-mathematics majors. A common symptom is extreme mathematical timidity. Only a minority of students ask questions in a math class or come to a traditional office hour. When students are surveyed about the use of enVision, one of their most frequent comments concerns the value of being able to ask questions anonymously.

• Greater Participation: Forming opinions, whether correct or incorrect, is an important part of the learning process. Aside from the fact that more students attend online office hours, anonymity leads to more mathematical discussion and allows me to ask students for responses without their fearing that I am singling them out. Students are far more likely to interpolate and add value to online discussions.

• Greater Accessibility: Students lead complex lives. Due to conflicting commitments with school, work, and family, and long-distance commutes, many students have difficulty attending regular office hours. I tend to hold online office hours late in the evening, at times when my students are most likely to be studying.

• Efficient Use of Time: My online office hours are scheduled only by start time, and thus students tend to be punctual. If there are only a few questions, I log out after answering them, so a significant time commitment is not required on the part of the instructor. Although the interface is not a complete substitute for face-to-face interaction, it does have its advantages. While I wait for students to respond to questions I have posed, I find it convenient to be able to access other instructor resources, such as books, notes, etc., to search for more examples or to plot a Maple diagram for them to consider. Finally, all the sessions can be saved and archived, and so answers to common questions do not have to be repeated and are made available to all my students (and could perhaps be made available to students in future years as well).

• Addressing Different Learning Styles: Some students learn by writing things down. However, others learn better by being able to concentrate on the discussion and replay the saved sessions later rather than furiously scribbling what is on a professor's office blackboard during a traditional office hour.

• Improved Classroom Atmosphere: I find that another benefit of strengthening dialogue outside the classroom is a more relaxed, open, and encouraging atmosphere in the classroom, as more questions are asked in class as well.

### Survey Results

An in-class survey was conducted to gauge the effectiveness of online office hours.
It was conducted six weeks into a 12-week semester of a second-year discrete mathematics course at Trent University, after four weekly online office hours had been held. Students were asked if they attended the online sessions and were encouraged to comment. Nineteen surveys were completed, and 47% of students said that they attended some of the hours. A few students who did not attend added comments that they would like to have attended but were unavailable at the time. Comments were positive regarding online office hours. Some notable comments included the following:

• "It is convenient, as you can log in from anywhere."
• "I live off-campus and this raises my chances of attending 'office hours' of any sort."
• "I like the idea of anonymity amongst everyone involved."
• "Some people don't like asking questions in class, and these online office hours help."
• "The idea is great, it is a chance for students to ask questions without fear of looking stupid in front of the teacher or classmates."

### Conclusion

Although interactive Web-based communication is not a substitute for face-to-face interaction, enVision does help facilitate online mathematical discussion. In particular, as today's students tend to be relaxed and comfortable with e-mail and instant messaging, they seem to be open to online office hours, and holding virtual office hours seems to have a number of pedagogical benefits when appropriate communication tools are available. This technology could also be used to encourage more dialogue between students in a student-helping-student model of learning, especially in a completely Web-based course.
2016-02-12 18:43:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49027276039123535, "perplexity": 1945.9674874188172}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165070.40/warc/CC-MAIN-20160205193925-00120-ip-10-236-182-209.ec2.internal.warc.gz"}
http://dask.pydata.org/en/latest/dataframe-design.html
# Internal Design

Dask dataframes coordinate many Pandas DataFrames/Series arranged along an index. We define a dask.dataframe object with the following components:

• A dask graph with a special set of keys designating partitions, such as ('x', 0), ('x', 1), ....
• A name to identify which keys in the dask graph refer to this dataframe, such as 'x'.
• An empty pandas object containing appropriate metadata (e.g. column names, dtypes, etc…).
• A sequence of partition boundaries along the index, called divisions.

Many dataframe operations rely on knowing the name and dtype of columns. To keep track of this information, all dask.dataframe objects have a _meta attribute which contains an empty pandas object with the same dtypes and names. For example:

>>> df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
>>> ddf = dd.from_pandas(df, npartitions=2)
>>> ddf._meta
Empty DataFrame
Columns: [a, b]
Index: []
>>> ddf._meta.dtypes
a     int64
b    object
dtype: object

Internally dask.dataframe does its best to propagate this information through all operations, so most of the time a user shouldn't have to worry about this. Usually this is done by evaluating the operation on a small sample of fake data, which can be found on the _meta_nonempty attribute:

>>> ddf._meta_nonempty
   a    b
0  1  foo
1  1  foo

Sometimes this operation may fail in user defined functions (e.g. when using DataFrame.apply), or may be prohibitively expensive. For these cases, many functions support an optional meta keyword, which allows specifying the metadata directly, avoiding the inference step. For convenience, this supports several options:

1. A pandas object with appropriate dtypes and names. If not empty, an empty slice will be taken:

>>> ddf.map_partitions(foo, meta=pd.DataFrame({'a': [1], 'b': [2]}))

2. A description of the appropriate names and dtypes. This can take several forms:

• A dict of {name: dtype} or an iterable of (name, dtype) specifies a dataframe
• A tuple of (name, dtype) specifies a series
• A dtype object or string (e.g. 'f8') specifies a scalar

This keyword is available on all functions/methods that take user provided callables (e.g. DataFrame.map_partitions, DataFrame.apply, etc…), as well as many creation functions (e.g. dd.from_delayed).

## Categoricals

Dask dataframe divides categorical data into two types:

• Known categoricals have the categories known statically (on the _meta attribute). Each partition must have the same categories as found on the _meta attribute.
• Unknown categoricals don't know the categories statically, and may have different categories in each partition. Internally, unknown categoricals are indicated by the presence of dd.utils.UNKNOWN_CATEGORIES in the categories on the _meta attribute.

Since most dataframe operations propagate the categories, the known/unknown status should propagate through operations (similar to how NaN propagates). For metadata specified as a description (option 2 above), unknown categoricals are created.

Certain operations are only available for known categoricals. For example, df.col.cat.categories would only work if df.col has known categories, since the categorical mapping is only known statically on the metadata of known categoricals. The known/unknown status for a categorical column can be found using the known property on the categorical accessor:

>>> ddf.col.cat.known
False

Additionally, an unknown categorical can be converted to known using .cat.as_known().
If you have multiple categorical columns in a dataframe, you may instead want to use df.categorize(columns=...), which will convert all specified columns to known categoricals. Since getting the categories requires a full scan of the data, using df.categorize() is more efficient than calling .cat.as_known() for each column (which would result in multiple scans).

>>> col_known = ddf.col.cat.as_known()  # use for single column
>>> col_known.cat.known
True
>>> ddf_known = ddf.categorize()  # use for multiple columns
>>> ddf_known.col.cat.known
True

To convert a known categorical to an unknown categorical, there is also the .cat.as_unknown() method. This requires no computation, as it's just a change in the metadata.

Non-categorical columns can be converted to categoricals in a few different ways:

# astype operates lazily, and results in unknown categoricals
ddf = ddf.astype({'mycol': 'category', ...})
# or
ddf['mycol'] = ddf.mycol.astype('category')

# categorize requires computation, and results in known categoricals
ddf = ddf.categorize(columns=['mycol', ...])

Additionally, with pandas 0.19.2 and up dd.read_csv and dd.read_table can read data directly into unknown categorical columns by specifying a column dtype as 'category':

>>> ddf = dd.read_csv(..., dtype={col_name: 'category'})

With pandas 0.21.0 and up, dd.read_csv and dd.read_table can read data directly into known categoricals by specifying instances of pd.api.types.CategoricalDtype:

>>> dtype = {'col': pd.api.types.CategoricalDtype(['a', 'b', 'c'])}
>>> ddf = dd.read_csv(..., dtype=dtype)

## Partitions

Internally a dask dataframe is split into many partitions, and each partition is one pandas dataframe. These dataframes are split vertically along the index. When our index is sorted and we know the values of the divisions of our partitions, then we can be clever and efficient with expensive algorithms (e.g. groupby's, joins, etc…).

For example, if we have a time-series index then our partitions might be divided by month. All of January will live in one partition while all of February will live in the next. In these cases operations like loc, groupby, and join/merge along the index can be much more efficient than would otherwise be possible in parallel. You can view the number of partitions and divisions of your dataframe with the following fields:

>>> df.npartitions
4
>>> df.divisions
['2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01', '2015-04-30']

Divisions includes the minimum value of every partition's index and the maximum value of the last partition's index. In the example above, if the user searches for a specific datetime range then we know which partitions we need to inspect and which we can drop:

>>> df.loc['2015-01-20': '2015-02-10']  # Must inspect first two partitions

Often we do not have such information about our partitions. When reading CSV files for example we do not know, without extra user input, how the data is divided. In this case .divisions will be all None:

>>> df.divisions
[None, None, None, None, None]

In these cases any operation that requires a cleanly partitioned dataframe with known divisions will have to perform a sort. This can generally be achieved by calling df.set_index(...).
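For instance (a small illustrative snippet, not from the original docs), setting a sorted column as the index recovers known divisions and enables efficient loc slices:

import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({'t': pd.date_range('2015-01-01', periods=8, freq='D'),
                   'v': range(8)})
ddf = dd.from_pandas(df, npartitions=2)
print(ddf.divisions)        # known integer divisions from the pandas index

ddf2 = ddf.set_index('t')   # sorts along 't' and computes divisions
print(ddf2.divisions)       # known datetime divisions
print(ddf2.loc['2015-01-02':'2015-01-04'].compute())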
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25854748487472534, "perplexity": 5431.366267785573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00257.warc.gz"}
http://math.stackexchange.com/questions/57089/principal-g-bundles-depend-only-on-the-homotopy-type-of-g
# Principal $G$-bundles depend only on the homotopy type of $G$

Let $G$ be a topological group. Then the homotopy type of the classifying space $BG$ depends only on the "homotopy type" of the topological group $G$: that is, if $G \to G'$ is a morphism of topological groups (i.e., a continuous homomorphism) which is a weak equivalence, then $BG \to BG'$ is a weak equivalence (note that $BG$ is functorial by the usual construction, and its homotopy groups are those of $G$ with a shift). This means that on a CW complex, giving a principal $G$-bundle is the same as giving a principal $G'$-bundle.

Is there a direct proof of this (e.g. using $H^1$) that does not resort to $BG$?

Comments:

- you want a proof that a $G$-bundle is the same as a $G'$-bundle without using the classifying spaces? (just asking for clarification, I misread it the first time). Also, why do you want an alternative proof? – Sean Tilson Aug 13 '11 at 0:27
- @Sean: Yes. For one thing (say we assume $G, G'$ homotopy equivalent), I'd like to know that principal $G$-bundles are the same as principal $G'$-bundles under weaker hypotheses than "$X$ is a CW complex." – Akhil Mathew Aug 13 '11 at 1:11
- what do you mean by the "same"? are $G$ and $G'$ CW complexes? – Sean Tilson Aug 13 '11 at 17:16
- @Sean: I mean that $H^1(X, G) \to H^1(X, G')$ is an isomorphism, so isomorphism classes of $G$ and $G'$-bundles are the same. I don't necessarily want to assume $G, G'$ CW complexes, but if it makes it easier, then I'd be curious about a result in that case. – Akhil Mathew Aug 13 '11 at 17:35
- you want to avoid $BG$ but $H^1(X;G) \cong [X, BG]$, right? – Sean Tilson Aug 13 '11 at 17:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528311491012573, "perplexity": 331.56928840156917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00165-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.lesswrong.com/posts/HQGkfx8vQfq4mSvNz/rationality-quotes-september-2012
# Rationality Quotes September 2012

Here's the new thread for posting quotes, with the usual rules:

- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.

> "Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?"
>
> "It's a metaphor for the human struggle."
>
> "I don't see how that changes my point."

Well, his point only makes any sense when applied to the metaphor, since a better answer to the question "Wait, Professor... If Sisyphus had to roll the boulder up the hill over and over forever, why didn't he just program robots to roll it for him, and then spend all his time wallowing in hedonism?" is: "Where would Sisyphus get a robot in the middle of Hades?"

Edit: come to think of it, this also works with the metaphor for human struggle.

I thought the correct answer would be, "No time for programming, too busy pushing a boulder." Though, since the whole thing was a punishment, I have no idea what the punishment for not doing his punishment would be. Can't find it specified anywhere.

I don't think he's punished for disobeying, I think he's compelled to act. He can think about doing something else, he can want to do something else, he can decide to do something else ... but what he does is push the boulder.

The version I like the best is that Sisyphus keeps pushing the boulder voluntarily, because he's too proud to admit that, despite all his cleverness, there's something he can't do. (Specifically, get the boulder to stay at the top of the mountain.)

My favorite version is similar. Each day he tries to push the boulder a little higher, and as the boulder starts to slide back, he mentally notes his improvement before racing the boulder down to the bottom with a smile on his face. Because he gets a little stronger and a little more skilled every day, and he knows that one day he'll succeed.

In the M. Night version: his improvements are an asymptote, and Sisyphus didn't pay enough attention in calculus class to realize that the limit is just below the peak.

DanielLC: Or maybe the limit is the peak. He still won't reach it.

MixedNuts: In some versions he's harassed by harpies until he gets back to boulder-pushing. But RobinZ's version is better.

Alejandro1: Borrowing one of Hephaestus' [http://www.allonrobots.com/ancient-robots.html], perhaps?

Now someone just has to write a book entitled "The Rationality of Sisyphus", give it a really pretentious-sounding philosophical blurb, and then fill it with Grand Theft Robot.

DanielLC: He can build it. It would be pretty hard to do while pushing a boulder up a hill, but he has all the time in the world!

CCC: Does he have any suitable raw materials?

Answer: Because the Greek gods are vindictive as fuck, and will fuck you over twice as hard when they find out that you wriggled out of it the first time.

Who was the guy who tried to bargain the gods into giving him immortality, only to get screwed because he hadn't thought to ask for youth and health as well? He ended up being a shriveled crab-like thing in a jar.
My high school English teacher thought this fable showed that you should be careful what you wish for. I thought it showed that trying to compel those with great power through contract was a great way to get yourself fucked good and hard. Don't think you can fuck with people a lot more powerful than you are and get away with it.

EDIT: The myth was of Tithonus. The goddess Eos was keeping him as a lover, and tried to bargain with Zeus for his immortality, without asking for eternal youth too. Oops.

> Don't think you can fuck with people a lot more powerful than you are and get away with it.

I'm no expert, but that seems to be the moral of a lot of Greek myths.

[anonymous]: King Midas, too.

[anonymous]: I'd say this captures the spirit of Less Wrong perfectly.

> Do unto others 20% better than you expect them to do unto you, to correct for subjective error.

-- Linus Pauling

Citation for this was hard; the closest I got was Etzioni's 1962 The Hard Way to Peace, pg 110. There's also a version in the 1998 Linus Pauling on peace: a scientist speaks out on humanism and world survival: writings and talks by Linus Pauling; this version goes:

> I have made a modern formulation of the Golden Rule: "Do unto others 20 percent better than you would be done by - the 20 percent is to correct for subjective error."

Caspian: Did you take "expect" to mean as in prediction, or as in what you would have them do, like the Jesus version?

DanielLC: How about doing unto others what maximizes total happiness, regardless of what they'd do unto you?

prase: The former is computationally far more feasible.

RomanDavis: By acting in a way that discourages them from hurting you, and encouraging them to help you, you are playing your part in maximizing total happiness.

Nisan: Doing unto others that which causes maximum total happiness leaves you vulnerable to Newcomb problems. You want to do unto others that which logically entails maximum total happiness. Under certain conditions, this is the same as Pauling's recommendation.

DanielLC: I never mentioned causation. If you find a way to maximize it acausally, do that.

CronoDAS: It has a tendency [http://lesswrong.com/lw/v1/ethical_injunctions/] to go horribly wrong. [http://tvtropes.org/pmwiki/pmwiki.php/Main/UtopiaJustifiesTheMeans]

Kindly: It's a nice sentiment, but the optimization problem you suggest is usually intractable.

DanielLC: It's better to at least attempt it than just find an easier problem and do that. You might have to rely on intuition and such to get any answer, but you're not going to do well if you just find something easier to optimize.

Kindly: Yes, but there's no way a pithy quote is going to solve the problem for you. It might, however, contain a useful heuristic.

> "A writer who says that there are no truths, or that all truth is 'merely relative,' is asking you not to believe him. So don't."

-- Roger Scruton, Modern Philosophy: An Introduction and Survey

simplicio: I am sympathetic to this line, but Scruton's dismissal seems a little facile. If somebody says the truth is relative, then they can bite the bullet if they wish and say that THAT truth is also relative, thus avoiding the trap of self-contradiction. It might still be unwise to close your ears to them. Consider a case where we DO agree that a given subject matter is relative; e.g., taste in ice-cream.
Suppose Rosie the relativist tells you: "This ice-cream vendor's vanilla is absolutely horrible, but that's just my opinion and obviously it's relative to my own tastes." You would probably agree that Rosie's opinion is indeed "just relative"... and still give the vanilla a miss this time.

Nisan: If "this vanilla ice cream is horrible" is relatively true, then "Rosie's opinion is that this vanilla ice cream is horrible" is absolutely true.

> The person who says, as almost everyone does say, that human life is of infinite value, not to be measured in mere material terms, is talking palpable, if popular, nonsense. If he believed that of his own life, he would never cross the street, save to visit his doctor or to earn money for things necessary to physical survival. He would eat the cheapest, most nutritious food he could find and live in one small room, saving his income for frequent visits to the best possible doctors. He would take no risks, consume no luxuries, and live a long life. If you call it living. If a man really believed that other people's lives were infinitely valuable, he would live like an ascetic, earn as much money as possible, and spend everything not absolutely necessary for survival on CARE packets, research into presently incurable diseases, and similar charities. In fact, people who talk about the infinite value of human life do not live in either of these ways. They consume far more than they need to support life. They may well have cigarettes in their drawer and a sports car in the garage. They recognize in their actions, if not in their words, that physical survival is only one value, albeit a very important one, among many.

-- David D. Friedman, The Machinery of Freedom

olalonde: Related:

DanielLC: He's just showing that those people don't give infinite value, not that it's nonsense.

It's nonsense because, even if you consider life infinitely more intrinsically valuable than a green piece of paper, you'd still trade a life for green pieces of paper, so long as you could trade them back for more lives.

RobinZ: If life were of infinite value, trading a life for two new lives would be a meaningless operation - infinity times two is equal to infinity.

Not unless by "life has infinite value" you actually mean "everything else is worthless".

Not quite so! We could presume that value isn't restricted to the reals + infinity, but say that something's value is a value among the ordinals. Then, you could totally say that life has infinite value, but two lives have twice that value. But this gives non-commutativity of value. Saving a life and then getting $100 is better than getting $100 and saving a life, which I admit seems really screwy. This also violates the Von Neumann-Morgenstern axioms. In fact, if we claim that a slice of bread is of finite value, and, say, a human life is of infinite value in any definition, then we violate the continuity axiom... which is probably a stronger counterargument, and tightly related to the point DanielLC makes above.

The_Duck: If we want to assign infinite value to lives compared to slices of bread, we don't need exotic ideas like transfinite ordinals. We can just define value as an ordered pair (# of lives, # of slices of bread). When comparing values we first compare # of lives, and only use # of slices of bread as a tiebreaker. This conforms to the intuition of "life has infinite value" and still lets you care about bread without any weird order-dependence.
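The_Duck's ordered-pair scheme is exactly lexicographic comparison, which Python tuples implement natively; a minimal sketch, with the particular numbers invented purely for illustration:

```python
# Value as an ordered pair (lives, bread): lives compare first,
# bread only breaks ties -- Python tuples compare lexicographically.
value_a = (2, 0)       # two lives, no bread
value_b = (1, 10**6)   # one life, a million slices of bread

assert value_a > value_b   # any number of lives beats any amount of bread
assert (1, 3) > (1, 2)     # with lives tied, bread decides
```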
This still violates the continuity axiom, but that, of itself, is not an argument against a set of preferences. As I read it, claiming "life has infinite value" is an explicit rejection of the continuity axiom. Of course, Kaj Sotala's point in the original comment was that in practice people demonstrate by their actions that they do accept the continuity axiom; that is, they are willing to trade a small risk of death in exchange for mundane benefits.

DanielLC: You could use hyperreal numbers [http://en.wikipedia.org/wiki/Hyperreal_number]. They behave pretty similarly to reals, and have reals as a subset. Also, if you multiply any hyperreal number besides zero by a real number, you get something isomorphic to the reals, so you can multiply by infinity and it still will work the same. I'm not a big fan of the continuity axiom. Also, if you allow for hyperreal probabilities, you can still get it to work.

Decius: True only if you have a way to describe infinity in terms of a real number.

DanielLC: You just pick some infinite hyperreal number and multiply all the real numbers by that. What's the problem?

Decius: Oh, you're saying assign a hyperreal infinite number to the value of individual lives. That works, but be very careful how you value life. Contradictions and absurdities are trivial to develop when one aspect is permitted to override every other one.

benelliott: Nitpick, I think you mean non-commutativity, the ordinals are associative. The rest of your post agrees with this interpretation.

fiddlemath: Oops, yes. Edited in original; thanks!

> There is something about practical things that knocks us off our philosophical high horses. Perhaps Heraclitus really thought he couldn't step in the same river twice. Perhaps he even received tenure for that contribution to philosophy. But suppose some other ancient had claimed to have as much right as Heraclitus did to an ox Heraclitus had bought, on the grounds that since the animal had changed, it wasn't the same one he had bought and so was up for grabs. Heraclitus would have quickly come up with some ersatz, watered-down version of identity of practical value for dealing with property rights, oxen, lyres, vineyards, and the like. And then he might have wondered if that watered-down vulgar sense of identity might be a considerably more valuable concept than a pure and philosophical sort of identity that nothing has.

-- John Perry, introduction to Identity, Personal Identity, and the Self

He bought the present ox along with the future ox. He could have just bought the present ox, or at least a shorter interval of one. This is known as "renting".

Paul Crowley: Which future ox did he buy?

DanielLC: Sorry. The future oxen.

Paul Crowley: Of the many oxen at a given point in time in the future, which one did he buy?

DanielLC: Oh. I see what you mean. He bought the ones on the future side of that worldline. It's convenient that way, and humans are good at keeping track. He could have bought any combination of future oxen the guy owns. This has the advantage of later oxen being in the same area as earlier oxen, simplifying transportation.

Paul Crowley: I'm not making myself clear. It's clear from what you say that if Heraclitus bought an ox yesterday, he owns an ox today. But in order to say that he owns this particular ox, he needs a better system of identity than "you never step into the same river twice".

DanielLC: It's a sensible system for deciding how to buy and sell oxen because it minimizes shipping costs.
It's a less sensible way to, for example, judge the value of a person. Should I choose Alice or Bob just because Bob is at the beginning of a world-line and Alice is not? This does kind of come back to arguing definitions. The common idea of identity is really useful. If a philosopher thinks otherwise, he's overthinking it. "Identity" refers to something. I just don't think it's anything beyond that. You in principle could base your ethics on it, but I see no reason to. It's not as if it's something anybody can experience. If you base your anthropics on it, you'll only end up confusing yourself [http://lesswrong.com/lw/19d/the_anthropic_trilemma/].

CCC: Alternatively, he purchased the present ox using the ox-an-hour-ago as payment.

DanielLC: No. There is nobody to make that transaction with, and his past self still used the past ox, so he can't sell it.

> If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

-- Solzhenitsyn

> But the line dividing good and evil cuts through the heart of every human being, and who is willing to destroy a piece of his own heart?

If only it were a line. Or even a vague boundary between clearly defined good and clearly defined evil. Or if good and evil were objectively verifiable notions.

simplicio: You don't think even a vague boundary can be found? To me it seems pretty self-evident by looking at extremes; e.g., torturing puppies all day is obviously worse than playing with puppies all day. By no means am I secure in my metaethics (i.e., I may not be able to tell you in exquisite detail WHY the former is wrong). But even if you reduced my metaethics down to "whatever simplicio likes or doesn't like," I'd still be happy to persecute the puppy-torturers and happy to call them evil.

FeepingCreature: Animal testing. And even enjoying torturing puppies all day is merely considered "more evil" because it's a predictor of psychopathy.

simplicio: So I think maybe I leapt into this exchange uncarefully, without being clear about what I was defending. I am defending the meaningfulness & utility of a distinction between good & evil actions (not states of affairs). Note that a distinction does not require a sharp dividing line (yellow is not the same as red, but the transition is not sudden). I also foresee a potential disagreement about meta-ethics, but that is just me "extrapolating the trend of the conversation." Anyway, getting back to good vs evil: I am not especially strict about my use of the word "evil" but I generally use it to describe actions that (a) do a lot of harm without any comparably large benefit, AND (b) proceed from a desire to harm sentient creatures. Seen in this light it is obvious why torturing puppies is evil, playing with them is good, and testing products on them is ethically debatable (but not evil, because of the lack of desire to harm). None of this is particularly earth-shattering as philosophical doctrine. Not if you think animals' interests count morally, which I do explicitly, and virtually everybody does implicitly.

FeepingCreature: I think your philosophy is probably fairly normal, it's just any attempt to simplify such things looks like an open challenge to point out corner cases. Don't take it too seriously.
Also I'm not fully convinced on whether animals' interests count morally, even though they do practically by virtue of triggering my empathy. Aside from spiders. Those can just burn. (Which is an indicator that animals only count to me because they trigger my empathy, not because I care.)

faul_sname: But.. but.. they just want to give you a hug. [http://ih3.redbubble.net/work.4827195.1.flat,550x550,075,f.cute-jumping-spider.jpg]

disinter: You point out that there are acts easily agreed to be evil and acts easily agreed to be good, but that doesn't imply a definable boundary between good and evil. First postulate a boundary between good and evil. Now, what is necessary to refute that boundary? A clearly defined boundary would require actions that fall near the boundary to always fall to one side or the other without fail. Easily, that is not the case. Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require? I think a vague boundary requires that actions can be ranked in a vague progression from "certainly good" through "overall good, slightly evil" and descend through progressively less good zones as they approach from one side, then crossing an "evil=~good" area, into a progressively more evil side. I do not see that is necessarily the case.

> Stealing food is clearly evil if you have no need but the victim has need for the food. If the needs are opposite, then it is not clearly evil. So there is no clear boundary, but what would a vague boundary require?

You are pointing to different actions labeled stealing and saying "one is good and the other is evil." Yeah, obviously, but that is no contradiction - they are different actions! One is the action of stealing in dire need, the other is the action of stealing without need. This is a very common confusion. Good and evil (and ethics) are situation-dependent, even according to the sternest, most thundering of moralists. That does not tell us anything one way or the other about objectivity. The same action in the same situation with the same motives is ethically the same.

disinter: Thank you for pointing out my confusion. I've lost confidence that I have any idea what I'm talking about on this issue.

beberly37: I think the intermediate value theorem covers this. Meaning if a function has positive and negative values (good and evil) and it is continuous (I would assume a "vague boundary" or "grey area" or "goodness spectrum" to be continuous) then there must be at least one zero value. That zero value is the boundary.

shminux: It would indeed cover this if the goodness spectrum was a regular function, not a set-valued map [http://en.wikipedia.org/wiki/Multivalued_function]. Unfortunately, the same thoughts and actions can correspond to different shades of good and evil, even in the mind of the same person, let alone of different people. Often at the same time, too.

simplicio: This shows that there is disagreement & confusion about what is good & what is evil. That no more proves good & evil are meaningless, than disagreement about physics shows that physics is meaningless. Actually, disagreement tends to support the opposite conclusion. If I say fox-hunting is good and you say it's evil, although we disagree on fox-hunting, we seem to agree that only one of us can possibly be right. At the very least, we agree that only one of us can win.
> But the line dividing Kansas and Nebraska cuts through the heart of every human being. And who is willing to grow corn on his own heart?

-- Steven Kaas

gjm: Duplicate [http://lesswrong.com/lw/aha/rationality_quotes_march_2012/5ydd].

> A problem well stated is a problem half solved.

-- Charles Kettering

thomblake: A problem sufficiently well-stated is a problem fully solved.

SilasBarta: Wow, I didn't even know that's a quote from someone! I had inferred that (mini)lesson from a lecture I heard, but it wasn't stated in those terms, and I never checked if someone was already known for that.

> Nobody is smart enough to be wrong all the time.

-- Ken Wilber

> "But I tell you he couldn't have written such a note!" cried Flambeau. "The note is utterly wrong about the facts. And innocent or guilty, Dr Hirsch knew all about the facts."
>
> "The man who wrote that note knew all about the facts," said his clerical companion soberly. "He could never have got 'em so wrong without knowing about 'em. You have to know an awful lot to be wrong on every subject—like the devil."
>
> "Do you mean—?"
>
> "I mean a man telling lies on chance would have told some of the truth," said his friend firmly. "Suppose someone sent you to find a house with a green door and a blue blind, with a front garden but no back garden, with a dog but no cat, and where they drank coffee but not tea. You would say if you found no such house that it was all made up. But I say no. I say if you found a house where the door was blue and the blind green, where there was a back garden and no front garden, where cats were common and dogs instantly shot, where tea was drunk in quarts and coffee forbidden—then you would know you had found the house. The man must have known that particular house to be so accurately inaccurate."

-- G.K. Chesterton, "The Duel of Dr. Hirsch"

RomanDavis: Reversed malevolence is intelligence? Inverted information is not random noise.

RomanDavis: ...unless you're reversing noise, which is why Reversed Stupidity is not Intelligence.

fubarobfusco: If someone tells you the opposite of the truth in order to deceive you, and you believe the opposite of what they say because you know they are deceitful, then you believe the truth. (A knave is as good as a knight to a blind bat.) The problem is, a clever liar doesn't lie all the time, but only when it matters.

DanielLC: It's more likely that they're a stupid liar than that they got it all wrong by chance.

TheOtherDave: Another problem is that for many interesting assertions X, opposite(opposite(X)) does not necessarily equal X. Indeed, opposite(opposite(X)) frequently implies NOT X.

Alejandro1: Could you give an example? I would have thought this happens with Not(opposite(X)); for example, "I don't hate you" is different than "I love you", and in fact implies that I don't. But I would have thought "opposite" was symmetric, so opposite(opposite(X)) = X.

TheOtherDave: Well, OK. So suppose (to stick with your example) I love you, and I want to deceive you about it by expressing the opposite of what I feel. So what do I say? You seem to take for granted that opposite("I love you") = "I hate you." And not, for example, "I am indifferent to you." Or "You disgust me." Or various other assertions. And, sure, if "I love you" has a single, unambiguous opposite, and the opposite also has a single, unambiguous opposite, then my statement is false. But it's not clear to me that this is true. If I end up saying "I'm indifferent to you" and you decide to believe the opposite of that...
well, what do you believe?

Of course, simply negating the truth ("I don't love you") is unambiguously arrived at, and can be thought of as an opposite... though in practice, that's often not what I actually do when I want to deceive someone, unless I've been specifically accused of the truth. ("We're not giant purple tubes from outer space!")

Lol, my professor would give 100% to anyone who answered every exam question wrong. There were a couple people who pulled it off, but most scored between 0 and 10.

I'm assuming a multiple-choice exam, and invalid answers don't count as 'wrong' for that purpose? Otherwise I can easily miss the entire exam with "Tau is exactly six." or "The battle of Thermopylae" repeated for every answer. Even if the valid answers are [A;B;C;D].

MugaSofer: Unless it really was the battle of Thermopylae. Not having studied, you won't know.

Decius: "The Battle of Thermopylae" is intended as the alternate for questions which might have "Tau is exactly six" as the answer. For example: "What would be one consequence of a new state law which defines the ratio of a circle's circumference to diameter as exactly three?" I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

> I bet that you can't write a question for which "Tau is exactly six." and "The battle of Thermopylae" are both answers which gain any credit...

"Write a four word phrase or sentence."

Decius: You win.

shminux: Judging by this and your previous evil genie comments, you'd make a lovely UFAI.

Jay_Schweikert: I hate to break up the fun, and I'm sure we could keep going on about this, but Decius's original point was just that giving a wrong answer to an open-ended question is trivially easy. We can play word games and come up with elaborate counter-factuals, but the substance of that point is clearly correct, so maybe we should just move on.

Decius: That was exactly the challenge I issued. Granted, it's trivial to write an answer which is wrong for that question, but it shows that I can't find a wrong answer for an arbitrary question as easily as I thought I could.

Daniel_Burfoot: An interesting corollary of the efficient market hypothesis is that, neglecting overhead due to things like brokerage fees and assuming trades are not large enough to move the market, it should be just as difficult to lose money trading securities as it is to make money.

Salutator: No, not really. In an efficient market, risks uncorrelated with those of other securities shouldn't be compensated, so you should easily be able to screw yourself over by not diversifying.

Daniel_Burfoot: But isn't the risk of diversifying compensated by a corresponding possibility of large reward if the sector outperforms? I wouldn't consider a strategy that produces modest losses with high probability but large gains with low probability sufficient to disprove my claim.

Let's go one step back on this, because I think our point of disagreement is earlier than I thought in that last comment. The efficient market hypothesis does not claim that the profit on all securities has the same expectation value. EMH-believers don't deny, for example, the empirically obvious fact that this expectation value is higher for insurances than for more predictable businesses. Also, you can always increase your risk and expected profit by leverage, i.e. by investing borrowed money.
This is because markets are risk-averse, so that on the same expectation value you get paid extra to accept a higher standard deviation. Out- or underperforming the market is really easy by accepting more or less risk than it does on average. The claim is not that the expectation value will be the same for every security, only that the price of every security will be consistent with the same prices for risk and expected profit. So if the EMH is true, you can not get a better deal on expected profit without also accepting higher risk and you can not get a higher risk premium than other people. But you still can get lots of different trade-offs between expected profit and risk. Now can you ...

CronoDAS: Unless you're a fictional character [http://en.wikipedia.org/wiki/The_Opposite]. Or possibly Mike "Bad Player" Flores [http://www.starcitygames.com/magic/fundamentals/24786-The-Butterfly-Effect-19-More-Lessons.html]:

Plubbingworth: This reminds me of an episode of QI [http://youtu.be/srmAd62YjmA?t=3m55s], in which Johnny Vegas, who usually throws out random answers for the humor, actually managed to get a question (essentially) right.

> Lady Average may not be as good-looking as Lady Luck, but she sure as hell comes around more often.

-- Anonymous

Not always, since:

> The average human has one breast and one testicle.

-- Des McHale

In other words, the average of a distribution is not necessarily the most probable value.

sketerpot: Don't expect her, either. In Russian Roulette, the mode is that you don't die, and indeed that's the outcome for most people who play it. You should, however, expect that there's a very large chance of instadeath, and if you were to play a bunch of games in a row, that (relatively uncommon) outcome would almost certainly kill you. (A similar principle applies to things like stock market index funds: the mode doesn't matter when all you care about is the sum of the stocks.) The real lesson is this: always expect Lady PDF [http://en.wikipedia.org/wiki/Probability_density_function].

simplyeric: Not to be a bore but it does say "Lady Average" not "Sir or Madam Average".

In my high school health class, for weeks the teacher touted the upcoming event: "Breast and Testicle Day!" When the anticipated day came, it was of course the day when all the boys go off to one room to learn about testicular self-examination, and all the girls go off to another to learn about breast self-examination. So, in fact, no student actually experienced Breast and Testicle Day.

Plubbingworth: Much to their chagrin, I'm assuming.

Luke_A_Somers: Rather: chagrin and relief.

simplyeric: Not to be a bore but it does say "Lady Average" not "Sir or Madam Average".

RobinZ: If you're asking who comes around most often, Lady Mode it is - we can't help how it sounds.

NancyLebovitz: Lady Mode is the most fashionable.

> ...beliefs are like clothes. In a harsh environment, we choose our clothes mainly to be functional, i.e., to keep us safe and comfortable. But when the weather is mild, we choose our clothes mainly for their appearance, i.e., to show our figure, our creativity, and our allegiances. Similarly, when the stakes are high we may mainly want accurate beliefs to help us make good decisions. But when a belief has few direct personal consequences, we in effect mainly care about the image it helps to project.

-- Robin Hanson, Human Enhancement

I feel like Hanson's admittedly insightful "signaling" hammer has him treating everything as a nail.
Your contrarian stance against a high-status member of this community makes you seem formidable and savvy. Would you like to be allies with me? If yes, then the next time I go foraging I will bring you back extra fruit.

I agree in principle but I think this particular topic is fairly nailoid in nature.

zslastman: I'd say it's such a broad subject that there have to be some screws in there as well. I think Hanson has too much faith in the ability of evolved systems to function in a radically changed environment. Even if signaling dominates the evolutionary origins of our brain, it's not advisable to just label everything we do now as directed towards signaling, any more than sex is always directed towards reproduction. You have to get into the nitty gritty of how our minds carry out the signaling. Conspiracy theorists don't signal effectively, though you can probably relate their behavior back to mechanisms originally directed towards, or at least compatible with, signaling. Also, an ability to switch between clear "near" thinking and fluffy "far" thinking presupposes a rational decision maker to implement the switch. I'm not sure Hanson pays enough attention to how, when, and for what reasons we do this.

I think he's mischaracterizing the issue. Beliefs serve multiple functions. One is modeling accuracy, another is signaling. It's not whether the environment is harsh or easy, it's which function you need. There are many harsh environments where what you need is the signaling function, and not the modeling function.

Delta: I think the quote reflects reality (humans aren't naturally rational so their beliefs are conditioned by circumstance), but is better seen as an observation than a recommendation. The best approach should always be to hold maximally accurate beliefs yourself, even if you choose to signal different ones as the situation demands. That way you can gain the social benefits of professing a false belief without letting it warp or distort your predictions.

No, that wouldn't necessarily be the case. We should expect a cost in effort and effectiveness to try to switch on the fly between the two types of truths. Lots of far truths have little direct predictive value, but lots of signaling value. Why bear the cost for a useless bit of predictive truth, particularly if it is worse than useless and hampers signaling? That's part of the magic of magisteria - segregation of modes of truth by topic reduces that cost.

Delta: Hmm, maybe I shouldn't have said "always" given that acting ability is required to signal a belief you don't hold, but I do think what I suggest is the ideal. I think someone who trained themselves to do what I suggest, by studying people skills and so forth, would do better as they'd get the social benefits of conformity and without the disadvantages of false beliefs clouding predictions (though admittedly the time investment of learning these skills would have to be considered). Short version: I think this is possible with training and would make you "win" more often, and thus it's what a rationalist would do (unless the cost of training proved prohibitive, of which I'm doubtful since these skills are very transferable). I'm not sure what you meant by the magisteria remark, but I get the impression that advocating spiritual/long-term beliefs to less stringent standards than short term ones isn't generally seen as a good thing (see Eliezer's "Outside the Laboratory" post among others).

[anonymous]: Clothes serve multiple functions.
One is keeping warm, another is signalling.

-- L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment

> "He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his candle at mine, receives light without darkening me. No one possesses the less of an idea, because every other possesses the whole of it."

-- Jefferson

But many people do benefit greatly from hoarding or controlling the distribution of scarce information. If you make your living off slavery instead, then of course you can be generous with knowledge.

CCC: If you do not hoard your ideas, and neither do I, then we can both benefit from the ideas of the other. If I can access the ideas of a hundred other people at the cost of sharing my own ideas, then I profit; no matter how smart I am, a hundred other people working the same problem are going to be able to produce at least some ideas that I did not think of. (This is a benefit of free/open source software; it has been shown experimentally to work pretty well in the right circumstances.)

> "The goal of the future is full unemployment, so we can play. That's why we have to destroy the present politico-economic system." This may sound like the pronouncement of some bong-smoking anarchist, but it was actually Arthur C. Clarke, who found time between scuba diving and pinball games to write "Childhood's End" and think up communications satellites. My old colleague Ted Rall recently wrote a column proposing that we divorce income from work and give each citizen a guaranteed paycheck, which sounds like the kind of lunatic notion that'll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays. The Puritans turned work into a virtue, evidently forgetting that God invented it as a punishment.

The interesting part is the phrase "which sounds like the kind of lunatic notion that'll be considered a basic human right in about a century, like abolition, universal suffrage and eight-hour workdays." If we can anticipate what the morality of the future would be, should we try to live by it now?

> If we can anticipate what the morality of the future would be, should we try to live by it now?

Not if it's actually the same morality, but depends on technology. For example, strong prohibitions on promiscuity are very sensible in a world without cheap and effective contraceptives. Anyone who tried to live by 2012 sexual standards in 1912 would soon find they couldn't feed their large horde of kids. Likewise, if robots are doing all the work, fine; but right now if you just redistribute all money, no work gets done.

[anonymous]: Lack of technology was not [http://en.wikipedia.org/wiki/History_of_condoms] the reason condoms weren't as widely available in 1912.

shminux: Right idea, not a great example. People used to have lots more kids than now, most dying in childhood. The majority of women of childbearing age (gay or straight) were married and having children as often as their body allowed, so promiscuity would not have changed much. Maybe a minor correction for male infertility and sexual boredom in a standard marriage.

You seem to have rather a different idea of what I meant by "2012 standards". Even now we do not really approve of married people sleeping around. We do, however, approve of people not getting married until age 25 or 30 or so, but sleeping with whoever they like before that. Try that pattern without contraception.

CCC: You might. I don't. This is most probably a cultural difference.
There are people in the world today who see nothing wrong with having multiple wives, given the ability to support them (example: Jacob Zuma).

Desrtopa: Strong norms against promiscuity out of wedlock still made sense though, since having lots of children without a committed partner to help care for them would usually have been impractical.

Alicorn: Not if they were gay.

How do you envision living by this model now working? That is, suppose I were to embrace the notion that having enough resources to live a comfortable life (where money can stand in as a proxy for other resources) is something everyone ought to be guaranteed. What ought I do differently than I'm currently doing?

[anonymous]: I would like to staple that question to the forehead of every political commentator who makes a living writing columns in far mode. What is it you would like us to do? If you don't have a good answer, why are you talking? Argh.

RichardKennaway: Not if the morality you anticipate coming into favour is something you disagree with. If it's something you agree with, it's already yours, and predicting it is just a way of avoiding arguing for it.

Viliam_Bur: If you are a consequentialist, you should think about the consequences of such a decision. For example, imagine a civilization where an average person has to work nine hours to produce enough food to survive. Now the pharaoh makes a new law saying that (a) all produced food has to be distributed equally among all citizens, and (b) no one can be compelled to work more than eight hours; you can work as a volunteer, but all your produced food is redistributed equally. What would happen in such a situation? In my opinion, this would be a mass Prisoners' Dilemma where people would gradually stop cooperating (because the additional hour of work gives them epsilon benefits) and start being hungry. There would be no legal solution; people would try to make some food in their free time illegally, but the unlucky ones would simply starve and die. The law would seem great in far mode, but its near mode consequences would be horrible. Of course, if the pharaoh is not completely insane, he would revoke the law; but there would be a lot of suffering meanwhile.

If people had "a basic human right to have enough money without having to work", the situation could progress similarly. It depends on many things -- for example how much of the working people's money would you have to redistribute to non-working ones, and how much could they keep. Assuming that one's basic human right is to have $500 a month, but if you work, you can keep $3000 a month, some people could still prefer to work. But there is no guarantee it would work long-term. For example there would be a positive feedback loop -- the more people are non-working, the more votes politicians can gain by promising to increase their "basic human right income", the higher are taxes, and the smaller incentives to work. Also, it could work for the starting generation, but corrupt the next generation... imagine yourself as a high school student knowing that you will never ever have to work; how much effort would an average student give

Legolan: Systems that don't require people to work are only beneficial if non-human work (or human work not motivated by need) is still producing enough goods that the humans are better off not working and being able to spend their time in other ways. I don't think we're even close to that point.
I can imagine societies in a hundred years that are at that point (I have no idea whether they'll happen or not), but it would be foolish for them to condemn our lack of such a system now, since we don't have the ability to support it, just as it would be foolish for us to condemn people in earlier and less well-off times for not having welfare systems as encompassing as ours.

I'd also note that issues like abolition and universal suffrage are qualitatively distinct from the issue of a minimum guaranteed income (what the quote addresses). Even the poorest of societies can avoid holding slaves or placing women or men in legally inferior roles. The poorest societies cannot afford the "full unemployment" discussed in the quote, and neither can even the richest of modern societies right now (they could certainly come closer than the present, but I don't think any modern economy could survive the implementation of such a system in the present). I do agree, however, about it being a solid goal, at least for basic amenities.

Viliam_Bur: To avoid having slaves, the poorest society could decide to kill all war captives, and to let starve to death all people unable to pay their debts. Yes, this would avoid legal discrimination. Is it therefore a morally preferable solution?

CCC: In poor societies that permit slavery, a man might be willing to sell himself into slavery. He gets food and lodging, possibly for his family as well as himself; his new purchaser gets a whole lot of labour. There's a certain loss of status, but a person might well be willing to live with that in order to avoid starvation.

CronoDAS: Elections can take quite a bit of resources to run when you have a large voting population...

Eliezer Yudkowsky: No, politicians can afford to spend lots of money on them. The actual mechanism of elections has never, so far as I know, been all that expensive pre-computation.

Randaly: IAWYC, but the claims that most of the economic costs of elections are in political spending, and most of the costs of actually running elections are in voting machines, are both probably wrong. (Public data is terrible, so I'm crudely extrapolating all of this from local to national levels.) The opportunity costs of voting alone dwarf spending on election campaigns. Assuming that all states have the same share of GDP, that people who don't have a full-state holiday to vote take an hour off to vote, that people work 260 days a year and 8 hours a day, and that nobody in the holiday states does work, then we get:

Political spending: 5.3 billion USD [http://www.politico.com/news/stories/1108/15283.html]

Opportunity costs of elections: 15 trillion USD (US GDP) × (9/50 (states with voting holidays) × 1/260 (percentage of work-time lost) + 41/50 (states without holidays) × 1/260 × 1/8 (percentage of work-time lost)) ≈ 16 billion USD

Extrapolating from New York City figures [http://cityroom.blogs.nytimes.com/2010/09/14/problems-reported-with-new-voting-machines/], election machines cost ~1.9 billion USD nationwide. (50 million for a population ~38 times smaller than the total US population.) And extrapolating Oakland County's 650,000 USD [http://www.hometownlife.com/article/20120906/NEWS10/120906009/Oakland-County-seeks-reimbursement-special-election-cost?odyssey=nav|head] cost across the US's 3143 counties, direct costs are just over 2 billion USD. (This is for a single election; however, some states have had as many as 5 elections in a single year.
The cost of the voting machines can be amortized over multiple elections in multiple years.) (If you add together the opportunity costs for holding one general and one non-general election a year (no state holidays; around ~7 billion USD), plus the costs of actually running them, plus half the cost of the campaign money, the total cost/election seems to be around 30 billion USD, or ~0.002% of the US's GDP.)

Eliezer Yudkowsky: Correction accepted. Still seems like something a poor society could afford, though, since labor and opportunity would also cost less. I understand that lots of poor societies do.

[anonymous]: What? If anything I'd assume them to be more expensive before computers were introduced. In Italy, where they are still paper-based, they have to hire people to count the ballots (and they have to pay them a lot, given that they select people at random and you're not allowed to refuse unless you are ill or something).

According to Wikipedia, the 2005 elections in Germany cost 63 million euros, with a population of 81 million people: €0.78 per person, or the 0.00000281st part of the GDP. Does not seem much, in the grander scheme of things. And since the German constitutional court prohibited the use of most types of voting machines, that figure does include the cost to the helpers; 13 million, again, not a prohibitive expenditure.

ArisKatsaris: http://aceproject.org/ace-en/focus/core/crb/crb03

> "Low electoral costs, approximately $1 to $3 per elector, tend to manifest in countries with longer electoral experience"

That's a somewhat confusing comment. If they're effectively conscripted (them not being allowed to refuse), not really "hired" -- that would imply they don't need to be paid a lot...

[anonymous]: Is that so little? I think many fewer people would vote if they had to pay $3 out of their own pocket in order to do so. A law compelling people to do stuff would be very unpopular, unless they get adequate compensation. Not paying them much would just mean they would feign illness or something. (If they didn't select people by lot, the people doing that would be the ones applying for that job, who would presumably like it more than the rest of the population and hence be willing to do that for less.)

ArisKatsaris: Well perhaps fewer people would vote if they had to pay a single cent out of their own pocket -- would that mean that $0.01 isn't little either? How much are these Italian ballot-counters being paid? Can we quantify this?

[anonymous]: IIRC, something like €150 per election. I'll look for the actual figure.

DanArmak: Why so? Usually when people can't refuse to do a job, they're paid little, not a lot.

RomanDavis: Like jury duty. Yeah. Why would it be different in Greece?

Peterdjones: In the UK, the counters are volunteers.

TheOtherDave: Well, yes. Almost tautologically so, I should think. The tricky part is working out when humans are better off.

khafra: If you are a Bayesian, you should think about how much evidence your imagination constitutes. For example, imagine a civilization where an average person gains little or no total productivity by working over 8 hours per day. Imagine, moreover, that in this civilization, working 10 hours a day doubles your risk of coronary heart disease, the leading cause of death in this civilization. Finally, imagine that, in this civilization, a common way for workers to signal their dedication to their jobs is by staying at work long hours, regardless of the harm it does both to their company and themselves.
Finally, imagine that, in this civilization, a common way for workers to signal their dedication to their jobs is by staying at work long hours, regardless of the harm it does both to their company and themselves. In this civilization [http://www.overcomingbias.com/2011/12/work-hour-skepticism.html], a law preventing individuals from working over 8 hours per day is a tremendous social good. 3NancyLebovitz10y Work hour skepticism [http://www.overcomingbias.com/2011/12/work-hour-skepticism.html] leaves out the question of the cost of mistakes. It's one thing to have a higher proportion of defective widgets on an assembly line (though even that can matter, especially if you want a reputation for high quality products), another if the serious injury rate goes up, and a third if you end up with the Exxon Valdez. 2[anonymous]10y You mean “incentives to fully report your income”, right? ;-) (There are countries where a sizeable fraction of the economy is underground. I come from one.) The same they give today. Students not interested in studying mostly just cheat. 2CronoDAS10y Well, if your society isn't rich enough, you just do what you can. (And a lot of work really isn't all that important; would it be that big of a disaster if your local store carried fewer kinds of cosmetics, or if your local restaurant had trouble hiring waiters?) See also. [http://www.philorum.org/speech/20051207JohnBentley.html] 0jbeshir10y It is true that in the long run, things could work out worse with a guarantee of sufficient food/supplies for everyone. I think, though, that this post answers the wrong question; the question to answer in order to compare consequences is how probable it is to be better or worse, and by what amounts. Showing that it "could" be worse merely answers the question "can I justify holding this belief" rather than the question "what belief should I hold". The potential benefits of a world where people are guaranteed food seem quite high on the face of it, so it is a question well worth asking seriously... or would be if one were in a position to actually do anything about it, anyway. Prisoners' dilemmas amongst humans with reputation and social pressure effects do not reliably work out with consistent defection, and models of societies (and students*) can easily predict almost any result by varying the factors they model and how they do so, and so contribute very little evidence in the absence of other evidence that they generate accurate predictions. The only reliable information that I am aware of is that we know that states making such guarantees can exist for multiple generations with no obvious signs of failure, at least with the right starting conditions, because we have such states existing in the world today. The welfare systems of some European countries have worked this way for quite a long time, and while some are doing poorly economically, others are doing comparably well. I think that it is worth assessing the consequences of deciding to live by the idea of universal availability of supplies, but they are not so straightforwardly likely to be dire as this post suggests, requiring a longer analysis. 8Viliam_Bur10y As I wrote, it depends on many things. I can imagine a situation where this would work; I can also imagine a situation where it would not. As I also wrote, I can imagine such system functioning well if people who don't work get enough money to survive, but people who do work get significantly more. 
Data point: In Slovakia many uneducated people don't work, because it wouldn't make economical sense for them. Their wage, minus traveling expenses, would be only a little more, in some cases even less than their welfare. What's the point of spending 8 hours in work if in result you have less money? They cannot get higher wages, because they are uneducated and unskilled; and in Slovakia even educated people get relatively little money. The welfare cannot be lowered, because the voters on the left would not allow it. The social pressure stops working if too many people in the same town are doing this; they provide moral support for each other. We have villages where unemployment is over 80% and people have already accommodated to this; after a decade of such life, even if you offer them a work with a decent wage, they will not take it, because it would mean walking away from their social circle. This would not happen in a sane society, but it does happen in the real life. Other European countries seem to fare better in this aspect, but I can imagine the same thing happening there in a generation or two. A generation ago most people would probably not predict this situation in Slovakia. I also can't imagine the social pressure to work on the "generation Facebook". If someone spends most of their day on Facebook or playing online multiplayer games, who exactly is going to socially press them? Their friends? Most of them live the same way. Their parents? The conflict between generations is not the same thing as peer pressure. And the "money without work is a basic human right" meme also does not help. It could work in a country where the difference between average wage (e 3jbeshir10y This is interesting, particularly the idea of comparing wage growth against welfare growth predicting success of "free money" welfare. I agree that it seems reasonably unlikely that a welfare system paying more than typical wages, without restrictions conflicting with the "detached from work" principle, would be sustainable, and identifying unsustainable trends in such systems seems like an interesting way to recognise where something is going to have to change, long-term. I appreciate the clarification; it provides what I was missing in terms of evidence or reasoned probability estimates over narrative/untested model. I'm taking a hint from feedback that I likely still communicated this poorly, and will revise my approach in future. Back on the topic of taking these ideas as principles, perhaps more practical near-term goals which provide a subset of the guarantee, like detaching availability of resources basic survival from the availability of work, might be more probably achievable. There are a wider range of options available for implementing these ideas, and of incentives/disincentives to avoid long-term use. An example which comes to mind is providing users with credit usable only to order basic supplies and basic food. My rough estimate is that it seems likely that something in this space could be designed to operate sustainably with only the technology we have now. On the side, relating to generation Facebook, my model of the typical 16-22 year old today would predict that they'd like to be able to buy an iPad, go to movies, afford alcohol, drive a nice car, go on holidays, and eventually get most of the same goals previous generations sought, and that their friends will also want these things. 
At younger ages, I agree that parental pressure wouldn't typically be classified as "peer pressure", but I still think it likely to provide significant incentive to do school work; the parents can punish them by taking away their toys if they don't, as effectively 8Viliam_Bur10y I have heard this idea proposed, and many people object to it, saying that it would take away the dignity of those people. In other words, some people seem to think that "basic human rights" include not just things necessary for survival, but also some luxury and perhaps some status items (which then obviously stop being status items, if everyone has them). In theory, yes. However, as a former teacher I have seen parents completely fail at this. Data point: A mother came to school and asked me to tell her 16-year-old daughter, my student, not to spend all her free time on the internet. I did not understand WTF she wanted. She explained to me that as a computer science teacher her daughter will probably regard me as an authority about computers, so if I ask her not to use the computer all day long, she will respect me. This was her last hope, because as a mother she could not convince her daughter to step away from the computer. To me this seemed completely insane. First, the teachers in that school were never treated as authorities on anything; they were usually treated like shit both by students and school administration (a month later I left that school). Second, as a teacher I have zero influence on what my students do outside school, while she as a mother is there; she has many possible ways to stop her daughter from spending all day online... for instance to forcibly turn off the computer, or just hide the computer somewhere while her daughter is at school. But she should have started doing something before her daughter turned 16. If she does not know that, she is clearly unqualified to have children; but there is no law against that. OK, this was an extreme example, but during my 4-year teaching career I have seen or heard from colleagues about many really fucked up parents; and those people were middle and higher social class. This leads me to very pessimistic views, not shared by people who don't have the same experience and are more free to rationalize this away. I think tha 5Eugine_Nier10y One of these things is not like the others. 4DanArmak10y Yes, no state has ever implemented truly universal suffrage (among minors). 4Jayson_Virissimo10y In Jasay [http://www.dejasay.org/default.asp]'s terminology, the first is a liberty (a relation between a person and an act) and the rest are rights {relations between two or more persons (at least one rightholder and one obligor) and an act}. I find this distinction useful for thinking more clearly about these kinds of topics. Your mileage may vary. 1Eugine_Nier10y I was actually referring to the third being what I might call an anti-liberty, i.e., you aren't allowed to work more than eight hours a day, which is most definitely not enforced nor widely considered a human right. 7DanArmak10y How is that different from pointing out that you're not allowed to sell yourself into slavery (not even partially, as in signing a contract to work for ten years and not being able to legally break it), or that you're not allowed to sell your vote?
5ArisKatsaris10y I'd say each of the three can be said to be unlike the others:
* abolition falls under Liberty
* universal suffrage falls under Equality
* eight-hour workdays fall under Solidarity
3arundelo10y So "all of these things are not like the others" [http://www.youtube.com/watch?v=6b0ftfKFEJg]. 5[anonymous]10y I thought eight-hour workdays were about employers not being allowed to demand that employees work more than eight hours a day; I didn't know you weren't technically allowed to do that at all even if you're OK with it. 8fortyeridania10y
1. You are allowed to work more than eight hours per day. It's just that in many industries, employers must pay you overtime if you do so.
2. Even if employers were prohibited from using "willingness to work more than 8 hours per day" as a condition for employment, long workdays would probably soon become the norm.
3. Thus a more feasible way to limit workdays is to constrain employees rather than employers.
To see why, assume that without any restrictions on workday length, workers supply more than 8 hours. Let's say, without loss of generality, that they supply 10. (In other words, the equilibrium quantity supplied is ten.) If employers can't demand the equilibrium quantity, but they're still willing to pay to get it, then employees will have the incentive to supply it. In their competition for jobs (finding them and keeping them), employees will supply labor up to the equilibrium quantity, regardless of whether the bosses demand it. Working more looks good. Everyone knows that; you don't need your boss to tell you. So if there's competition for your spot or for a spot that you want, it would serve you well to work more. So if your goal is to prevent ten-hour days, you'd better stop people from supplying them. At least, this makes sense to me. But I'm no microeconomist. Perhaps we have one on LW who can state this more clearly (or who can correct any mistakes I've made). 2Vaniver10y See Lochner v. New York [http://en.wikipedia.org/wiki/Lochner_v._New_York]. Within the last five years there was a French strike (riot? I don't remember exactly) over a law that would limit the workweek of bakers, which would have the impact of driving small bakeries out of business, since they would need to employ (and pay benefits on) 2 bakers rather than just 1. Perhaps a French LWer remembers more details? 1Slackson10y It would be very hard to distinguish when people were doing it because they wanted to, and when employers were demanding it. Maybe some employees are working that extra time, but one isn't. The one that isn't happens to be fired later on, for unrelated reasons. How do you determine that the worker's unwillingness to work extra hours is not one of the reasons they were fired? Whether it is or not, that happening will likely encourage workers to go beyond the eight hours, because the last one that didn't got fired, and a relationship will be drawn whether there is one or not. 0[anonymous]10y It's not like you can fire employees on a whim: the "unrelated reasons" have to be substantial ones, and it's not clear you can find ones for any employee you want to fire. (Otherwise, you could use such a mechanism to de facto compel your employees to do pretty much anything you want.)
Also, even if you somehow did manage to de facto demand that workers work ten hours a day, if you have to pay hours beyond the eighth as overtime (with an hourly wage substantially higher than the regular one), then it's cheaper for you to hire ten people eight hours a day each than eight people ten hours a day. 7TimS10y Under American law, you basically can fire an employee "on a whim" as long as it isn't for a prohibited reason. 1Eugine_Nier10y Only if they can't get another job. 1[anonymous]10y That assumption isn't [http://en.wikipedia.org/wiki/List_of_countries_by_unemployment_rate] that far-fetched. Also, the same applies to doing that to compel them to work extra time (or am I missing something?). 3thomblake10y If we can afford it. Moral progress proceeds from economic progress. 4TimS10y Morality is contextual. If we have four people on a lifeboat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, then Omega magically provides sufficient food for all - morality hasn't changed, only the decision that morality calls for. -------------------------------------------------------------------------------- Technological advancement has certainly caused moral change (consider society after the introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve. 4[anonymous]10y That's an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that? 2MixedNuts10y I think they are impossible. Morality can say "no option is right" all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality. 4[anonymous]10y I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn't, I suppose we can just pick randomly, but that doesn't mean we've therefore made the right moral decision. Are we ever damned if we do, and damned if we don't? 3Strange710y When someone is in a situation like that, they lower their standard for "morally right" and try again. Functional societies avoid putting people in those situations because it's hard to raise that standard back to its previous level. 0CronoDAS10y Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others. 2[anonymous]10y Right, but choosing the lesser of two evils is simple enough. That's not the kind of dilemma I'm talking about. I'm asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good. But if you're saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas. 8CronoDAS10y It's hard to say, really. Suppose we define a "moral dilemma for system X" as a situation in which, under system X, all possible actions are forbidden. Consider the systems that say "Actions that maximize this (unbounded) utility function are permissible, all others are forbidden."
Then the situation "Name a positive integer, and you get that much utility" is a moral dilemma for those systems; there is no utility maximizing action, so all actions are forbidden and the system cracks. It doesn't help much if we require the utility function to be bounded; it's still vulnerable to situations like "Name a real number less than 30, and you get that much utility" because there isn't a largest real number less than 30. The only way to get around this kind of attack by restricting the utility function is by requiring the range of the function to be a finite set. For example, if you're a C++ program, your utility might be represented by a 32 bit unsigned integer, so when asked "How much utility do you want" you just answer "2^32 - 1" and when asked "How much utility less than 30.5 do you want" you just answer "30". (Ugh, that paragraph was a mess...) 2[anonymous]10y That is an awesome example. I'm absolutely serious about stealing that from you (with your permission). Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn't come up all that often. ETA: Here's a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn't there in fact a largest number you can name? Something like Graham's number won't work (way too small) because you can always add one to it. But trans-finite numbers aren't made larger by adding one. And likewise with the largest real number under thirty, maybe you can use a function to specify the number? Or if not, just say '29.999....' and just say nine as many times as you can before the time runs out (or until you calculate that the utility benefit reaches equilibrium with the costs of saying 'nine' over and over for a long time). 1[anonymous]10y Transfinite cardinals aren't, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them. 0[anonymous]10y Good point. What do you think of Chrono's dilemma? 3[anonymous]10y "Twenty-nine point nine nine nine nine ..." until the effort of saying "nine" again becomes less than the corresponding utility difference. ;-) 0CronoDAS10y Sure, be my guest. Honestly, I don't know. Infinities are already a problem, anyway. 1[anonymous]10y My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’. 0[anonymous]10y Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn't a better choice? 1[anonymous]10y If I know there isn't a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing$490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining$10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it's because of emotional hang-ups I'd rather not have. And replacing dollars with utilons wouldn't change much.) 0[anonymous]10y So you're saying that there are no true moral dilemmas (no undecidable moral problems)? 1[anonymous]10y Depends on what you mean by “undecidable”. There may be situations [http://lesswrong.com/lw/n3/circular_altruism/4lrj] in which it's hard in practice to decide whether it's better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn't matter. 
0[anonymous]10y So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them...could we come up with examples of such dilemmas within consequentialism? 3TheOtherDave10y Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like. And, sure, consequentialism provides no tools for choosing between A and B, it merely endorses (A OR B). Which makes it undecidable using just consequentialism. There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random; a sketch of this follows below). 0[anonymous]10y Thanks, that was helpful. I'd been having a hard time coming up with a consequentialist example. 1[anonymous]10y Then, either the demand/forbiddance is not absolute or the moral system is broken. 1Legolan10y How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any "true moral dilemmas" would be a critique of whatever moral system failed to provide an answer, not proof that "true moral dilemmas" existed. We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system. ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I'm hiding in my house. So I'd have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma. I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can't tell us how to act, it's literally useless. We have to have some process for deciding on our actions. 0faul_sname10y That one thing a couple years ago qualifies. But unless you get into self-referencing moral problems, no. I can't think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb's problem, only twistier. (Warning: this may be basilisk territory.) 0Legolan10y (Double-post, sorry) There are plenty of situations where two choices are equally good or equally bad. This is called "indifference", not "dilemma". 0[anonymous]10y Those aren't the situations I'm talking about. 0TimS10y I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism. In short, if you have a decision process (aka moral system) that can't resolve a particular problem that is strictly within its scope, you don't really have a moral system.
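[Editor's note: TheOtherDave's point — that consequentialism only endorses "(A OR B)" and some extra-theoretic rule must break the tie — can be sketched in a few lines. A hedged illustration only; the names are mine, and uniform random tie-breaking is just one of the mechanisms he alludes to.]

```cpp
#include <cstddef>
#include <iostream>
#include <random>
#include <string>
#include <vector>

// Illustrative names only: a "consequentialist" ranking plus a tie-break
// that lives outside the moral theory itself.
struct Option {
    std::string name;
    double expected_utility;  // expected goodness of consequences
};

// Consequentialism proper: keep exactly the options tied for best.
std::vector<Option> endorsed(const std::vector<Option>& options) {
    double best = options.front().expected_utility;
    for (const auto& o : options) {
        if (o.expected_utility > best) best = o.expected_utility;
    }
    std::vector<Option> tied;  // may hold several options: "(A OR B)"
    for (const auto& o : options) {
        if (o.expected_utility == best) tied.push_back(o);
    }
    return tied;
}

int main() {
    const std::vector<Option> dilemma = {{"A", 10.0}, {"B", 10.0}, {"C", 3.0}};
    const std::vector<Option> tied = endorsed(dilemma);

    // The dilemma-resolving mechanism: a uniform random pick among the tied set.
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> pick(0, tied.size() - 1);
    std::cout << tied[pick(rng)].name << '\n';  // prints A or B, never C
}
```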
Which makes figuring out what we mean by moral change / moral progress incredibly difficult. 2[anonymous]10y This seems to me to be a rephrasing and clarifying of your original claim, which I read as saying something like 'no true moral theory can allow moral conflicts'. But it's not yet an argument for this claim. 0TimS10y I'm suddenly concerned that we're arguing over a definition. It's very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label "moral system" for such a decision procedure? This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label "morality." 2[anonymous]10y No, but if I understand what you've said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them 'moral questions' leads me to think you think that these questions are moral ones even if a true moral theory can't decide them). You're certainly right, this isn't relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible: This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions. 0TimS10y Yes, you're correct that this was not an argument, simply my attempt to gesture at what I meant by the label "morality." The general issue is that human societies are not rigorous about the use of the label morality. I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think "incomplete" moral systems can exist. But beyond that, I should bow out, because I'm an anti-realist and this debate is between schools of moral realists. 0TheOtherDave10y Rephrasing the original question: if we can anticipate the guiding principles underlying the morality of the future, ought we apply those principles to our current circumstances to make decisions, supposing they are different? Though you seem to be implicitly assuming that the guiding principles don't change, merely the decisions, and those changed decisions are due to the closest implementable approximation of our guiding principles varying over time based on economic change. (Did I understand that right?) 2thomblake10y Pretty much. Though it feels totally different from the inside. Athens could not have thrived without slave labor, and so you find folks arguing that slavery is moral, not just necessary. Since you can't say "Action A is immoral but economically necessary, so we shall A" you instead say "Action A is moral, here are some great arguments to that effect!" And when we have enough money, we can even invent new things to be upset about, like vegetable rights. 0TheOtherDave10y (nods) Got it. On your view, is there any attempt at internal coherence?
For example, given an X such that X is equally practical (economically) in an Athenian and post-Athenian economy, and where both Athenians and moderns would agree that X is more "consistent with" slavery than non-slavery, would you expect Athenians to endorse X and moderns to reject it, or would you expect other (non-economic) factors, perhaps random noise, to predominate? (Or some third option?) Or is such an X incoherent in the first place? 0TimS10y Can you give a more concrete example? I don't understand your question. 0TheOtherDave10y I can't think of a concrete example that doesn't introduce derailing specifics. Let me try a different question that gets at something similar: do you think that all choices a society makes that it describes as "moral" are economic choices in the sense you describe here, or just that some of them are? Edit: whoops! got TimS and thomblake confused. Um. Unfortunately, that changes nothing of consequence: I still can't think of a concrete example that doesn't derail. But my follow-up question is not actually directed to Tim. Or, rather, ought not to have been. 3thomblake10y Probably a good counterexample would be the right for certain groups to work any job they're qualified for, for example women or people with disabilities. Generally, those changes were profitable and would have been at any time society accepted it. 0TimS10y I don't understand the position you are arguing and I really want to. Either illusion of transparency or I'm an idiot. And TheOtherDave appears to understand you. :( 0thomblake10y I'm not really arguing for a position - the grandparent was a counterexample to the general principle I had proposed upthread, since the change was both good and an immediate economic benefit, and it took a very long time to be adopted. 0TheOtherDave10y (nods) Yup, that's one example I was considering, but discarded as too potentially noisy. But, OK, now that we're here... if we can agree for the sake of comity that giving women the civil right to work any job would have been economically practical for Athenians, and that they nevertheless didn't do so, presumably due to some other non-economic factors... I guess my question is, would you find it inconsistent, in that case, to find Athenians arguing that doing so would be immoral? 0thomblake10y I don't think so. I'm pretty sure lots of things can stand in the way of moral progress. 3DanielLC10y If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago. I'm not sure we could have even supported living. The same applies to full unemployment. We may someday reach a point where we are productive enough that we can accomplish all we need when we just do it for fun, but if we try that now, we'll all starve. If we had eight-hour workdays a century ago, we wouldn't have been able to support the standard of living expected a century ago. Is that true? (Technically, a century ago was 1912.) On January 5, 1914, the Ford Motor Company took the radical step of doubling pay to $5 a day and cutting shifts from nine hours to eight, moves that were not popular with rival companies, although seeing the increase in Ford's productivity, and a significant increase in profit margin (from $30 million to $60 million in two years), most soon followed suit. 0DanielLC10y The quote seemed to imply we didn't have them a century ago. Just use two centuries or however long. My point is that we didn't stop working as long because we realized it was a good idea.
We did because it became a good idea. What we consider normal now is something we could not have instituted a century ago, and attempting to institute now what will be normal a century from now would be a bad idea. 2TheOtherDave10y So, accepting the premise that the ability to support "full unemployment" (aka, people working for reasons other than money) is something that increases over time, and it can't be supported until the point is reached where it can be supported... how would we recognize when that point has been reached? 0taelor10y The question is, can we? Does anyone happen to have any empirical data about how good, for example, Greco-Romans were at predicting the moral views of the Middle Ages? Additionally, is merely sounding "like the kind of lunatic notion that'll be considered a basic human right in about a century" really a strong enough justification for us to radically alter our political and economic systems? If I had to guess, I'd predict that Kreider already believes divorcing income from work to be a good idea, for reasons that may or may not be rational, and is merely appealing to futurism to justify his bottom line [http://lesswrong.com/lw/js/the_bottom_line/]. 0Eugine_Nier10y Are you sure you can? It's remarkably easy to make retroactive "predictions", much harder to make actual predictions. After I spoke at the 2005 "Mathematics and Narrative" conference in Mykonos, a suggestion was made that proofs by contradiction are the mathematician's version of irony. I'm not sure I agree with that: when we give a proof by contradiction, we make it very clear that we are discussing a counterfactual, so our words are intended to be taken at face value. But perhaps this is not necessary. Consider the following passage. There are those who would believe that every polynomial equation with integer coefficients has a rational solution, a view that leads to some intriguing new ideas. For example, take the equation x² - 2 = 0. Let p/q be a rational solution. Then (p/q)² - 2 = 0, from which it follows that p² = 2q². The highest power of 2 that divides p² is obviously an even power, since if 2^k is the highest power of 2 that divides p, then 2^(2k) is the highest power of 2 that divides p². Similarly, the highest power of 2 that divides 2q² is an odd power, since it is greater by 1 than the highest power that divides q². Since p² and 2q² are equal, there must exist a positive integer that is both even and odd. Integers with this remarkable property are quite unlike the integers ... (read more) 5DanArmak10y The two examples are not contradictory, but analogous to one another. The correct conclusion in both is the same, and both are equally serious or ironic.
1. Suppose x² - 2 = 0 has a solution that is rational. That leads to a contradiction. So any solution must be irrational.
2. Suppose x² + 1 = 0 has a solution that is a number. That leads to a contradiction. So any solution must not be a number.
Now what is a "number" in this context? From the text, something that is either positive, negative, or zero; i.e. something with a total ordering. And indeed we know (ETA: this is wrong, see below) that such solutions, the complex numbers, have no total ordering. I see no relevant difference between the two cases. 4The_Duck10y You can work the language a little to make them analogous, but that's not the point Gowers is making. Consider this instead: "There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas.
Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study." I imagine Gowers's point to be that sometimes a contradiction does point to a way in which you can revise your assumptions to gain access to "intriguing new ideas", but sometimes it just indicates that your assumptions are wrong. "There are those who would believe that all equations have solutions, a view that leads to some intriguing new ideas. Consider the equation x + 1 = x. Inspecting the equation, we see that its solution must be a number which is equal to its successor. Numbers with this remarkable property are quite unlike the numbers we are familiar with. As such, they are surely worthy of further study." Yes, yes they are. 1DanArmak10y (Edited again: this example is wrong, and thanks to Kindly for pointing out why. CronoDAS gives a much better answer.) Curiously enough, the Peano axioms don't seem to say that S(n)!=n. Lo, a finite model of Peano: X = {0, 1}, where 0+0=0, 0+1=1+0=1+1=1, and the usual equality operation. In this model, x+1=x has a solution, namely x=1. Not a very interesting model, but it serves to illustrate my point below. Contradiction in conclusions always indicates a contradiction in assumptions. And you can always use different assumptions to get different, and perhaps non-contradictory, conclusions. The usefulness and interest of this varies, of course. But proof by contradiction remains valid even if it gives you an idea about other interesting assumptions you could explore. And that's why I feel it's confusing and counterproductive to use ironic language in one example, and serious proof by contradiction in another, completely analogous example, to indicate that in one case you just said "meh, a contradiction, I was wrong" while in the other you invented a cool new theory with new assumptions. The essence of math is formal language and it doesn't mix well with irony, the best of which is the kind that not all readers notice. 4VKS10y But that's the entire point of the quote! That mathematicians cannot afford the use of irony! 0DanArmak10y Yes. My goal wasn't to argue with the quote but to improve its argument. The quote said: And I said, it's not just superficially similar, it's exactly the same and there's no relevant difference between the two that would guide us to use irony in one case and not in the other (or as readers, to perceive irony in one case and serious proof by contradiction in the other). 4Kindly10y Your model violates the property that if S(m) = S(n), then m=n, because S(1) = S(0) yet 1 != 0. You might try to patch this by changing the model so it only has 0 as an element, but there is a further axiom that says that 0 is not the successor of any number. Together, the two axioms used above can be used to show that the natural numbers 0, S(0), S(S(0)), etc. are all distinct. The axiom of induction can be used to show that these are all the natural numbers, so that we can't have some extra "floating" integer x such that S(x) = x. 0DanArmak10y Right. Thanks. 3CCC10y The only relevant difference that I can see is that, in the first paragraph, the solutions are explicitly limited to the rational numbers; in the second case, the solutions are not explicitly limited to the reals. 2IlyaShpitser10y There are lots of total orderings on the complex numbers.
For example: a + bi >= c + di iff a > c or (a = c and b >= d). In fact, if you believe the axiom of choice there are "nice total orders" for any set at all. 2Kindly10y Importantly, however, the complex numbers have no total ordering that respects addition and multiplication. In other words, there's no large set of "positive complex numbers" closed under both operations. This is also the reason why the math in this XKCD strip [http://xkcd.com/410/] doesn't actually work. 0Spinning_Sandwich10y You can still find divisors for Gaussian integers. If x, y, and xy are all Gaussian integers, which will be trivially fulfilled for any x when y=1, then x, y both divide xy. You can then generalize the σ function by summing over all the divisors of z and dividing by |z|. The resulting number σ(z) lies in C (or maybe Q + iQ), not just Q, but it's perfectly well defined. 4Kindly10y If you sum over all the divisors of z, the result is perfectly well defined; however, it's 0. Whenever x divides z, so does -x. Over the integers, this is solved by summing over all positive divisors. However, there's no canonical choice of what divisors to consider positive in the case of Gaussian integers, and making various arbitrary choices (like summing over all divisors in the upper half-plane) leads to unsatisfying results. 0Spinning_Sandwich10y That's like saying the standard choice of branch cut for the complex logarithm is arbitrary. And? When you complexify, things get messier. My point is that making a generalization is possible (though it's probably best to sum over integers with 0 ≤ arg(z) < π, as you pointed out), which is the only claim I'm interested in disputing. Whether it's nice to look at is irrelevant to whether it's functional enough to be punnable. 0Kindly10y You're right -- the generalization works. Mainly what I don't like about it is that σ(z) no longer has the nice properties it had over the integers: for example, it's no longer multiplicative. This doesn't stop Gaussian integers from being friendly, though, and the rest is a matter of aesthetics. 0Spinning_Sandwich10y The well-ordering principle doesn't really have any effect on canonical orderings, like that induced by the traditional less-than relation on the real numbers. This doesn't affect the truth of your claim, but I do think that DanArmak's point was quite separate from the language he chose. He might instead have worded it as having no real solution, so that any solution must be not-real. 0DanArmak10y Gah. You're quite right. I should refrain from making rash mathematical statements. Thank you. 1DanielLC10y The first one shows that assuming that there's a rational solution leads to contradiction, then drops the subject. The second one shows that assuming that there's a real solution leads to a contradiction, then suggests investigating the non-reals. How are you supposed to tell which drops the subject and which suggests investigation? 1gwern10y Isn't that the entire point? I see this as a mathematical version of the modus tollens/ponens point [http://lesswrong.com/lw/ece/rationality_quotes_september_2012/7bc9] made elsewhere in this page. 2DanArmak10y The quote says, This seems to me to mean: the two cases are different; the first is appropriately handled by serious proof-by-contradiction, while the second is appropriately handled by irony. But readers may not be able to tell the difference, because the two texts are similar and irony is hard to identify reliably. So mathematicians should not use irony.
Whereas I would say: the two cases are the same, and irony or seriousness are equally appropriate to both. If readers could reliably identify irony, they would correctly deduce that the author treated the two cases differently, which is in fact a wrong approach. So readers are better served by treating both texts as serious. I'm not saying mathematicians should / can effectively use irony; I'm saying the example is flawed so that it doesn't demonstrate the problems with irony. 4gwern10y The difference is that mathematicians apply modus tollens and reject sqrt(2) being rational, but apply modus ponens and accept the existence of i; why? Because apparently the resultant extensions of theories justify this choice - and this is the irony, the reason one's beliefs are in discordance with one's words/proof and the reader is expected to appreciate this discrepancy. But what one regards as a useful enough extension to justify a modus tollens move is something others may not appreciate or will differ from field to field, and this is a barrier to understanding. 2DanArmak10y I hadn't considered that irony. I was thinking about the explicit irony of the text itself in its proof of sqrt(2) being irrational. The reader is expected to know the punchline, that sqrt(2) is irrational but that irrational numbers are important and useful. So the text that (ironically) appears to dismiss the concept of irrational numbers is in fact wrong in its dismissal, and that is a meta-irony. ...I feel confused by the meta levels of irony. Which strengthens my belief that mathematical proofs should not be ironical if undertaken seriously. 1gwern10y Yes, I feel similarly about this modus stuff; it seems simple and trivial, but the applications become increasingly subtle and challenging, especially when people aren't being explicit about the exact reasoning. 0DanArmak10y If mathematicians behaved simply as you describe, then those resultant extension theories would never have been developed, because everyone would have applied modus tollens in a not-yet-proven-useful case. (Disclaimer: I know nothing about the actual historical reasons for the first explorations of complex numbers.) Therefore, it's best for mathematicians to always keep the M-T and M-P cases in mind when using a proof by contradiction. Of course, a lot of the time the contradiction arises due to theorems already proven from axioms, and what happens if any one of the axioms in a theory is removed is usually well explored. 2Kindly10y You're drawing the parallel differently from the quote's author. The second example requires assuming the existence of complex numbers to resolve the contradiction. The first example requires assuming, not the existence of irrational numbers (we already know about those, or we wouldn't be asking the question!), but the existence of integers which are both even and odd. As far as I know, there are no completely satisfactory ways of resolving the latter situation. Qhorin Halfhand: The Watch has given you a great gift. And you only have one thing to give in return: your life. Jon Snow: I'd gladly give my life. Qhorin Halfhand: I don't want you to be glad about it! I want you to curse and fight until your heart's done pumping. --Game of Thrones, Season 2. Reminds me of Patton: No man ever won a war by dying for his country. Wars were won by making the other poor bastard die for his. You don't win a war by dying for your country. I especially like the way he calls the enemy "the other poor bastard". And not, say, "the bastard".
7Ezekiel10y Also effort, expertise, and insider information on one of the most powerful Houses around. And magic powers. 0RomanDavis10y He has magic powers? 1Ezekiel10y Rot13'd for minor spoiling potential: Ur'f n jnet / fxvapunatre. My brain technically-not-a-lies to me far more than it actually lies to me. -- Aristosophy (again) [-][anonymous]10y 35 We're talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There's no way she doesn't actually believe what she says and still does that. 6prase10y That she gives more than most others doesn't imply that her belief that giving even more is practically impossible isn't hypocritical. Yes, she very likely believes it, thus it is not a conscious lie, but only a small minority of falsities are conscious lies. Yeah, but there's also a certain plausibility to the heuristic which says that you don't get to second-guess her knowledge of what works for charitable giving until you're - not giving more - but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that "hypocrisy" would cause her mind to collapse, and do you really want to second-guess her on that if she's already doing more than an order of magnitude better than what your own mental setup permits? 9prase10y I am actually inclined to believe Wise's hypothesis (call it H) that being overly selfless can hamper one's ability to help others. I was only objecting to army1987's implicit argument that because she (Wise) clearly believes H, Dolores1984's suspicion of H being a self-serving untrue argument is unwarranted. 1[anonymous]10y There's an Italian proverb “Everybody is a faggot with other people's asses”, meaning more-or-less ‘everyone is an idealist when talking about issues that don't directly affect them/situations they have never experienced personally”. 3[anonymous]10y You're using hypocritical in a weird way -- I'd only normally use it to mean ‘lying’, not ‘mistaken’. 5prase10y I use "hypocrisy" to denote all instances of people violating their own declared moral standards, especially when they insist they aren't doing it after receiving feedback (if they can realise what they did after being told, only then I'd prefer to call it a 'mistake'). The reason why I don't restrict the word to deliberate lying is that I think deliberate lying of this sort is extremely rare; self-serving biases are effective in securing that. 0juliawise10y I don't believe it's practically impossible to give more than I do. I could push myself farther than I do. I don't perfectly live up to my own ideals. Given that I'm a human, I doubt any of you find that surprising. The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide. Charles Handy describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. 
The quotes are as follows: If dying after a billion years doesn't sound sad to you, it's because you lack a thousand-year-old brain that can make trillion-year plans. — Aristosophy One wish can achieve as much as you want. What the genie is really offering is three rounds of feedback. — Aristosophy If anyone objects to this policy response, please PM me so as to not feed the troll. The following quotes were heavily upvoted, but then turned out to be made by a Will Newsome sockpuppet who edited the quote afterward. The original comments have been banned. The quotes are as follows: Defection too far. Ban Will. 5Armok_GoB10y Will is a cute troll. Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places. Will is a cute troll. I've heard this claimed. This behavior isn't cute. Hmm, after observing it a few times on various forums I'm starting to consider that having a known, benign resident troll might keep away more destructive ones. No idea how it works but it doesn't seem that far-fetched given all the strange territoriality-like phenomena occasionally encountered in the oddest places. This would be somewhat in fitting with findings in Cialdini. One defector kept around and visibly punished or otherwise looking low status is effective at preventing that kind of behavior. (If not Cialdini, then Greene. Probably both.) Edited how? If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome." 0adamisom10y That is cute.. no? More childish than evil. He should just be warned that's trolling. There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma. 0[anonymous]10y It was edited to add something like "Will Newsome is such a badass" -- Socrates [-][anonymous]10y 11 I do find some of Will Newsome's contributions interesting. OTOH, this behaviour is pretty fucked up. (I was wondering how hard it would be to implement a software feature to show the edit history of comments.) 3Incorrect10y If only the converse were true... 0Hawisher10y "...if you lack a thousand-year-old brain that can make trillion-year plans, dying after a billion years doesn't sound sad to you"? I'm confused as to what you're trying to say. Are you saying that dying after a billion years sounds sad to you? 2nshepperd10y "If you lack a thousand-year-old brain that can make trillion-year plans, it's because dying after a billion years doesn't sound sad to you." I think meaning it's unfortunate that thinking that dying after a billion years is sad doesn't by itself give you the power to live that long. Maybe. 0Hawisher10y I was never one for formal logic, but isn't that the contrapositive? I was under the impression that the converse of p then q was q then p. 0Eugine_Nier10y Yes and that's what nshepperd wrote. 0Hawisher10y Oh wow, never mind. My brain was temporarily broken. Is it considered bad etiquette here to retract incorrect comments? 0Eugine_Nier10y When you retract the comment is simply struck-through not deleted, so no. 1Incorrect10y And therefore you would have a thousand-year-old brain that can make trillion-year plans. 0MugaSofer10y Seems legit. The only road to doing good shows, is doing bad shows. 
• Louis C.K., on Reddit Unfortunately, doing bad shows is not only a route to doing good shows. 3imaxwell10y True, and I hope no one thinks it is. So we can conclude that doing bad shows at first is not a strong indicator of whether you have a future as a showman. I guess I see the quote as being directed at people who are so afraid of doing a bad show that they'll never get in enough practice to do a good show. Or they practice by, say, filming themselves telling jokes in their basement and getting critiques from their friends who will not be too mean to them. In either case, they never get the amount of feedback they would need to become good. For such a person to hear "Yes, you will fail" can be oddly liberating, since it turns failure into something accounted for in their longer-term plans. “Why do you read so much?” Tyrion looked up at the sound of the voice. Jon Snow was standing a few feet away, regarding him curiously. He closed the book on a finger and said, “Look at me and tell me what you see.” The boy looked at him suspiciously. “Is this some kind of trick? I see you. Tyrion Lannister.” Tyrion sighed. “You are remarkably polite for a bastard, Snow. What you see is a dwarf. You are what, twelve?” “Fourteen,” the boy said. “Fourteen, and you’re taller than I will ever be. My legs are short and twisted, and I walk with difficulty. I require a special saddle to keep from falling off my horse. A saddle of my own design, you may be interested to know. It was either that or ride a pony. My arms are strong enough, but again, too short. I will never make a swordsman. Had I been born a peasant, they might have left me out to die, or sold me to some slaver’s grotesquerie. Alas, I was born a Lannister of Casterly Rock, and the grotesqueries are all the poorer. Things are expected of me. My father was the Hand of the King for twenty years. My brother later killed that very same king, as it turns out, but life is full of these little ironies. My sister married the new king and ... (read more) I'm surprised at how often I have to inform people of this... I have mild scoliosis, and so I usually prefer sitting down and kicking up my feet, usually with my work in hand. Coming from a family who appreciates backbreaking work is rough when the hard work is even harder and the pain longer-lasting... which would be slightly more bearable if the aforementioned family did not see reading MYSTERIOUS TEXTS on a Kindle and using computers for MYSTERIOUS PURPOSES as signs of laziness and devotion to silly frivolities. I have a sneaking suspicion that this is not a very new situation. 9ArisKatsaris10y I think the quote could be trimmed to its last couple sentences and still maintain the relevant point.. I disagree, in fact. That books strengthen the mind is baldly asserted, not supported, by this quote - the rationality point I see in it is related to comparative advantage. 8ChrisHallquist10y Oh, totally. But I prefer the full version; it's really a beautifully written passage. Discovery is the privilege of the child, the child who has no fear of being once again wrong, of looking like an idiot, of not being serious, of not doing things like everyone else. Alexander Grothendieck 7Fyrius10y ...screw it, I'm not growing up. 6Sabiola10y I remember being very much afraid of all those things as a child. I'm getting better now. ...a good way of thinking about minimalism [about truth] and its attractions is to see it as substituting the particular for the general. It mistrusts anything abstract or windy. 
Both the relativist and the absolutist are impressed by Pilate's notorious question 'What is Truth?', and each tries to say something useful at the same high and vertiginous level of generality. The minimalist can be thought of turning his back on this abstraction, and then in any particular case he prefaces his answer with the prior injunction: you tell me. This does not mean, 'You tell me what truth is.' It means, 'You tell me what the issue is, and I will tell you (although you will already know, by then) what the truth about the issue consists in.' If the issue is whether high tide is at midday, then truth consists in high tide being at midday... We can tell you what truth amounts to, if you first tell us what the issue is. There is a very powerful argument for minimalism about truth, due to the great logician Gottlob Frege. First, we should notice the transparency property of truth. This is the fact that it makes no difference whether you say that it is raining, or it is true that it is raining, or tr ... (read more) 9Alejandro110y The pithiest definition of Blackburn's minimalism I've read is in his review of Nagel's The Last Word: It is followed by an even pithier response to how Nagel refutes relativism (pointing that our first-order conviction that 2+2=4 or that murder is wrong is more certain than any relativist doubts) and thinks that this establishes a quasi-Platonic absolutism as the only alternative: 0buybuydandavis10y "What is truth" is a pretty good question, though a better one is "what do we do with truths?" We do a lot of things with truths, it can serve a lot of different functions. The problem comes where people doing different things with their truths talk to each other. "Nontrivial measure or it didn't happen." -- Aristosophy (Who's Kate Evans? Do we know her? Aristosophy seems to have rather a lot of good quotes.) *cough* "I made my walled garden safe against intruders and now it's just a walled wall." -- Aristosophy Attachment? This! Is! SIDDHARTHA! Is that you? That's ingenious. For more rational flavor: Live dogmatic, die wrong, leave a discredited corpse. This should be the summary for entangled truths: To find the true nature of a thing, find the true nature of all other things and look at what is left over. how to seem and be deep: Blessed are those who can gaze into a drop of water and see all the worlds and be like who cares that's still zero information content. Dark Arts: The master said: "The master said: "The master said: "The master said: "There is no limit to the persuasive power of social proof."""" More Dark arts: One wins a dispute, not by minimising potential counterarguments' plausibility, but by maximising their length. Luminosity: Have you accepted your brain into your heart? 7Alicorn10y No, I'm not her. I don't know who she is, but her Twitter is indeed glorious. (And Google Reader won't let me subscribe to it the way I'm subscribed to other Twitters, rar.) She's got to be from here, here's learning biases can hurt people: Heuristics and biases research: gaslighting the human race? Cryonics: "Are you signed up for Christonics?" "No, I'm still prochristinating." I'm starting to think this is someone I used to know from tvtropes. It is now clear to us what, in the year 1812, was the cause of the destruction of the French army. 
No one will dispute that the cause of the destruction of Napoleon's French forces was, on the one hand, their advance late in the year, without preparations for a winter march, into the depths of Russia, and, on the other hand, the character that the war took on with the burning of Russian towns and the hatred of the foe aroused in the Russian people. But then not only did no one foresee (what now seems obvious) that this was the only way that could lead to the destruction of an army of eight hundred thousand men, the best in the world and led by the best generals, in conflict with a twice weaker Russian army, inexperienced and led by inexperienced generals; not only did no one foresee this, but all efforts on the part of the Russians were constantly aimed at hindering the one thing that could save Russia, and, on the part of the French, despite Napoleon's experience and so-called military genius, all efforts were aimed at extending as far as Moscow by the end of summer, that is, at doing the very thing that was to destroy them. • Leo Tolstoy, "War and Peace", trans. Pevear and Volokhonsky "Possibly the best statistical graph ever drawn" http://www.edwardtufte.com/tufte/posters You know those people who say "you can use numbers to show anything" and "numbers lie" and "I don't trust numbers, don't give me numbers, God, anything but numbers"? These are the very same people who use numbers in the wrong way. "Junior", FIRE JOE MORGAN "If your plan is for one year plant rice. If your plan is for 10 years plant trees. If your plan is for 100 years educate children" - Confucius ...If your plan is for eternity, invent FAI? 3Eliezer Yudkowsky10y Depends how you interpret the proverb. If you told me the Earth would last a hundred years, it would increase the immediate priority of CFAR and decrease that of SIAI. It's a moot point since the Earth won't last a hundred years. 5[anonymous]10y Sorry, Earth won't last a hundred years? 5MugaSofer10y Nanotech and/or UFAI. 3Mitchell_Porter10y The idea seems to be that even if there is a friendly singularity, Earth will be turned into computronium or otherwise transformed. 3TheOtherDave10y I am surprised that this claim surprises you. A big part of SI's claimed value proposition is the idea that humanity is on the cusp of developing technologies that will kill us all if not implemented in specific ways that non-SI folk don't take seriously enough. 0[anonymous]10y Of course you're right. I guess I haven't noticed the topic come up here for a while, and haven't seen the apocalypse predicted so straightforwardly (and quantitatively) before so am surprised in spite of myself. Although, in context, it sounds like EY is saying that the apocalypse is so inevitable that there's no need to make plans for the alternative. Is that really the consensus at EY's institute? 1TheOtherDave10y I have no idea what the consensus at SI is. 0[anonymous]10y I guess he means “only last a hundred years”, not “last at least a hundred years”. 0TheOtherDave10y Just to make sure I understand: you interpret EY to be saying that the Earth will last more than a hundred years, not saying that the Earth will fail to last more than a hundred years. Yes? If so, can you clarify how you arrive at that interpretation? 0[anonymous]10y “If you told me the Earth would only last a hundred years (i.e. won't last longer than that) .... It's a moot point since the Earth won't only last a hundred year (i.e. it will last longer).” At least that's what I got on the first reading. 
I think I could kind-of make sense of "it would increase the immediate priority of CFAR and decrease that of SIAI" under either hypothesis about what he means, though one interpretation would need to be more strained than the other. 4ArisKatsaris10y The idea is that if Earth lasts at least a hundred years (if that's a given), then the possibility of a uFAI in that timespan severely decreases -- so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority and it becomes a higher priority to develop CFAR that will increase the public's rationality for the future generations, so that the future generations don't launch a uFAI. 0[anonymous]10y (The other interpretation would be "If the Earth is going to only last a hundred years, then there's not much point in trying to make a FAI since in the long-term we're screwed anyway, and raising the sanity waterline will make us enjoy more what time there is left.") EDIT: Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me -- even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there). 4Eliezer Yudkowsky10y I was just using "Earth" as a synonym for "the world as we know it". I think I disagree; care to make it precise enough to bet on? I'm expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade. I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity. How's that sound? All of the above is up for negotiation. As wedrifid says, this is a no-brainer "accept" (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI, otherwise I'll accept via Paypal. Your implied annual interest rate assuming a 100% probability of winning is 0.7% (plus inflation adjustment; see the arithmetic below). Please let me know whether you decide to go through with it; withdrawal is completely understandable - I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong - I would probably find $100,000 much more tempting.) PayPal-ed to sentience at pobox dot com. Don't worry, my only debtor who pays higher interest rates than that is my bank. As long as that's not my main liquidity bottleneck I'm happy to follow medieval morality on lending. If you publish transaction data to confirm the bet, please remove my legal name. Bet received. I feel vaguely guilty and am reminding myself hard that money in my Paypal account is hopefully a good thing from a consequentialist standpoint. Bet recorded: LW bet registry, PB.com.
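[Editor's note: a quick check of the "0.7%" aside — my arithmetic, not from the thread. Treating the bet as a sure-win loan of $100 repaid as $200 after 100 years, the implied annual rate r solves the equation below, matching the quoted figure before the inflation adjustment mentioned.]

```latex
% implied annual interest rate on the bet, assuming a certain win
\[
  100\,(1+r)^{100} = 200
  \;\Longrightarrow\;
  r = 2^{1/100} - 1 = e^{(\ln 2)/100} - 1 \approx 0.00696 \approx 0.7\%
\]
```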
7wedrifid10y (Neglecting any logistic or legal issues) this sounds like a no-brainer for Eliezer (accept). Although you would be better served by making the amounts you give and expect to receive if you win somewhat more proportionate to the expected utility of the resources at the time. If Eliezer was sure he was going to lose, he should still take the low-interest loan. Even once the above is accounted for Eliezer should still accept the bet (in principle). 3MixedNuts10y Dollar amounts are meant as purchasing-power-adjusted. I am sticking my fingers in my ears and chanting "La la, can't hear you" at discounting effects. 3Mitchell_Porter10y That's a nice set of criteria by which to distinguish various futures (and futurists). 0MugaSofer10y Care to explain why? You sound like you expect nanotech by then. I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn't be floored by them being solved in theory (though it does sound hard). What I don't expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn't be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries. The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn't quite enough for Somalia to get its act together. If there's a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won't regret that $100 much. I have no idea what wars will look like, but I don't expect them to be nonexistent or nonlethal. Given no gam... 2MugaSofer10y Thanks for explaining! Of course, nanotech could be self-replicating and thus exponentially cheap, but the likelihood of that is ... debatable. 3Paul Crowley10y I feel an REM song coming on... 3[anonymous]10y (I guess I had been primed to take “Earth” to mean ‘a planet or dwarf planet (according to the current IAU definition) orbiting the Sun between Venus and Mars’ by this [http://qntm.org/destroy]. EDIT: Dragon Ball too, where destroying a planet means turning it into dust, not just rendering it uninhabitable.) 2ArisKatsaris10y EY does seem in a darker mood than usual lately, so it wouldn't surprise me to see him implying pessimism about our chances out loud, even if it doesn't go so far as "admitting defeat". I do hope it's just a mood, rather than that he has rationally updated his estimation of our chances of survival to be even lower than they already were. :-) 1Decius10y "The world as we know it" ends if FAI is released into the wild. 4[anonymous]10y When I had commented, EY hadn't clarified yet that by Earth he meant “the world as we know it”, so I didn't expect “Earth” to exclude ‘the planet between Venus and Mars 50 years after a FAI is started on it’. 0Decius10y So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that the world as we know it will last for at least 100 years. There is something wrong.
Nothing can be soundly understood / If daylight itself needs proof. 6siodine10y Richard Carrier on solipsism, but not nearly as pithy: I think that's actually a really terrible bit of arguing. There are only two logically possible explanations: random chance, or design. We can stop right there. If we're all the way back at solipsism, we haven't even gotten to defining concepts like 'random chance' or 'design', which presume an entire raft of external beliefs and assumptions, and we surely cannot immediately say there are only two categories unless, in response to any criticism, we're going to include a hell of a lot under one of those two rubrics. Which probability are we going to use, anyway? There are many more formalized versions than just Kolmogorov's axioms (which brings us to the analytic and synthetic problem). And much of the rest goes on in a materialist vein which itself requires a lot of further justification (why can't minds be ontologically simple elements? Oh, your experience in the real world with various regularities has persuaded you that it is inconsistent with the evidence? I see...) Even if we granted his claims about complexity, why do we care about complexity? And so on. Yes, if you're going to buy into a (very large) number of materialist non-solipsist claims, then you're going to have trouble making a case in such terms for solipsism. But if you've bought all those materialist or externalist claims, you've already rejected solipsism and there's no tension in the first place. And he doesn't do a good job of explaining that at all. 2siodine10y Good points, but then likewise how do you define and import the designations of 'hand' or 'here' and justify intuitions or an axiomatic system of logic (and I understood Carrier to be referring to epistemic solipsism like Moore -- you seem to be going metaphysical)? (or were you not referring to Moore's argument in the context of skepticism?) 0gwern10y I think Moore's basic argument works on the level of epistemic skepticism, yes, but also metaphysics: some sort of regular metaphysics and externalism is what one believes, and what provides the grist for the philosophical mill. If you don't credit the regular metaphysics, then why do you credit the reasoning and arguments which led you to the more exotic metaphysics? I'm not sure what skeptical arguments it doesn't work for. I think it may stop at the epistemic level, but that may just be because I'm having a hard time thinking of any ethics examples (which is my usual interest on the next level down of abstraction). 0siodine10y The way I see it, Moore's argument gets you to where you're uncertain of the reasoning pro or contra skepticism. But if you start from the position of epistemic solipsism (I know my own mind, but I'm uncertain of the external world), then you have reason (more or less depending how uncertain you are) to side with common sense. However, if you start at metaphysical solipsism (I'm uncertain of my own mind), then such an argument could even be reason to not side with common sense (e.g., there are little people in my mind trying to manipulate my beliefs; I must not allow them to). 2Mitchell_Porter10y A hypothesis like... I'm dreaming. 5Jay_Schweikert10y This also made me think of the aphorism "if water sticks in your throat, with what will you wash it down?" Subway ad: "146 people were hit by trains in 2011. 47 were killed." Guy on Subway: "That tells me getting hit by a train ain't that dangerous."
• Nate Silver, on his Twitter feed @fivethirtyeight This reminds me of how I felt when I learned that a third of the passengers of the Hindenburg survived. Went something like this, if I recall: Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them. Actually, according to Wikipedia, only 35 out of the 97 people aboard were killed. Not enough to kill even 50% of them. 1[anonymous]10y jaw drops 6TheOtherDave10y It helps to remember that the Hindenburg was more or less parked when it exploded... I think it was like 30 feet in the air? (I'm probably wrong about the number, but I don't think I'm very wrong.) Most of the passengers basically jumped off. And, sure, a 30 foot drop is no walk in the park, but it's not that surprising that most people survive it. 3[anonymous]10y (Well, then “out of the sky” is kind of an exaggeration, since you wouldn't normally consider yourself to be in the sky when on a balcony on the fourth floor.) 1TheOtherDave10y Well, unlike the balcony of a building, a floating blimp (even close to the ground) is floating, rather than resting on the ground, so I suppose one could make the argument. But yeah, I'm inclined to agree that wherever "the sky" is understood to be, and I accept that this is a social construct rather than a physical entity, it's at least a hundred feet or so above ground [-][anonymous]10y 16 Wait, 32% probability of dying “ain't that dangerous”? Are you f***ing kidding me? [-][anonymous]10y 29 If I expect to be hit by a train, I certainly don't expect a ~68% survival chance. Not intuitively, anyways. I'm guessing that even if you survive, your quality of life is going to take a hit. Accounting for this will probably bring our intuitive expectation of harm closer to the actual harm. 5[anonymous]10y Hmmm, I can't think of any way of figuring out what probability I would have guessed if I had to guess before reading that. Damn you, hindsight bias! (Maybe you could spell out and rot-13 the second figure in the ad...) 3faul_sname10y I would expect something like that chance. Being hit by a train will be very similar to landing on your side or back after falling 3 to 10 meters (I'm guessing most people hit by trains are at or near a train station, so the impacts will be relatively slow). So the fatality rate should be similar. Of course, that prediction gives a fatality rate of only 5-20%, so I'm probably missing something. 4khafra10y There's the whole crushing and high voltage shock thing, depending on how you land. 0[anonymous]10y Well, lightning strikes kill less than half the people they hit. 2mfb10y Lightning strikes usually do not involve physical impacts - I think "falling from 3-10 meters and getting struck by lightning" would be worse. In addition, the length of the current flow depends on the high voltage system. 4wedrifid10y This seems overwhelmingly likely. 6DanielLC10y I can't help but think: Subway ad: "146 people were hit by trains in 2011. 47 were killed." Guy at Subway: "What does that have to do with sandwiches?" "In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. 
An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market from the market in cars for people who just want a reliable means of getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live." -- Peter Singer As it is probably intended, the more reminders like this I read, the more ethical I should become. As it actually works, the more of this I read, the less I become interested in ethics. Maybe I am extraordinarily selfish and this effect doesn't happen to most, but it should be at least considered that constant preaching of moral duties can have counterproductive results. I suspect it's because authors of "ethical reminders" are usually very bad at understanding human nature. What they essentially do is associate "ethical" with "unpleasant", because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it's bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing. But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is better align these short-term feelings with our long-term goals, so we actually feel happy when contributing to our long-term goals. And how exactly are these "ethical reminders" contributing to the process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough, therefore you don't deserve the feelings of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they convinced you (your "elephant") that you can't. Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it. 2prase10y Yes. And it works this way even without insisting that more can be done; even if you live up to the demands, or even if the moral preachers recognise your right to be happy sometimes, the warm feeling from doing good is greatly diminished when you are told that philanthropy is just expected, that helping others is not something one does naturally with joy, but that it should be a conscious effort, a hard work, to be done properly. Not to mention the remarks of Mark Twain on a fundraiser he attended once: Well, Hawley worked me up to a great state. I couldn't wait for him to get through [his speech]. I had four hundred dollars in my pocket. I wanted to give that and borrow more to give. You could see greenbacks in every eye. But he didn't pass the plate, and it grew hotter and we grew sleepier. My enthusiasm went down, down, down - $100 at a time, till finally when the plate came round I stole 10 cents out of it. [Prolonged laughter.] So you see a neglect like that may lead to crime. 7NancyLebovitz10y It might be worth taking a look at Karen Horney's [http://www.en.wikipedia.org/wiki/Karen_Horney] work.
She was an early psychoanalyst who wrote that if a child is abused, neglected, or has normal developmental stages overly interfered with, they are at risk of concluding that just being a human being isn't good enough, and will invent inhuman standards for themselves. I'm working on understanding the implications (how do you get living as a human being right? :-/ ), but I think she was on to something. I wasn't abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population? Of course that's not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn't, unless something went horribly wrong in her childhood. 5NancyLebovitz10y Second thought: Maybe I should have not mentioned her theory about why people adopt inhuman standards, and just focused on the idea that inhuman standards are likely to backfire, as Viliam_Bur did. Also-- if I reread I'll check this-- I think Horney focused on inhuman standards of already having a quality, which is not quite the same thing as having inhuman standards about what one ought to achieve, though I think they're related. 2NancyLebovitz10y I was thinking about prase in particular, who sounds as though he might have some problems with applying high standards in a way that's bad for him. Horney died in 1952, so she might not have had access to rationalists in your sense of the word. When I said it might be worth taking a look at Horney's work, I really did mean I thought it might be worth exploring, not that I'm very sure it applies. It seems to be of some use for me. 3prase10y To be clear, I don't have problems with applying high standards to myself, unless not wishing to apply such standards qualifies as a problem. However I am far more willing to consider myself an altruist (and perhaps behave accordingly) when other people don't constantly remind me that it's my moral obligation. 4NancyLebovitz10y Thanks for the explanation, and my apologies for jumping to conclusions. I've been wondering why cheerleading sometimes damages motivation-- there's certainly a big risk of it damaging mine. The other half would be why cheerleading sometimes works, and what the differences are between when it works and when it doesn't. At least for me, I tend to interpret cheerleading as "Let me take you over for my purposes. This project probably isn't worth it for you, that's why I'm pushing you into it instead of letting you see its value for yourself." with a side order of "You're too stupid to know what's valuable, that's why you have to be pushed." I'm not sure what cheerleading feels like to people who like it. 0prase10y No need to apologise. The feeling of being forced to pursue someone else's goals is certainly part of it. But even if the goals align, being pushed usually means that one's good deeds aren't going to be fully appreciated by others, which too is a great demotivator. 0NancyLebovitz10y I think the feeling that one's good deeds will be unappreciated is especially a risk for altruism. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. I'm not at all convinced that this is the case. After all, the shampoos are being designed to be less painful, and you don't need to test on ten thousand rabbits. Considering the distribution of the shampoos, this may save suffering even if you regard human and rabbit suffering as equal in disutility.
2Dolores198410y I'm not at all convinced of this. It seems to me that a genuinely ethical life requires extraordinary, desperate asceticism. Anything less is to place your own wellbeing above that of your fellow man. Not just above, but many orders of magnitude above, for even trivial luxuries. Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy. 2katydee10y And the great philosopher Diogenes [http://en.wikipedia.org/wiki/Diogenes_of_Sinope] would disagree with her. So, how many lives did he save again? Clever guy, but I'm not sure if you want to follow his example. 2IlyaShpitser9y If I may be so bold as to summarize this thread: 1. Whatever utility calculus you follow, it is a mathematical model. 2. "All models are false." 3. In particular, what's going wrong here is your model is treating you, the agent, as atomic. In reality, as Kaj Sotala described very well below, you are not an atomic agent, you have an internal architecture, and this architecture has very important ramifications for how you should think about utilities. If I may make an analogy from the field of AI. In the old days, AI was concerned about something called "discrete search," which is just a brute force way to look for an optimum in a state space, where each state is essentially an atomic point. The alpha-beta pruning search Deep Blue uses to play chess is an example of discrete search. At some point it was realized that for many problems atomic point-like states resulted in a combinatorial explosion, and in addition states had salient features describable by, say, logical languages. As this realization was implemented, you no longer had a state-as-a-point, but state-as-a-collection-of-logical-statements. And the field of planning was born. Planning has some similarities to discrete search, but because we "opened up" the states into a full blown logical description, the character of the problem is quite different. I think we need to "open up the agent." [-][anonymous]10y 25 To use an analogy, if you attend a rock concert and take a box to stand on then you will get a better view. If others do the same, you will be in exactly the same position as before. Worse, even, as it may be easier to lose your balance and come crashing down in a heap (and, perhaps, bringing others with you). -- Iain McKay et al., An Anarchist FAQ, section C.7.3 Tropical rain forests, bizarrely, are the products of prisoner's dilemmas. The trees that grow in them spend the great majority of their energy growing upwards towards the sky, rather than reproducing. If they could come to a pact with their competitors to outlaw all tree trunks and respect a maximum tree height of ten feet, every tree would be better off. But they cannot. Matt Ridley, in The Origins of Virtue 0[anonymous]10y "Better off" according to whose utility function? 6alex_zag_al10y yeah, it's not obvious from this quote, but having read the book, I know what he means. The utility function of the tree is the sum, over all individuals, of the fraction of genes that each other individual has in common with it. He constantly talks as if plants, chromosomes, insects etc. desire to maximize this number. I think it works, because when an organism is in its environment of evolutionary adaptation, finding that a behavior makes this number bigger than alternative behaviors would explain why the organism carries out that behavior. And if the organism does not carry out the behavior, then you need some explanation for why not.
Right? 1thomblake10y 0[anonymous]10y They'd expend less energy per surviving descendent produced. [-][anonymous]10y 25 Neither side of the road is inherently superior to the other, so we should all choose for ourselves on which side to drive. #enlightenment 9roystgnr10y Don't we all choose for ourselves on which side to drive? There's usually nobody else ready to grab the wheel away from you... 0Document10y There are police ready to pull you over, for certain values of "ready". (Not commenting on whether that relates to Evans' point.) 6DaFranker10y Have successfully quoted this to counter a relativist-truth argument that was aimed towards supporting "freedom of faith" even in hypothetical scenarios where the majority of actors would end up promoting and following harmful faiths. While counterintuitive to me, it was apparently a necessary step before the other party could even comprehend the fallacy of gray that was being committed. 1Grognor10y You may find it felicitous to link directly to the tweet [https://twitter.com/aristosophy/status/242391295295901697]. 0[anonymous]10y You responded to the wrong post or gave the wrong link. I do see your point, fixed both quotes. Oh, right, Senjōgahara. I've got a great story to tell you. It's about that man who tried to rape you way back when. He was hit by a car and died in a place with no connection to you, in an event with no connection to you. Without any drama at all. [...] That's the lesson for you here: You shouldn't expect your life to be like the theater. -- Kaiki Deishū, Episode 7 of Nisemonogatari. Does the order of the two terminal conditions matter? / Think about it. Does the order of the two terminal conditions matter? / Try it out! Does the order of the two previous answers matter? / Yes. Think first, then try. • Friedman and Felleisen, The Little Schemer 5RomanDavis10y Could you unpack that for me? Sure. The book is a sort of resource for learning the programming language Scheme, where the authors will present an illustrative piece of code and discuss different aspects of its behavior in the form of a question-and-answer dialogue with the reader. In this case, the authors are discussing how to perform numerical comparisons using only a simple set of basic procedures, and they've come up with a method that has a subtle error. The lines above encourage the reader to figure out if and why it's an error. With computers, it's really easy to just have a half-baked idea, twiddle some bits, and watch things change, but sometimes the surface appearance of a change is not the whole story. Remembering to "think first, then try" helps me maintain the right discipline for really understanding what's going on in complex systems. Thinking first about my mental model of a situation prompts questions like this: • Does my model explain the whole thing? • What would I expect to see if my model is accurate? Can I verify that I see those things? • Does my model make useful predictions about future behavior? Can I test that now, or make sure that when it happens, I gather the data I need to confirm it? It's harder psychologically (and maybe too late) to ask those questions in retrospect if you try first, and then think, and if you skip asking them, then you'll suffer later. 6RomanDavis10y You know, I've seen a lot on here about how programming relates to thinking relates to rationality. I wonder if it'd be worth trying and where/how I might get started. 
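To make cata's explanation concrete, here is the sort of comparison the book builds up, translated from the book's Scheme into Python (a sketch of the exercise, not Friedman and Felleisen's exact code):

def less_than(n, m):
    # n < m for non-negative integers, using only zero-tests and decrements
    if m == 0:   # nothing is less than zero, so the answer is False
        return False
    if n == 0:   # here m > 0, so 0 < m holds
        return True
    return less_than(n - 1, m - 1)

print(less_than(2, 3), less_than(3, 3), less_than(4, 3))  # True False False

The order of the two terminal conditions is exactly where the subtle error hides: when n equals m, both counters reach zero together, so if the n == 0 test were asked first, less_than(3, 3) would wrongly answer True. Thinking that through before running it is the discipline the quote is recommending.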
4cata10y It's certainly at least worth trying, since among things to learn it may be both unusually instructive and unusually useful. Here's the big list of LW recommendations. [http://lesswrong.com/lw/cpz/computer_science_and_programming_links_and/] 3RomanDavis10y Khan Academy has a programming course? I might try it. Mostly, I want the easiest, most handholdy experience possible. Baby talk if necessary. Every experience informs me that programming is hard. This is the easiest, most handholdy experience possible: http://learnpythonthehardway.org/book/ A coworker of mine who didn't know any programming, and who probably isn't smarter than you, enjoyed working through it and has learned a lot. Programming is hard, but a lot of good things are hard. 1CCC10y The first trick is to be able to describe how to solve a problem; and then break that description down into the smallest possible units and write it out such that there's absolutely no possibility of a misunderstanding, no matter what conditions occur. Once you've got that done, it's fairly easy to learn how to translate it into a programming language. 3DaFranker10y Which is also why it helps, conversely, for reduction and rational thinking: The same skill that applies to formulating clear programs applies to formulating clear algorithms and concepts in any format, including thought. 1RobinZ10y I would recommend trying, although I'm not really the right person to ask on starting points, if for no other reason than to test the hypothesis that learning programming aids the study of rationality. 0Antisuji10y This is a great reminder, and is not always easy advice to follow, especially if your edit-compile-run cycle tightens or collapses [http://vimeo.com/36579366] completely [http://www.chris-granger.com/2012/04/12/light-table---a-new-ide-concept/]. I think there's a tricky balance between understanding something explicitly, which you can only do by training your model by thinking carefully, and understanding something intuitively, which is made much easier with tools like the ones I linked. Do you have a sense for which kind of understanding is more useful in practice? I suspect that when I design or debug software I am making heavier use of System 1 thinking than it seems, and I am often amazed at how detailed a model I have of the behavior of the code I am working with. 3cata10y No, I don't, which I realized after spending half an hour trying to compose a reply to this. Sorry. The problem with any ideology is that it gives the answer before you look at the evidence. Bill Clinton This is why I think it's not too terribly useful to give labels like "good person" or "bad person," especially if our standard for being a "bad person" is "someone with anything less than 100% adherence to all the extrapolated consequences of their verbally espoused values." In the end, I think labeling people is just a useful approximation to labeling consequences of actions. Julia, Jeff, and others accomplish a whole lot of good. Would they, on average, end up accomplishing more good if they spent more time feeling guilty about the fact that they could, in theory, be helping more? This is a testable hypothesis. Are people in general more likely to save more lives if they spend time thinking about being happy and avoiding burnout, or if they spend time worrying that they are bad people making excuses for allowing themselves to be happy? The question here is not whether any individual person could be giving more; the answer is virtually always "yes." 
The question is, what encourages giving? How do we ensure that lives are actually being saved, given our human limitations and selfish impulses? I think there's great value in not generating an ugh-field around charity. I've always thought of the SkiFree monster as a metaphor for the inevitability of death. "SkiFree, huh? You know, you can press 'F' to go faster than the monster and escape." -- xkcd 667 There is nothing noble in being superior to your fellow man; true nobility is being superior to your former self. Ernest Hemingway 9wedrifid10y Excellent. A shortcut to nobility. One day of being as despicable as I can practically manage and I'm all set. 4WingedViper10y It does not state which (!) former self, so I would expect some sort of median or mean or summary of your former self and not just the last day. So I'm sorry but there is no shortcut ;-) [-][anonymous]10y 22 "If at first you don't succeed, switch to power tools." -- The Red Green Show 2DanArmak10y I can confirm that this works. Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt. When we were first drawn together as a society, it had pleased God to enlighten our minds so far as to see that some doctrines, which we once esteemed truths, were errors; and that others, which we had esteemed errors, were real truths. From time to time He has been pleased to afford us farther light, and our principles have been improving, and our errors diminishing. Now we are not sure that we are arrived at the end of this progression, and at the perfection of spiritual or theological knowledge; and we fear that, if we should once print our confession of faith, we should feel ourselves as if bound and confin'd by it, and perhaps be unwilling to receive farther improvement, and our successors still more so, as conceiving what we their elders and founders had done, to be something sacred, never to be departed from. Michael Welfare, quoted in The Autobiography of Benjamin Franklin "Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity -- in all this vastness -- there is no hint that help will come from elsewhere to save us from ourselves. It is up to us." - Sagan Rorschach: You see, Doctor, God didn't kill that little girl. Fate didn't butcher her and destiny didn't feed her to those dogs. If God saw what any of us did that night he didn't seem to mind. From then on I knew... God doesn't make the world this way. We do. EDIT: Quote above is from the movie. Verbatim from the comic: It is not God who kills the children. Not fate that butchers them or destiny that feeds them to the dogs. It's us. Only us. I personally think that Watchmen is a fantastic study* on all the different ways people react to that realisation. ("Study" in the artistic sense rather than the scientific.) If a thing can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it's still a measurement if it tells you more than you knew before. Douglas Hubbard, How to Measure Anything 0tgb10y This is the second time [http://lesswrong.com/r/discussion/lw/efw/chief_probability_officer/] I've come across you mentioning Hubbard. Is the book good and, if so, what audience is it good for? 0lukeprog10y How to Measure Anything is surprisingly good, so I added it here [http://lukeprog.com/GeneralNonfiction.html].
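Hubbard's claim that even a "fuzzy" measurement is a measurement can be put in numbers (my own illustration with invented figures, not an example from his book): a single noisy observation still narrows a prior.

prior_mean, prior_var = 100.0, 30.0 ** 2  # a wide prior belief about some quantity
obs, obs_var = 130.0, 40.0 ** 2           # one very noisy measurement

# Conjugate normal update: precisions (inverse variances) add
post_var = 1 / (1 / prior_var + 1 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
print(round(post_mean, 1), round(post_var ** 0.5, 1))  # 110.8 24.0

The posterior standard deviation drops from 30 to 24, so even this crude instrument told us more than we knew before, which is Hubbard's whole point.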
[-][anonymous]10y 19 Erode irreplaceable institutions related to morality and virtue because of their contingent associations with flawed human groups #lifehacks 5simplicio10y I was ready to applaud the wise contrarianism here, but I'm having trouble coming up with actual examples... marriage, maybe? 3arundelo10y I don't know if this is what she was thinking of but church [http://lesswrong.com/lw/5v/church_vs_taskforce/] is what I thought of when I read it. 2simplicio10y I thought of that too but dismissed it on the grounds that church is hardly "contingently associated" with religion. But I think you're probably right that that's what she meant... and that being the case it is a pretty good point. I wish I belonged to something vaguely churchlike. 0Document10y I disagree, but won't present any arguments to avoid derailing or getting involved in a debate. It may be of course that savages put food on a dead man because they think that a dead man can eat, or weapons with a dead man because they think a dead man can fight. But personally I do not believe that they think anything of the kind. I believe they put food or weapons on the dead for the same reason that we put flowers, because it is an exceedingly natural and obvious thing to do. We do not understand, it is true, the emotion that makes us think it is obvious and natural; but that is because, like all the important emotions of human existence, it is essentially irrational. • G. K. Chesterton Chesterton doesn't understand the emotion because he doesn't know enough about psychology, not because emotions are deep sacred mysteries we must worship. 3RobinZ10y I read "irrational" as a genuflection in the direction of the is-ought problem more than anything else. 3MixedNuts10y My beef isn't with "irrational", he meant "arational" anyway. It's with the idea that this property of emotions makes our ignorance about them okay. 2RobinZ10y Ah - I missed that implication. Agreed. Or better, arational. 5[anonymous]10y That is an incredible term. Going to use it all the time. Let us together seek, if you wish, the laws of society, the manner in which these laws are reached, the process by which we shall succeed in discovering them; but, for God's sake, after having demolished all the a priori dogmatisms, do not let us in our turn dream of indoctrinating the people...let us not - simply because we are at the head of a movement - make ourselves into the new leaders of intolerance, let us not pose as the apostles of a new religion, even if it be the religion of logic, the religion of reason. Pierre Proudhon, to Karl Marx When a precise, narrowly focused technical idea becomes metaphor and sprawls globally, its credibility must be earned afresh locally by means of specific evidence demonstrating the relevance and explanatory power of the idea in its new application. Edward Tufte, "Beautiful Evidence" 4simplicio10y * Evolution * Relativity * Foundational assumptions of standard economics ...what else? 6RichardKennaway10y * Bayes' theorem * Status * Computation * Utility * Optimisation 2benelliott10y Quantum physics ...the 2008 financial crisis showed that some [mathematical finance] models were flawed. But those flaws were based on flawed assumptions about the distribution of price changes... Nassim Taleb, a popular author and critic of the financial industry, points out many such flaws but does not include the use of Monte Carlo simulations among them. He himself is a strong proponent of these simulations.
Monte Carlo simulations are simply the way we do the math with uncertain quantities. Abandoning Monte Carlos because of the failures of the financial markets makes as much sense as giving up on addition and subtraction because of the failure of accounting at Enron or AIG's overexposure in credit default swaps. Douglas Hubbard, How to Measure Anything As far as I know, Robespierre, Lenin, Stalin, Mao, and Pol Pot were indeed unusually incorruptible, and I do hate them for this trait. Why? Because when your goal is mass murder, corruption saves lives. Corruption leads you to take the easy way out, to compromise, to go along to get along. Corruption isn't a poison that makes everything worse. It's a diluting agent like water. Corruption makes good policies less good, and evil policies less evil. I've read thousands of pages about Hitler. I can't recall the slightest hint of "corruption" on his record. Like Robespierre, Lenin, Stalin, Mao, and Pol Pot, Hitler was a sincerely murderous fanatic. The same goes for many of history's leading villains - see Eric Hoffer's classic The True Believer. Sincerity is so overrated. If only these self-righteous monsters had been corrupt hypocrites, millions of their victims could have bargained and bribed their way out of hell. 5MixedNuts10y Hitler was at least a hypocrite - he got his Jewish friends to safety, and accepted same-sex relationships in himself and people he didn't want to kill yet. The kind of corruption Caplan is pointing at is a willingness to compromise with anyone who makes offers, not any kind of ignoring your principles. And Nazis were definitely against that - see the Duke in Jud Süß. 6Stuart_Armstrong10y ? Please provide evidence for this bizarre claim? Spared Jews: • Ernst Hess, his unit commander in WWI, protected until 1942 then sent to a labor (not extermination) camp • Eduard Bloch, his and his mother's doctor, allowed to emigrate out of Austria with more money than normally allowed • I've heard things about fellow artists (a commenter on Caplan's post mentions an art gallery owner) but I don't have a source. • There are claims about his cook, Marlene(?) Kunde, but he seems to have fired her when Himmler complained. Does anyone have Musmanno's book or some other non-Stormfronty source? Whether Hitler batted for both teams is hotly debated. There are suspected relationships (August Kubizek, Emil Maurice) but any evidence could as well have been faked to smear him. Hitler clearly knew that Ernst Röhm and Edmund Heines were gay and didn't care until it was Long Knives time. I'm less sure he knew about Karl Ernst's sexuality. 9TimS10y Wittgenstein paid a huge bribe [http://en.wikipedia.org/wiki/Ludwig_Wittgenstein#Professor_of_philosophy] to allow his family to leave Germany. Somewhere I read that this particular agreement was approved personally by Hitler (or someone very senior in the hierarchy). That doesn't contradict the general point that Nazi Germany was generally willing to kill and steal from its victims (especially during the war) rather than accept bribes for escape. 4DanArmak10y This may have happened some of the time, but everything I read suggests it was the exception and not the rule. The reason Jews did not emigrate out of Germany during the 30s was that Germany had a big foreign balance problem, and maintained tight government control over the allocation of foreign currency. Jews (and Germans) could not convert their Reichsmarks to any other currency, either in Germany or out of it, and so they were less willing to leave.
And no other country was willing to take them in in large numbers (since they would be poor refugees). This continued during the war in the West European countries conquered by Germany. (Ref: Wages of Destruction [http://www.amazon.com/Wages-Destruction-Making-Breaking-Economy/dp/0143113208], Adam Tooze) Later, all Jewish property was expropriated and the Jews sent to camps, so there was no more room for bribes - the Jews had nothing to offer since the Nazis took what they wanted by force. 2TheOtherDave10y The last bit is most famously true of Röhm, though of course there's a dozen different things going on there. [-][anonymous]10y 17 The Perfect Way is only difficult for those who pick and choose; Do not like, do not dislike; all will then be clear. and Heaven and Earth are set apart; if you want the truth to stand clear before you, never be for or against. The struggle between "for" and "against" is the mind's worst disease. -- Jianzhi Sengcan Edit: Since I'm not Will Newsome (yet!) I will clarify. There are several useful points in this but I think the key one is the virtue of keeping one's identity small. Speaking it out loud as a sort of primer, meditation, or prayer before approaching difficult or emotional subjects has for me proven a useful ritual for avoiding motivated cognition. 3Emile10y For the curious, it's the opening of 信心铭 (Xinxin Ming) [http://en.wikipedia.org/wiki/Xinxin_Ming], whose authorship is disputed (probably not the zen patriarch Jianzhi Sengcan). In Chinese, that part goes: 至道无难，唯嫌拣择。但莫憎爱，洞然明白。毫厘有差，天地悬隔。欲得现前，莫存顺逆。违顺相争，是为心病。 (The Wikipedia article [http://en.wikipedia.org/wiki/Xinxin_Ming] lists a few alternate translations of the first verses, with different meanings) 0TimS10y Do I understand you to be saying that you avoid "the struggle between 'for' and 'against'" to an unusual degree compared to the average person? Compared to the average LWer? 6[anonymous]10y No. I'm claiming this helps me avoid it more than I otherwise could. Much for the same reason I try as hard as I can to maintain an apolitical identity. From my personal experience (mere anecdotal evidence) both improve my thinking. 4TimS10y Respectfully, your success at being apolitical is poor. Further, I disagree with the quote to the extent that it implies that taking strong positions is never appropriate. So I'm not sure that your goal of being "apolitical" is a good goal. 4[anonymous]10y Since we've already had exchanges on how I use "being apolitical", could you please clarify your feedback. Are you saying I display motivated cognition when it comes to politically charged subjects or behave tribally in discussions? Or are you just saying I adopt stances that are associated with certain political clusters on the site? Also, like I said, it is something I struggle with. 4TimS10y My impression is that you are unusually NOT-mindkilled compared to the average person with political positions/terminal values as far from the "mainstream" as your positions are. You seem extremely sensitive to the facts and the nuances of opposing positions. 3[anonymous]10y Now I feel embarrassed by such flattery. But if you think this is an accurate description then perhaps my trying to evict "the struggle between 'for' and 'against'" from my brain might have something to do with it? I'm not sure I understand what you mean by this then. Let's taboo apolitical. To rephrase my original statement: "I try as hard as I can to maintain an identity, a self-conception that doesn't include political tribal affiliations."
2TimS10y You certainly seem to have succeeded in maintaining a self-identity that does not include a partisan political affiliation. I don't know whether you consider yourself Moldbuggian (a political identity) or simply think Moldbug's ideas are very interesting. (someday, we should hash out better what interests you in Moldbug). My point when I've challenged your self-label "apolitical" is that you've sometimes used the label to suggest that you don't have preferences about how society should be changed to better reflect how you think it should be organized. At the very least, there's been some ambiguity in your usage. There's nothing wrong with having opinions and advocating for particular social changes. But sometimes you act like you aren't doing that, which I think is empirically false. 3simplicio10y I disagree with the quote too. On the other hand, the idea of keeping one's identity small is not the same as being apolitical. It means you have opinions on political issues, but you keep them out of your self-definition so that (a) changing those opinions is relatively painless, (b) their correlations with other opinions don't influence you as much. (Caricatured example of the latter: "I think public health care is a good idea. That's a liberal position, so I must be a liberal. What do I think about building more nuclear plants, you ask? It appears liberals are against nuclear power, so since I am a liberal I guess I am also against nuclear power.") 1TimS10y I agree with everything you just said - keeping one's identity small does not imply that one cannot be extremely active trying to create some kind of social/political change. 2[anonymous]10y I understand how a position can be correct or incorrect. I don't understand how a position can be strong or weak. 6Vaniver10y In a world of uncertainty, numbers between 0 and 1 find quite a bit of use. 3[anonymous]10y I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly. I understand what it means to believe that an outcome will occur with probability p. I don't know what it means to believe this very strongly. It means that many kinds of observation that you could make will tend to cause you to update that probability less. 7Vaniver10y Concretely: Beta(1,2) and Beta(400,800) have the same mean. 2[anonymous]10y I don't understand K to be arguing in favor of high-entropy priors, or T to be arguing in favor of low-entropy priors. My guess is that TimS would call a position a "strong position" if it was accompanied by some kind of political activism. 2Vaniver10y I think of a strong position as a low-entropy posterior, but rereading I am not confident that's what TimS meant, and I also don't see the connection to politics. 2[anonymous]10y E.T. Jaynes' Probability Theory goes into some detail about that in the chapter about what he calls the A_p distribution. 0[anonymous]10y It means roughly that you give a high probability estimate that the thought process you used to come to that conclusion was sound. 0PhilosophyTutor10y A possible interpretation is that the "strength" of a belief reflects the importance one attaches to acting upon that belief. Two people might both believe with 99% confidence that a new nuclear power plant is a bad idea, yet one of the two might go to a protest about the power plant and the other might not, and you might try to express what is going on there by saying that one holds that belief strongly and the other weakly.
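To put numbers on Vaniver's "Concretely: Beta(1,2) and Beta(400,800)" comment above (a sketch using scipy, which the thread itself does not mention): the two distributions assert the same probability but respond very differently to evidence.

from scipy import stats

weak, strong = stats.beta(1, 2), stats.beta(400, 800)
print(weak.mean(), strong.mean())   # both 0.333...: the same stated probability
print(weak.std(), strong.std())     # ~0.236 vs ~0.014: very different strengths

# After observing 10 successes, Beta(a, b) updates to Beta(a + 10, b)
print(stats.beta(11, 2).mean())     # ~0.85: the weak belief moves a lot
print(stats.beta(410, 800).mean())  # ~0.34: the strong belief barely budges

That resistance to updating is one concrete cash value of believing p = 1/3 "very strongly", and it is roughly what Jaynes' A_p distribution formalizes.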
You could of course also try to express it in terms of the two people's confidence in related propositions like "protests are effective" or "I am the sort of person who goes to protests". In that case strength would be referring to the existence or nonexistence of related beliefs which together are likely to be action-driving. 1TimS10y As I was using the term, "strong" is a measure of how far one's political positions/terminal values are from the "mainstream." I'm very aware that distance from mainstream is not particularly good evidence of the correctness of one's political positions/terminal values. 4Vaniver10y The claim looks narrower: repeating the poem makes Konkvistador more likely to avoid the struggle. 1TimS10y I like his contributions, but Konkvistador is not avoiding the struggle, when compared to the average LWer. 8[anonymous]10y Sick people for some reason use up more medicine and may end up talking a lot about various kinds of treatments. 0J_Taylor10y Case in point: -- Ro-Man 0MixedNuts10y I don't get it. Is this saying "Don't be prejudiced or push for any overarching principle [http://lesswrong.com/lw/13k/missing_the_trees_for_the_forest/]; take each situation as new and unknown, and then you'll easily find the appropriate response to this situation", or is this the same old stoic "Don't struggle trying to find food, choose to be indifferent to starvation" platitude? 0[anonymous]10y Edited in a clarification. Though it will not help you: since I have shown you the path, you cannot find it yourself. Sorry, couldn't resist teasing. Or am I? :P I particularly like the reminder that I'm physics. Makes me feel like a superhero. "Imbued with the properties of matter and energy, able to initiate activity in a purely deterministic universe, it's Physics Man!" -- GoodDamon (this may skirt the edge of the rules, since it's a person reacting to a sequence post, but a person who's not a member of LW.) 4RobinZ10y ...and, more importantly, not on LessWrong.com. Er... actually the genie is offering at most two rounds of feedback. Sorry about the pedantry, it's just that as a professional specialist in genies I have a tendency to notice that sort of thing. 6wedrifid10y Rather than a technical correction you seem just to be substituting a different meaning of 'feedback'. The author would certainly not agree that "You get 0 feedback from 1 wish". Mind you I am wary of the fundamental message of the quote. Feedback? One of the most obviously important purposes of getting feedback is to avoid catastrophic failure. Yet catastrophic failures are exactly the kind of thing that will prevent you from using the next wish. So this is "Just Feedback" that can Kill You Off For Real [http://tvtropes.org/pmwiki/pmwiki.php/Main/KilledOffForReal] despite the miraculous intervention you have access to. I'd say "What the genie is really offering is a wish and two chances to change your mind---assuming you happen to be still alive and capable of constructing corrective wishes". 7Morendil10y One well-known folk tale [http://bit.ly/NHpmQC] is based on precisely this interpretation. Probably more than one. 4Eliezer Yudkowsky10y 0 feedback is exactly what you get from 1 wish. "Feedback" isn't just information, it's something that can control a system's future behavior - so unless you expect to find another genie bottle later, "Finding out how your wish worked" isn't the same as feedback at all.
so unless you expect to find another genie bottle later ...or unless genies granting wishes is actually part of the same system as the larger world, such that what I learn from the results of a wish can be applied (by me or some other observer) to better calibrate expectations from other actions in that system besides wishing-from-genies. 9wedrifid10y I think it was clear that I inferred this as the new definition you were trying to substitute. I was very nearly as impressed as if you 'corrected' him by telling him that it isn't "feedback" if nobody is around to hear it [http://lesswrong.com/lw/np/disputing_definitions/], or perhaps told him that oxygen is a metal [http://lesswrong.com/lw/ecf/open_thread_september_115_2012/7bg4]. 0roland10y Why only 2 rounds of feedback if you have 3 wishes? 5RomanDavis10y The third one's for keeps: you can't wish the consequences away. An elderly man was sitting alone on a dark path, right? He wasn't certain of which direction to go, and he'd forgotten both where he was traveling to and who he was. He'd sat down for a moment to rest his weary legs, and suddenly looked up to see an elderly woman before him. She grinned toothlessly and with a cackle, spoke: 'Now your third wish. What will it be?' 'Third wish?' The man was baffled. 'How can it be a third wish if I haven't had a first and second wish?' 'You've had two wishes already,' the hag said, 'but your second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes.' She cackled at the poor berk. 'So it is that you have one wish left.' 'All right,' said the man, 'I don't believe this, but there's no harm in wishing. I wish to know who I am.' 'Funny,' said the old woman as she granted his wish and disappeared forever. 'That was your first wish.' • Morte's Tale to Yves (Planescape: Torment) I should like to point out that anyone in this situation who wishes what would've been their first wish if they had three wishes is a bloody idiot. So: A genie pops up and says, "You have one wish left." What do you wish for? Because presumably the giftwrapped FAI didn't work so great. 7CCC10y "I wish to know what went wrong with my first wish." This way, I at least end up with improved knowledge of what to avoid in the future. Alternatively, "I wish for a magical map, which shows me, in real time, the location of every trapped genie and other potential source of wishes in the world." Depending on how many there are, I can potentially get a lot more feedback that way. 7siodine10y I bet he'd wish "to erase all uFAI from existence before they're even born. Every uFAI in every universe, from the past and the future, with my own hands." 3Eliezer Yudkowsky10y Nobody believes in the future. Nobody accepts the future. Then - 0MugaSofer10y Perhaps I'm simply being an idiot, but ... huh? 2ArisKatsaris10y It's a reference to an anime; you're not an idiot, just unlikely to get the reference and its appropriateness if you've not seen it yourself. PM me for the anime's name, if you are one of the people who either don't mind getting slightly spoiled, or are pretty sure that you would never get a chance to watch it on your own anyway. 2RichardKennaway10y Could you just rot13 it? I'm curious too, I don't mind the spoiler, and whatever it is, I'd probably be more likely to watch it (even if only 2epsilon rather than epsilon) for knowing the relevance to LW. 
0ArisKatsaris10y I'll just PM you the title too, and anyone else who wants me to likewise. Sorry, it just happens to be one of my favourite series, and all other things being equal I tend to prefer that people go into it as completely unspoilered as possible... Even knowing Eliezer's quote is a reference to it counts as a mild spoiler... explanation about how it is a reference would count as a major spoiler. 0TimS10y I think that's Eliezer's prediction of the results of siodine's wish. Because wishes are NOT SAFE [http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/]. 2MugaSofer10y But what is he predicting, exactly? 1Cyan10y "I wish for this wish to have no further effect beyond this utterance." 8wedrifid10y Overwhelmingly probable dire consequence: You and everyone you love dies (over a period of 70 years) then, eventually, your entire species goes extinct. But hey, at least it's not "your fault". 0Cyan10y But, alas, it's the wish that maximizes my expected utility -- for the malicious genie, anyway. 3wedrifid10y Possibly. I don't off hand see what a malicious genie could do about that statement. However it does at least require it to honor a certain interpretation of your words as well as your philosophy about causality---in particular accept a certain idea of what the 'default' is relative to which 'no effect' can have meaning. There is enough flexibility in how to interpret your wish that I begin to suspect that, conditional on the genie being sufficiently amiable and constrained that it gives you what you want in response to this wish, it is likely possible to construct another wish that has no side effects beyond something that you can exploit as a fungible resource. "No effect" is a whole heap more complicated and ambiguous than it looks! 0JulianMorrison10y "Destroy yourself as near to immediately as possible, given that your method of self-destruction causes no avoidable harm to anything larger than an ant." 3MugaSofer10y They shrink the planet down to below our Schwarzschild radius, holding spacetime in place for just long enough to explain what you just did. Alternately, they declare your wish is logically contradictory - genies are larger than ants. 0[anonymous]10y A sphere whose radius equals the Earth's Schwarzschild radius is larger than an ant. 0JulianMorrison10y At the start of the scenario, you are already dead with probability approaching 1. Trying to knock the gun away can't hurt. 3MugaSofer10y I was criticizing the wording of the "ant" qualifier, not the attempt to destroy the genie. 4[anonymous]10y That's not what's going on though. The traveller is assuming, reasonably, that his third wish is reversing the amnesiac effects of his second. He's not just starting from scratch. 7ArisKatsaris10y I don't think this follows from the text. The hag tells him "but your second wish was for me to return everything to the way it was before you had made your first wish. That's why you remember nothing; because everything is the way it was before you made any wishes". So she told him that he had been an amnesiac before any wishes were granted. Therefore he should have already guessed that his first wish was to know who he was -- and that this proved a bad idea, since his second wish was to reverse the first. 0Xachariah10y It should be noted that night hags are sufficiently smart, powerful, and evil that your best case scenario upon meeting one is a quick and painful death. But not everything is the way it was. Before he made any wishes, he had three.
She missed the chance to trap him in an infinite loop. 6Xachariah10y But then the Hag would be trapped too. She gets delight from tormenting mortals, but tormenting the same one, in the same way eternally, would probably be too close to wireheading for her. 0Alicorn10y Well, if she got bored, she could experiment with different ways to present his wishes to him at the "beginning" and see if she can get him to wish for something else, or word it a bit differently. Since she seems to retain memories of the whole thing. (Which is again, things not being how they were, but.) 4Xachariah10y The pseudo-meta-textual answer is that Morte is lying to Yves while the main character overhears. Morte's making up the story just to mess around with him. Background information is that gur znva punenpgre znqr n qrny jvgu n Unt (rivy cneg-gvzr travr), tnvavat vzzbegnyvgl naq nzarfvn. At the start of the story, the main character somehow broke out of an infinite loop of torture; he's stopped having Anterograde Amnesia, but still cannot remember much from before the cycle broke, and is on a quest to remember who he is. Morte is trying to dissuade the main character from finding out who he is, showing that things can be terrible even without an infinite loop. 0MartinB10y Now that would be evil. 0siodine10y If his first wish disappeared him forever, how did he ever get a second wish? Apparently I suck at reading. 0[anonymous]10y The old woman is the one disappearing forever, and only because the wishes ran out. 2roland10y Right, but the consequences still qualify as feedback, no? 0RomanDavis10y I always imagine the genie just goes back into his lamp to sleep or whatever, so in the hypothetical as it exists in my head, no. But I guess there could be a highly ambitious Genie looking for feedback after your last wish, so maybe. I think in this case, Eliezer is talking about a genie like in Failed Utopia 4-2 who grants his wish, and then keeps working, ignoring feedback, because he just doesn't care, because caring isn't part of the wish. The genie doesn't care about consequences, he just cares about the wishes. The second wish and third wish are the feedback. 4wedrifid10y The feedback is for you, not what you happen to say to the genie. [-][anonymous]10y 16 He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand. “What's the good of Mercator's North Poles and Equators, / Tropics, Zones, and Meridian Lines?" / So the Bellman would cry: and the crew would reply / “They are merely conventional signs! “Other maps are such shapes, with their islands and capes! / But we've got our brave Captain to thank: / (So the crew would protest) “that he's bought us the best— / A perfect and absolute blank!” -Lewis Carroll, The Hunting of the Snark [-][anonymous]10y 16 . 4Armok_GoB10y "Do you want 1111 1111 0000 0000 1111 1111 or 1111 1101 0000 0100 1111 1111? " Proceed only with the simplest terms, for all others are enemies and will confuse you. — Michael Kirkbride / Vivec, "The Thirty Six Lessons of Vivec", Morrowind. 5Ezekiel10y Am I the only one who thinks we should stop using the word "simple" for Occam's Razor / Solomonoff's Whatever? In 99% of use-cases by actual humans, it doesn't mean Solomonoff induction, so it's confusing. 2Kawoomba10y How would you characterise the, in your opinion, most prevalent use-cases?
0Ezekiel10y "Easy to communicate to other humans", "easy to understand", or "having few parts". 6OnTheOtherHandle10y "Having few parts" is what Occam's razor seems to be going for. We can speak specifically of "burdensome details," but I can't think of a one-word replacement for "simple" used in this sense. It is a problem that people tend to use "simple" to mean "intuitive" or "easy to understand," and "complicated" to mean "counterintuitive." Based on the "official" definitions, quantum mechanics and mathematics are extremely simple while human emotions are exceedingly complex. I think human beings have internalized a crude version of Occam's Razor that works for most normal social situations - the absurdity heuristic. We use it to see through elaborate, highly improbable excuses, for example. It just misfires when dealing with deeper physical reality because its focus is on minds and emotions. Hence, two different, nearly opposite meanings of the word "simple." Conspiracy Theory, n. A theory about a conspiracy that you are not supposed to believe. -L. A. Rollins, Lucifer's Lexicon: An Updated Abridgment Major Greene this evening fell into some conversation with me about the Divinity and satisfaction of Jesus Christ. All the argument he advanced was, "that a mere creature or finite being could not make satisfaction to infinite justice for any crimes," and that "these things are very mysterious." Thus mystery is made a convenient cover for absurdity. Jesus used a clever quip to point out the importance of self-monitoring for illusory superiority? [-][anonymous]10y 14 For a hundred years or so, mathematical statisticians have been in love with the fact that the probability distribution of the sum of a very large number of very small random deviations almost always converges to a normal distribution. ... This infatuation tended to focus interest away from the fact that, for real data, the normal distribution is often rather poorly realized, if it is realized at all. We are often taught, rather casually, that, on average, measurements will fall within ±σ of the true value 68% of the time, within ±2σ 95% of the time, and within ±3σ 99.7% of the time. Extending this, one would expect a measurement to be off by ±20σ only one time out of 2 × 10^88. We all know that “glitches” are much more likely than that! -- W.H. Press et al., Numerical Recipes, Sec. 15.1 0ThirdOrderScientist10y I don't think it's fair to blame the mathematical statisticians. Any mathematical statistician worth his / her salt knows that the Central Limit Theorem applies to the sample mean of a collection of independent and identically distributed random variables, not to the random variables themselves. This, and the fact that the t-statistic converges in distribution to the normal distribution as the sample size increases, is the reason we apply any of this normal theory at all. Press's comment applies more to those who use the statistics blindly, without understanding the underlying theory. Which, admittedly, can be blamed on those same mathematical statisticians who are teaching this very deep theory to undergraduates in an intro statistics class with a lot of (necessary at that level) hand-waving. If the statistics user doesn't understand that a random variable is a measurable function from its sample space to the real line, then he/she is unlikely to appreciate the finer points of the Central Limit Theorem. But that's because mathematical statistics is hard (i.e. 
requires non-trivial amounts of work to really grasp), not because the mathematical statisticians have done a disservice to science. "You're very smart. Smarter than I am, I hope. Though of course I have such incredible vanity that I can't really believe that anyone is actually smarter than I am. Which means that I'm all the more in need of good advice, since I can't actually conceive of needing any." • New Peter / Orson Scott Card, Children of the Mind 2Fyrius10y That's a modest thing to say for a vain person. It even sounds a bit like Moore's paradox - I need advice, but I don't believe I do. (Not that I'm surprised. I've met ambivalent people like that and could probably count myself among them. Being aware that you habitually make a mistake is one thing, not making it any more is another. Or, if you have the discipline and motivation, one step and the next.) 1chaosmosis10y I love New Peter. He's so interesting and twisted and bizarre. .... he who works to understand the true causes of miracles and to understand Nature as a scholar, and not just to gape at them like a fool, is universally considered an impious heretic and denounced by those to whom the common people bow down as interpreters of Nature and the gods. For these people know that the dispelling of ignorance would entail the disappearance of that sense of awe which is the one and only support of their argument and the safeguard of their authority. Baruch Spinoza Ethics 1Document10y http://lesswrong.com/lw/i0/are_your_enemies_innately_evil/ [http://lesswrong.com/lw/i0/are_your_enemies_innately_evil/] 0chaosmosis10y That seems really odd to me, coming from Spinoza. I've never read him, but I thought that he was supposed to believe that God and Nature are the same thing. Does he do that, but then also investigate the nature of God through analyzing the way that Nature's laws work? How does he reconcile those two positions, I guess, is what I'm asking. Can someone more familiar with his work than I help me out here? 7asparisi10y Spinoza held that God and Nature are the same thing. His reasoning in a nutshell: an infinite being would need to have everything else as a part of it, so God has to just be the entire universe. It's not clear whether he really thought of God as a conscious agent, although he did think that there were "ideas" in God's mind (read: the Universe) and that these perfectly coincided with the existence of real objects in the world. As an example, he seems to reject the notion of God as picking from among possible worlds and "choosing" the best one, opting instead to say that God just is the actual world and that there is no difference between them. So basically, studying nature for Spinoza is "knowing the mind of God." He may also have been reacting to his excommunication, in fact, that's pretty likely. So the quote may have some sour grapes hidden inside of it. 5kilobug10y That doesn't hold in maths at least. N, Z, Q have the same size, but clearly Q isn't part of N. And there are as many rational numbers between 0 and 1 (or between 0 and 0.0000000000000000000001) as in Q as a whole, and yet we can have an infinity of such different subsets. And it goes even worse with bigger sets. It saddens me how much philosophers/theologians speak about "infinity" as if we had no set theory, no Peano arithmetic, no calculus, nothing. Intuition is usually wrong on "infinity".
[-][anonymous]10y 16 Baruch Spinoza: 1632-1677 Isaac Newton: 1642-1727 Georg Cantor: 1845-1918 Richard Dedekind: 1831-1916 Giuseppe Peano: 1858-1932 3kilobug10y Ok, I stand corrected on the dates, my mistake. But still, didn't we already know that if you take a line, two distinct points A and B on it, there are an infinite number of points between A and B, and yet an infinite number of points outside [AB]? Didn't we know that since the ancient Greeks? 8Tyrrell_McAllister10y First, Spinoza is not using infinite in its modern mathematical sense. For him, "infinite" means "lacking limits" (see Definition 2, Part I of Ethics). Second, Spinoza distinguished between "absolutely infinite" and "infinite in its kind" (see the Explication following Definition 6, Part I). Something is "infinite in its kind" if it is not limited by anything "of the same nature". For example, if we fix a Euclidean line L, then any line segment s within L is not "infinite in its kind" because there are line segments on either side that limit the extent of s. Even a ray r within L is not "infinite in its kind", because there is another ray in L from which r is excluded. Among the subsets of L, only the entire line is "infinite in its kind". However, the entire line is not "absolutely infinite" because there are regions of the plane from which it is excluded (although the limits are not placed by lines). 3prase10y I suspect "infinite" was supposed to mean "having infinite measure" rather than "having infinite number of points / subsets". In the latter sense every being, not only God, would be infinite. 2[anonymous]10y That's a good point. Spinoza himself was a mathematician of no mean talent, so we should assume that he was aware of it as well. So the question is, does his argument avoid the mistake of taking 'infinite' to mean 'all encompassing' without any argument to that effect? There are certainly questions to be raised about his argument, but I don't think this is one of his mistakes. If you don't want to take my word for it, here's the opening argument of the Ethics [http://www.gutenberg.org/files/3800/3800-h/3800-h.htm]. Good luck, it's quite a slog. The idea seems to be that the one substance has to be infinite and singular, because substances can't share attributes (see his definitions), and things which have nothing in common can't interact. Therefore substances can't cause each other to exist, and therefore if any exists, it must exist necessarily. If that's true, then existence is an attribute of a substance, and so no other substance could exist. At any rate, the argument concerns an 'infinity' of attributes, and I think these are reasonably taken as countably infinite. Spinoza also defines infinite as 'not being limited by anything of the same kind', so by that definition he would say that with reference to the 'kind' 'number', the even numbers are finite, though they're infinite with reference to the 'kind' 'even number'. 0chaosmosis10y Thanks. My understanding was basically correct then. I just didn't understand why he'd go from that overall position to talk about why we need to investigate nature, when his whole approach really seemed more like laid back speculation than any form of science, or advocacy of science. The excommunication detail clarifies a lot though, as Spinoza's approach seems much more active and investigative when compared to the approach of the church. Excellent, thanks again. 2asparisi10y It's notable that Spinoza was a part of a Jewish community, rather than "a church."
I've actually read the letter of his excommunication, and WOW. They really went all out. You're considered cursed just for reading what he wrote. 3Tyrrell_McAllister10y Are you reacting to Spinoza's mention of "miracles" and "gods"? Spinoza held that there are no miracles in the sense that everything without exception proceeds according to cause and effect. So he must mean something like "alleged miracles". As for the "gods", they are mentioned only as part of the belief system of the "common people". 0hairyfigment10y What do you- he's a pantheist. Contemporaries called him an atheist because his position works exactly like atheism. We're talking about morality that is based around technology. There is no technological advance that allows us to not criminalize homosexuality now where we couldn't have in the past. 2[anonymous]10y Naming three: 1. Condoms. 2. Widespread circumcision. 3. Antibiotics. [-][anonymous]10y 11 What? 1CCC10y Didn't the Jews have that back in the years BC? It's sort of cultural, but it's been around for a while in some cultures... 8Alicorn10y I didn't specify promiscuous homosexuality. Monogamously inclined gay people are as protected from STDs as anyone else at a comparable tech level - maybe more so among lesbians. 3[anonymous]10y Neither did I, but would rather refrain from explaining in detail why I didn't assume promiscuity. It's really annoying that you jumped to that conclusion, though. Further, I'm confused why the existence of some minority of a minority of the population that doesn't satisfy the ancestor's hypothetical matters. 4shminux10y Homosexuality was common/accepted/expected [http://en.wikipedia.org/wiki/Homosexuality#History] in many societies without leading to any negative consequences, so technology is not an enabler of morality here. 3Salemicus10y Homosexuality has certainly been present in many societies. However, your link does not state, nor even suggest, that it did not lead to any negative consequences. People respond to incentives. Especially loss-related incentives. I do not give homeless people nickels even though I can afford to give a nearly arbitrary number of homeless people nickels. The set of people with karma less than five will be outright unable to reply - the set of people with karma greater than five will just be disincentivized, and that's still something. I believe Peter Singer actually originally advocated the asceticism you mention, but eventually moved towards "try to give 10% of your income", because people were actually willing to do that, and his goal was to actually help people, not uphold a particular abstract ideal. 8fubarobfusco10y An interesting implication, if this generalizes: "Don't advocate the moral beliefs you think people should follow. Advocate the moral beliefs which hearing you advocate them would actually cause other people to behave better." 7Matt_Caulfield10y Just a sidenote: If you are the kind of person who is often worried about letting people down, entertaining the suspicion that most people follow this strategy already is a fast, efficient way to drive yourself completely insane. "You're doing fine." "Oh, I know this game. I'm actually failing massively, but you thought, well, this is the best he can do, so I might as well make him think he succeeded. DON'T LIE TO ME! AAAAH..." 6IlyaShpitser9y Sometimes I wonder how much of LW is "nerds" rediscovering on their own how neuro-typical communication works. I don't mean to say I am not a "nerd" in this sense :). 
“The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.” ― Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values 2Fyrius10y Well. Surely that's only part of the real purpose of the scientific method. "Even in a minute instance, it is best to look first to the main tendencies of Nature. A particular flower may not be dead in early winter, but the flowers are dying; a particular pebble may never be wetted with the tide, but the tide is coming in." G. K. Chesterton, "The Absence of Mr Glass" Note: this was put in the mouth of the straw? atheist. It's still correct. 0Document10y Then Chesterton didn't say it. 0Vaniver10y It is typical to quote the author of fictional works for quotes from that fictional work, though I think it's somewhat more conventional here on LW to quote the character. Wait actual humans are afraid of losing karma? Actual humans are afraid of being considered obnoxious, stupid or antisocial. Karma loss is just an indication that perception may be heading in that direction. 5Luke_A_Somers10y Attempts to avoid karma loss by procedural hacks are a stronger indication... 9Kindly10y This is how lost purposes form. Once you've figured out that karma loss is a sign of something bad, you start avoiding it even when it's not a sign of that bad thing. 0[anonymous]10y #noshitsherlock
2022-06-29 19:31:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47553813457489014, "perplexity": 1606.3937684963716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00145.warc.gz"}
https://paperswithcode.com/paper/unsupervised-reinforcement-learning-in
# Unsupervised Reinforcement Learning in Multiple Environments

16 Dec 2021

Several recent works have been dedicated to unsupervised reinforcement learning in a single environment, in which a policy is first pre-trained with unsupervised interactions, and then fine-tuned towards the optimal policy for several downstream supervised tasks defined over the same environment. Along this line, we address the problem of unsupervised reinforcement learning in a class of multiple environments, in which the policy is pre-trained with interactions from the whole class, and then fine-tuned for several tasks in any environment of the class. Notably, the problem is inherently multi-objective as we can trade off the pre-training objective between environments in many ways. In this work, we foster an exploration strategy that is sensitive to the most adverse cases within the class. Hence, we cast the exploration problem as the maximization of the mean of a critical percentile of the state visitation entropy induced by the exploration strategy over the class of environments. Then, we present a policy gradient algorithm, $\alpha$MEPOL, to optimize the introduced objective through mediated interactions with the class. Finally, we empirically demonstrate the ability of the algorithm in learning to explore challenging classes of continuous environments and we show that reinforcement learning greatly benefits from the pre-trained exploration strategy w.r.t. learning from scratch.
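The percentile objective in the abstract is easy to sketch numerically. The following is my own minimal reading of it as a CVaR-style lower-tail mean over per-environment entropy estimates — illustrative only, not the authors' code:

import numpy as np

def alpha_percentile_objective(entropies, alpha=0.2):
    """Mean of the worst alpha-fraction of per-environment entropies."""
    entropies = np.sort(np.asarray(entropies, dtype=float))  # worst cases first
    k = max(1, int(np.ceil(alpha * entropies.size)))         # size of the critical tail
    return entropies[:k].mean()

# Five environments, each with an estimated state-visitation entropy:
print(alpha_percentile_objective([2.1, 3.5, 1.2, 4.0, 2.8], alpha=0.4))  # 1.65

With alpha = 1 this reduces to the plain mean over the class; small alpha focuses the pre-training on the most adverse environments, which is the trade-off the abstract describes.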
2022-09-25 11:07:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4295790493488312, "perplexity": 1237.26682870797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00413.warc.gz"}
https://www.paddlepaddle.org.cn/documentation/api/en/0.14.0/backward.html
# fluid.backward

## append_backward

Append the backward part to main_program.

A complete neural network training program is made up of forward and backward propagation. However, when we configure a network, we only need to specify its forward part. The backward part is generated automatically according to the forward part by this function. In most cases, users do not need to invoke this function manually. It will be automatically invoked by the optimizer's minimize function.

Examples

# network configuration code
# ...
avg_loss = fluid.layers.mean(loss)
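A slightly fuller sketch of the intended usage follows. The toy regression network is my own illustration and not from this page; only append_backward itself is the call documented here:

import paddle.fluid as fluid

# Hypothetical forward part: linear regression on 13 input features.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_pred = fluid.layers.fc(input=x, size=1)

loss = fluid.layers.square_error_cost(input=y_pred, label=y)
avg_loss = fluid.layers.mean(loss)

# Append the backward part explicitly; returns (parameter, gradient) pairs.
param_grads = fluid.backward.append_backward(loss=avg_loss)

Calling an optimizer's minimize(avg_loss) would perform this step for you, which is why manual invocation is rarely needed.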
2019-08-26 10:03:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188855886459351, "perplexity": 2047.0495662532207}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331485.43/warc/CC-MAIN-20190826085356-20190826111356-00223.warc.gz"}
http://support.hfm.io/1.7/api/hashtables-1.2.3.1/Data-HashTable-IO.html
hashtables-1.2.3.1: Mutable hash tables in the ST monad

Data.HashTable.IO

Description

This module provides wrappers in IO around the functions from Data.HashTable.Class. This module exports three concrete hash table types, one for each hash table implementation in this package:

type BasicHashTable k v = IOHashTable (B.HashTable) k v
type CuckooHashTable k v = IOHashTable (Cu.HashTable) k v
type LinearHashTable k v = IOHashTable (L.HashTable) k v

The IOHashTable type can be thought of as a wrapper around a concrete hashtable type, which sets the ST monad state type to PrimState IO, a.k.a. RealWorld:

type IOHashTable tabletype k v = tabletype (PrimState IO) k v

This module provides stToIO wrappers around the hashtable functions (which are in ST) to make it convenient to use them in IO. It is intended to be imported qualified and used with a user-defined type alias, i.e.:

import qualified Data.HashTable.IO as H

type HashTable k v = H.CuckooHashTable k v

foo :: IO (HashTable Int Int)
foo = do
  ht <- H.new
  H.insert ht 1 1
  return ht

Essentially, anywhere you see IOHashTable h k v in the type signatures below, you can plug in any of BasicHashTable k v, CuckooHashTable k v, or LinearHashTable k v.

Synopsis

# Documentation

type BasicHashTable k v = IOHashTable HashTable k v #
A type alias for a basic open addressing hash table using linear probing. See Data.HashTable.ST.Basic.

type CuckooHashTable k v = IOHashTable HashTable k v #
A type alias for the cuckoo hash table. See Data.HashTable.ST.Cuckoo.

type LinearHashTable k v = IOHashTable HashTable k v #
A type alias for the linear hash table. See Data.HashTable.ST.Linear.

type IOHashTable tabletype k v = tabletype (PrimState IO) k v #
A type alias for our hash tables, which run in ST, to set the state token type to PrimState IO (aka RealWorld) so that we can use them in IO.

new :: HashTable h => IO (IOHashTable h k v) #
See the documentation for this function in "Data.HashTable.Class#v:new".

newSized :: HashTable h => Int -> IO (IOHashTable h k v) #
See the documentation for this function in "Data.HashTable.Class#v:newSized".

insert :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> v -> IO () #
See the documentation for this function in "Data.HashTable.Class#v:insert".

delete :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> IO () #
See the documentation for this function in "Data.HashTable.Class#v:delete".

lookup :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> IO (Maybe v) #
See the documentation for this function in "Data.HashTable.Class#v:lookup".

mutate :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> (Maybe v -> (Maybe v, a)) -> IO a #
See the documentation for this function in "Data.HashTable.Class#v:mutate".

mutateIO :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> (Maybe v -> IO (Maybe v, a)) -> IO a #
See the documentation for this function in "Data.HashTable.Class#v:mutateST".

fromList :: (HashTable h, Eq k, Hashable k) => [(k, v)] -> IO (IOHashTable h k v) #
See the documentation for this function in "Data.HashTable.Class#v:fromList".

fromListWithSizeHint :: (HashTable h, Eq k, Hashable k) => Int -> [(k, v)] -> IO (IOHashTable h k v) #
See the documentation for this function in "Data.HashTable.Class#v:fromListWithSizeHint".

toList :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> IO [(k, v)] #
See the documentation for this function in "Data.HashTable.Class#v:toList".
mapM_ :: HashTable h => ((k, v) -> IO a) -> IOHashTable h k v -> IO () #
See the documentation for this function in "Data.HashTable.Class#v:mapM_".

foldM :: HashTable h => (a -> (k, v) -> IO a) -> a -> IOHashTable h k v -> IO a #
See the documentation for this function in "Data.HashTable.Class#v:foldM".

computeOverhead :: HashTable h => IOHashTable h k v -> IO Double #
See the documentation for this function in "Data.HashTable.Class#v:computeOverhead".

lookupIndex :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> k -> IO (Maybe Word) #
See the documentation for this function in "Data.HashTable.Class#v:lookupIndex".

nextByIndex :: (HashTable h, Eq k, Hashable k) => IOHashTable h k v -> Word -> IO (Maybe (Word, k, v)) #
See the documentation for this function in "Data.HashTable.Class#v:nextByIndex".
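As a small end-to-end sketch (my own illustration, built only from functions documented on this page): count word occurrences with mutate, then sum the counts with foldM.

import qualified Data.HashTable.IO as H

type HashTable k v = H.BasicHashTable k v

main :: IO ()
main = do
  ht <- H.new :: IO (HashTable String Int)
  -- Increment a counter per word; mutate avoids a separate lookup + insert.
  mapM_ (\w -> H.mutate ht w (\mv -> (Just (maybe 1 (+ 1) mv), ())))
        ["a", "b", "a", "c", "a"]
  H.lookup ht "a" >>= print                               -- Just 3
  total <- H.foldM (\acc (_, n) -> return (acc + n)) 0 ht
  print total                                             -- 5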
2020-01-25 22:27:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4204002320766449, "perplexity": 14540.7304478573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251681625.83/warc/CC-MAIN-20200125222506-20200126012506-00203.warc.gz"}
https://typeset.io/authors/pritam-ranjan-2z2zmsz5zx
Author Pritam Ranjan

Bio: Pritam Ranjan is an academic researcher from Indian Institute of Management Indore. The author has contributed to research in topics: Gaussian process & Computer experiment. The author has an h-index of 14, co-authored 64 publications receiving 1038 citations. Previous affiliations of Pritam Ranjan include Simon Fraser University & Acadia University.

Papers

Journal ArticleDOI

TL;DR: A sequential methodology for estimating a contour from a complex computer code using a stochastic process model as a surrogate for the computer simulator is developed and applied to exploration of a contour for a network queuing system.

Abstract: Computer simulation often is used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it still can be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or process optimization. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and associated uncertainty are key components in a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to exploration of a contour for a network queuing system. Issues related to practical implementation of the proposed approach also are addressed.

253 citations

Journal ArticleDOI

TL;DR: In this paper, the authors propose a lower bound on the nugget that minimizes the over-smoothing and an iterative regularization approach to construct a predictor that further improves the inter...

Abstract: For many expensive deterministic computer simulators, the outputs do not have replication error and the desired metamodel (or statistical emulator) is an interpolator of the observed data. Realizations of Gaussian spatial processes (GP) are commonly used to model such simulator outputs. Fitting a GP model to n data points requires the computation of the inverse and determinant of n×n correlation matrices, R, that are sometimes computationally unstable due to near-singularity of R. This happens if any pair of design points are very close together in the input space. The popular approach to overcome near-singularity is to introduce a small nugget (or jitter) parameter in the model that is estimated along with other model parameters. The inclusion of a nugget in the model often causes unnecessary over-smoothing of the data. In this article, we propose a lower bound on the nugget that minimizes the over-smoothing and an iterative regularization approach to construct a predictor that further improves the inter...

99 citations

Posted Content

TL;DR: A lower bound on the nugget is proposed that minimizes the over-smoothing and an iterative regularization approach to construct a predictor that further improves the interpolation accuracy is proposed.

Abstract: For many expensive deterministic computer simulators, the outputs do not have replication error and the desired metamodel (or statistical emulator) is an interpolator of the observed data. Realizations of Gaussian spatial processes (GP) are commonly used to model such simulator outputs.
Fitting a GP model to $n$ data points requires the computation of the inverse and determinant of $n \times n$ correlation matrices, $R$, that are sometimes computationally unstable due to near-singularity of $R$. This happens if any pair of design points are very close together in the input space. The popular approach to overcome near-singularity is to introduce a small nugget (or jitter) parameter in the model that is estimated along with other model parameters. The inclusion of a nugget in the model often causes unnecessary over-smoothing of the data. In this paper, we propose a lower bound on the nugget that minimizes the over-smoothing and an iterative regularization approach to construct a predictor that further improves the interpolation accuracy. We also show that the proposed predictor converges to the GP interpolator.

89 citations

Journal ArticleDOI

TL;DR: This paper implements a slightly modified version of the model proposed by Ranjan et al. (2011) in the R package GPfit, with a novel parameterization of the spatial correlation function and a clustering based multi-start gradient based optimization algorithm that yield robust optimization that is typically faster than the genetic algorithm based approach.

Abstract: Gaussian process (GP) models are commonly used statistical metamodels for emulating expensive computer simulators. Fitting a GP model can be numerically unstable if any pair of design points in the input space are close together. Ranjan, Haynes, and Karsten (2011) proposed a computationally stable approach for fitting GP models to deterministic computer simulators. They used a genetic algorithm based approach that is robust but computationally intensive for maximizing the likelihood. This paper implements a slightly modified version of the model proposed by Ranjan et al. (2011) in the R package GPfit. A novel parameterization of the spatial correlation function and a clustering based multi-start gradient based optimization algorithm yield robust optimization that is typically faster than the genetic algorithm based approach. We present two examples with R codes to illustrate the usage of the main functions in GPfit. Several test functions are used for performance comparison with the popular R package mlegp. We also use GPfit for a real application, i.e., for emulating the tidal kinetic energy model for the Bay of Fundy, Nova Scotia, Canada. GPfit is free software and distributed under the General Public License and available from the Comprehensive R Archive Network.

86 citations

Journal ArticleDOI

TL;DR: In this article, a combination of response surface modeling, expected improvement, and the augmented Lagrangian numerical optimization framework is proposed to solve the problem of constrained black-box optimization.

Abstract: Constrained blackbox optimization is a difficult problem, with most approaches coming from the mathematical programming literature. The statistical literature is sparse, especially in addressing problems with nontrivial constraints. This situation is unfortunate because statistical methods have many attractive properties: global scope, handling noisy objectives, sensitivity analysis, and so forth. To narrow that gap, we propose a combination of response surface modeling, expected improvement, and the augmented Lagrangian numerical optimization framework. This hybrid approach allows the statistical model to think globally and the augmented Lagrangian to act locally.
We focus on problems where the constraints are the primary bottleneck, requiring expensive simulation to evaluate and substantial modeling effort to map out. In that context, our hybridization presents a simple yet effective solution that allows existing objective-oriented statistical approaches, like those based on Gaussian process surrogates ...

66 citations

Cited by

Journal Article

TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.

Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

12,326 citations

Journal ArticleDOI 01 Jan 2016

TL;DR: This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.

Abstract: Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications.

2,341 citations

Journal ArticleDOI

TL;DR: An iterative approach based on Monte Carlo Simulation and Kriging metamodel to assess the reliability of structures in a more efficient way and is shown to be very efficient as the probability of failure obtained with AK-MCS is very accurate and this, for only a small number of calls to the performance function.

Abstract: An important challenge in structural reliability is to keep to a minimum the number of calls to the numerical models. Engineering problems involve more and more complex computer codes and the evaluation of the probability of failure may require very time-consuming computations. Metamodels are used to reduce these computation times.
To assess reliability, the most popular approach remains the numerous variants of response surfaces. Polynomial Chaos [1] and Support Vector Machine [2] are also possibilities and have gained considerations among researchers in the last decades. However, recently, Kriging, originated from geostatistics, have emerged in reliability analysis. Widespread in optimisation, Kriging has just started to appear in uncertainty propagation [3] and reliability [4], [5] studies. It presents interesting characteristics such as exact interpolation and a local index of uncertainty on the prediction which can be used in active learning methods. The aim of this paper is to propose an iterative approach based on Monte Carlo Simulation and Kriging metamodel to assess the reliability of structures in a more efficient way. The method is called AK-MCS for Active learning reliability method combining Kriging and Monte Carlo Simulation. It is shown to be very efficient as the probability of failure obtained with AK-MCS is very accurate and this, for only a small number of calls to the performance function. Several examples from literature are performed to illustrate the methodology and to prove its efficiency particularly for problems dealing with high non-linearity, non-differentiability, non-convex and non-connex domains of failure and high dimensionality.

796 citations

Book 01 Jan 1987

653 citations

Journal ArticleDOI

TL;DR: This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space and is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions.

Abstract: Many engineering applications are characterized by implicit response functions that are expensive to evaluate and sometimes nonlinear in their behavior, making reliability analysis difficult. This paper develops an efficient reliability analysis method that accurately characterizes the limit state throughout the random variable space. The method begins with a Gaussian process model built from a very small number of samples, and then adaptively chooses where to generate subsequent samples to ensure that the model is accurate in the vicinity of the limit state. The resulting Gaussian process model is then sampled using multimodal adaptive importance sampling to calculate the probability of exceeding (or failing to exceed) the response level of interest. By locating multiple points on or near the limit state, more complex and nonlinear limit states can be modeled, leading to more accurate probability integration. By concentrating the samples in the area where accuracy is important (i.e., in the vicinity of the limit state), only a small number of true function evaluations are required to build a quality surrogate model. The resulting method is both accurate for any arbitrarily shaped limit state and computationally efficient even for expensive response functions. This new method is applied to a collection of example problems including one that analyzes the reliability of a microelectromechanical system device that current available methods have difficulty solving either accurately or efficiently.

581 citations
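To make the near-singularity problem from the nugget papers above concrete, here is a small self-contained sketch of the mechanism (a toy Gaussian correlation matrix; this illustrates the problem the nugget addresses, not the Ranjan et al. lower bound itself):

import numpy as np

# Two nearly coincident design points make the correlation matrix R
# numerically near-singular under a Gaussian correlation function.
x = np.array([0.0, 0.1, 0.1 + 1e-7, 0.5, 1.0])
theta = 10.0
R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
print("cond(R) without nugget: %.3e" % np.linalg.cond(R))

# Adding a small nugget delta restores stable inversion, at the cost of
# no longer interpolating exactly (the over-smoothing discussed above).
delta = 1e-8
R_reg = R + delta * np.eye(x.size)
print("cond(R) with nugget:    %.3e" % np.linalg.cond(R_reg))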
2023-04-02 05:19:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5610349178314209, "perplexity": 667.3239789716441}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00618.warc.gz"}
http://mathoverflow.net/feeds/question/86381
# Twisted Gelfand pairs (Reference and examples)

Question (Marc Palm, 2012-01-22):

Let $G$ be a locally compact group and let $K$ be a compact group. Let $(\tau, V_\tau)$ be an irreducible representation of $K$.

We consider the space of $\mathrm{End}_K(\tau)$-valued, compactly supported continuous functions $f$ on $G$ with $$f(k_1 g k_2) = \tau(k_1) f(g) \tau(k_2),$$ which is a $*$-algebra under convolution.

What is a good reference for such algebras, especially in the context of reductive groups over local fields and the connection to representation theory?

Answer (Benjamin Steinberg, 2012-01-22):

A classical reference for twisted Gelfand pairs is J.R. Stembridge, "On Schur's Q-functions and the primitive idempotents of a commutative Hecke algebra", J. Algebr. Comb. 1 (1992) 71–95, but I believe this paper considers only finite groups.

Answer (Paul Broussous, 2012-01-22):

These Hecke algebras are intensively studied in the field of "type theory" for reductive $p$-adic groups.

You have a nice summary of basic facts with proofs in chapter 4 of Bushnell and Kutzko's book "The admissible dual of ${\rm GL}(N)$ via compact open subgroups" (the chapter is entitled "Interlude with Hecke algebras").

You may also read the monograph "The Langlands conjecture for ${\rm GL}(2)$", written by Bushnell and Henniart. You'll find there a nice introduction to these algebras.

There are many other references. But it depends on what exactly you're interested in.
2013-05-22 08:49:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8390100598335266, "perplexity": 1301.2259121880822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701543416/warc/CC-MAIN-20130516105223-00000-ip-10-60-113-184.ec2.internal.warc.gz"}
https://gmatclub.com/forum/what-is-the-value-of-k-w-k-x-k-y-258065.html
# What is the value of (k – w) + (k – x) + (k – y)?

Bunuel (Math Expert) posted:

What is the value of (k – w) + (k – x) + (k – y)?

(1) w, x, and y are three consecutive odd numbers
(2) The mean of w, x, and y is k

PS Forum Moderator replied:

$$k-w+k-x+k-y=3k-(w+x+y)$$

Statement 1: nothing is mentioned about $$k$$. Insufficient.

Statement 2: implies $$w+x+y=3k$$, so the stem expression becomes $$3k-3k=0$$. Sufficient.

Option B

Director replied:

Rewrite the question as 3k – (w + x + y).

(1) w, x, and y are three consecutive odd numbers. We don't know the value of k, hence not sufficient.

(2) The mean of w, x, and y is k, so (w + x + y)/3 = k, which means w + x + y = 3k, and the expression equals 0. Sufficient.

Hence B.

CEO (GMAT Prep Now) replied:

Target question: What is the value of (k – w) + (k – x) + (k – y)?

This is a good candidate for rephrasing the target question.

Take: (k – w) + (k – x) + (k – y)
Simplify to get: 3k – w – x – y
Rewrite as: 3k – (w + x + y)

REPHRASED target question: What is the value of 3k – (w + x + y)?

Statement 1: w, x, and y are three consecutive odd numbers. Since we have no information about k, there's no way to determine the value of 3k – (w + x + y). Since we cannot answer the REPHRASED target question with certainty, statement 1 is NOT SUFFICIENT.

Statement 2: The mean of w, x, and y is k. This tells us that (w + x + y)/3 = k. Multiply both sides by 3 to get: w + x + y = 3k. If w + x + y = 3k, then 3k – (w + x + y) = 3k – 3k = 0. Perfect, the answer to the REPHRASED target question is 0. Since we can answer the REPHRASED target question with certainty, statement 2 is SUFFICIENT.

Answer: B
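A quick numeric sanity check (my own numbers, not from the thread): take w = 1, x = 2, y = 3, so that k = (1 + 2 + 3)/3 = 2 per statement (2); then (k – w) + (k – x) + (k – y) = 1 + 0 + (–1) = 0, matching the answer above.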
2018-12-10 08:42:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.63474041223526, "perplexity": 3138.6044185150313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823320.11/warc/CC-MAIN-20181210080704-20181210102204-00033.warc.gz"}
http://www.muhlbergerweb.com/article/polynomial_fitting_pitfalls
A common task in research is to represent trends in a dataset by a smooth analytic function. If a physical model for the trend exists, then the goal is to fit for the parameters of the model. In other situations, however, no good model is available, so one tries instead to represent the trend with a general functional form or a basis function expansion. Polynomials are a popular choice, partially thanks to their general familiarity and their role in Taylor series. What many practitioners might not realize, however, is that there is more than one way to represent a polynomial fit. Intuition does us a disservice in this context. When we think of a polynomial, we undoubtedly consider the form $a_0 + a_1 x + a_2 x^2 + \dots$ In the context of fitting data, then, it seems natural to fit for the coefficients $a_n$ (which can be done efficiently using linear least squares). This can have serious consequences, though, especially if the order of the polynomial is high (which is likely when you care about accuracy the most!). When implemented with finite-precision numerics, this power series form is very ill-conditioned (in fact, in the context of spectral methods, the power series basis gives rise to the Hilbert matrix - see “Method of moments” in Boyd §3.1). What this means is that small errors in your fitting coefficients can produce large errors when the polynomial is evaluated. So what are the alternatives? Fortunately, one doesn’t have to give up the familiarity of polynomials nor the convenience of linear least squares. One must simply choose a different polynomial basis than the “power series” basis of $x^n$. In particular, orthogonal polynomials like Chebyshev and Legendre polynomials produce much more robust results. So instead of solving for the coefficients of $x^n$, we solve for the coefficients of $T_n(x)$, for example. And lest you think this is of merely theoretical importance, here is an example from the astronomical literature. In 1996, Flower published a set of polynomial fits for determining intrinsic properties of stars from observations made on Earth. In particular, they presented a 7th-order fit for determining the effective temperature (Teff) of a star from its color (B-V). Unfortunately, you can’t use these fits as they appear in the original publication; not only are they riddled with typos, but the precision of the coefficients as printed is insufficient to faithfully reproduce the results for the reason discussed above. Fortunately, Torres published corrections to these fits in 2007, enabling us to explore the robustness of an alternative polynomial representation. For this comparison, I first computed the Chebyshev coefficients of this fitting polynomial (this was done symbolically, resulting in almost perfect agreement when the two forms are evaluated numerically; see figure above). When evaluating these polynomials, I used Horner’s method for the power series form and Clenshaw’s algorithm for the Chebyshev series. With these pieces in place, we can compare how the two representations fare in different circumstances. First, let’s examine how much accuracy is lost when the functions are evaluated in single precision: While the Chebyshev form remains accurate over the whole domain, the power series form exhibits growing errors as B-V grows farther from zero. Next, let’s see what we get when the coefficients are truncated to 6 decimal places (the same precision as in the original paper, but without the typos).
Again, while the magnitude of the error may be small, we see that it is poorly controlled as B-V deviates from zero, while the Chebyshev results are robust across the domain under this perturbation to the coefficients. Finally, let’s consider the situation where we would like to reduce the order of the fit. Intuitively, we expect high-order terms to be less important than low-order ones, since we expect the coefficients of a convergent series to approach zero as the order approaches infinity. However, dropping terms from a power series fit is disastrous away from $x=0$. The truncated Chebyshev series, on the other hand, remains a good approximation: So the next time you’re considering a polynomial fit to represent a trend in your data, choose an orthogonal basis when performing linear least squares; the trustworthiness of your results depends on it.
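In practice the switch is nearly a one-line change with numpy's polynomial module. The sketch below uses my own toy function rather than the Flower fit; note that Chebyshev.fit maps the data onto [-1, 1] internally, which is exactly the conditioning trick described above:

import numpy as np
from numpy.polynomial import Chebyshev

# A smooth trend sampled with a little noise over a B-V-like interval.
rng = np.random.default_rng(0)
x = np.linspace(-0.3, 1.7, 200)
y = np.exp(-x) * np.sin(4.0 * x) + 0.01 * rng.standard_normal(x.size)

deg = 7
p = np.polyfit(x, y, deg)        # power-series basis (ill-conditioned)
c = Chebyshev.fit(x, y, deg)     # Chebyshev basis, auto-mapped to [-1, 1]

# Both solve the same least-squares problem; evaluate and compare.
err = np.max(np.abs(np.polyval(p, x) - c(x)))
print("max disagreement between the two representations: %.2e" % err)

At degree 7 the two representations still agree closely; the gap opens up as the degree grows or as the coefficients are perturbed, which is the failure mode demonstrated above.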
2018-07-19 15:23:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848106324672699, "perplexity": 342.9631734351921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591140.45/warc/CC-MAIN-20180719144851-20180719164851-00398.warc.gz"}
https://kerodon.net/tag/02TN
# Kerodon

Construction 4.5.9.1 (Direct Images). Let $V: \operatorname{\mathcal{E}}\rightarrow \operatorname{\mathcal{D}}$ and $U: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{C}}$ be morphisms of simplicial sets. For every integer $n \geq 0$, we let $\operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}})_{n}$ denote the collection of pairs $(\sigma , f)$, where $\sigma$ is an $n$-simplex of $\operatorname{\mathcal{C}}$ and $f: \Delta ^{n} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{E}}$ is a morphism for which the composition $\Delta ^{n} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}\xrightarrow {f} \operatorname{\mathcal{E}}\xrightarrow {V} \operatorname{\mathcal{D}}$ coincides with projection onto the second factor. Note that every nondecreasing function $\alpha : [m] \rightarrow [n]$ induces a map $\operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}})_{n} \rightarrow \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}})_{m} \quad \quad (\sigma ,f) \mapsto (\alpha ^{\ast }(\sigma ), f' ),$ where $f'$ denotes the composite map $\Delta ^{m} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}\xrightarrow { \alpha \times \operatorname{id}} \Delta ^{n} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}\xrightarrow {f} \operatorname{\mathcal{E}}.$ This construction is compatible with composition, and therefore endows $\{ \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}( \operatorname{\mathcal{E}})_{n} \} _{n \geq 0}$ with the structure of a simplicial set $\operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}}) = \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}})_{\bullet }$ which we will refer to as the direct image of $\operatorname{\mathcal{E}}$ along $U$. Note that the construction $(\sigma ,f) \mapsto \sigma$ determines a morphism of simplicial sets $\pi : \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}( \operatorname{\mathcal{E}}) \rightarrow \operatorname{\mathcal{C}}$. Moreover, there is a tautological evaluation map $\operatorname{ev}: \operatorname{\mathcal{D}}\times _{\operatorname{\mathcal{C}}} \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}}) \rightarrow \operatorname{\mathcal{E}}$, which carries an $n$-simplex $( \widetilde{\sigma }, (\sigma ,f) )$ of the fiber product $\operatorname{\mathcal{D}}\times _{\operatorname{\mathcal{C}}} \operatorname{Res}_{\operatorname{\mathcal{D}}/\operatorname{\mathcal{C}}}(\operatorname{\mathcal{E}})$ to the $n$-simplex of $\operatorname{\mathcal{E}}$ given by the composite map $\Delta ^{n} \xrightarrow { \operatorname{id}\times \widetilde{\sigma } } \Delta ^{n} \times _{\operatorname{\mathcal{C}}} \operatorname{\mathcal{D}}\xrightarrow {f} \operatorname{\mathcal{E}}$.
2022-05-21 21:26:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.992963969707489, "perplexity": 68.02785989064769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00437.warc.gz"}
https://www.physicsforums.com/threads/directional-derivatives.592769/
# Homework Help: Directional derivatives

1. Apr 2, 2012

### lonewolf219

1. The problem statement, all variables and given/known data

f(x,y)=4x^2-y^2

2. Relevant equations

Σ partial derivative components(?)

3. The attempt at a solution

The solution when θ=π at the point (1,-1) is -8. Does this mean that one of the coordinates of this function is (1,-1,-8)? What exactly is the directional derivative, and what does the solution represent?

2. Apr 2, 2012

### LCKurtz

You haven't stated the problem for which you are giving the solution. I'm guessing it was "Find the directional derivative of f(x,y) at the point (1,-1) in the direction of $\theta=\pi$."

To help you visualize what you are calculating, think of a flat metal plate and suppose $f(x,y)=4x^2-y^2$ gives the temperature at each point in the plate. If you were at (1,-1) the temperature there would be f(1,-1) = 3. Depending on what direction you move from that point, it may get warmer or colder. The directional derivative in some direction at that point is the rate of change of temperature in that direction. So according to your calculations above, if you move in the $\pi$ direction from there it is cooling off at 8 degrees / unit length.
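For readers who want to check the arithmetic, here is a small numerical sketch (not from the thread; the helper name grad_f is mine). It evaluates the gradient of f(x,y)=4x^2-y^2 at (1,-1) and dots it with the unit vector in the θ=π direction:

import numpy as np

def grad_f(x, y):
    # f(x, y) = 4x^2 - y^2, so the gradient is (8x, -2y)
    return np.array([8.0 * x, -2.0 * y])

theta = np.pi
u = np.array([np.cos(theta), np.sin(theta)])  # unit vector for θ = π

D = grad_f(1.0, -1.0) @ u  # directional derivative at (1, -1)
print(D)                   # approximately -8.0, matching the thread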
2018-09-25 20:53:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5935633778572083, "perplexity": 363.1975250197037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.84/warc/CC-MAIN-20180925202648-20180925223048-00222.warc.gz"}
https://tex.stackexchange.com/questions/118925/vertical-spacing-on-enumitem
# vertical spacing on enumitem

In the example below I want the enumerate list to essentially be "inline" with the word Example from the example environment. It works perfectly with text or enumerate* (inline list), but I cannot seem to figure out how to control the remainder of the spacing created by the enumerate environment.

\documentclass{article}
\usepackage{amsthm}
\usepackage{enumitem}
\usepackage{hyperref}
\theoremstyle{definition}
\newtheorem{example}{Example}
\begin{document}
\begin{example}
\fbox{\parbox[t]{9cm}{
\begin{enumerate}[label=(\arabic*),nosep]
\item The duplicate ratio of $2a:3b$ is $4a^2:9b^2$.
\item The subduplicate ratio of $49:25$ is $7:5$.
\item The triplicate ratio of $2x:1$ is $8x^3:1$.
\end{enumerate}
}}
\end{example}
\begin{example}
\fbox{Can you see the difference?}
\end{example}
\end{document}

Have you tried using minipage instead of \parbox?

\begin{example}
\fbox{%
\begin{minipage}[t]{9cm}%
\begin{enumerate}[label=(\arabic*),nosep]
\item The duplicate ratio of $2a:3b$ is $4a^2:9b^2$.
\item The subduplicate ratio of $49:25$ is $7:5$.
\item The triplicate ratio of $2x:1$ is $8x^3:1$.
\end{enumerate}
\end{minipage}}
\end{example}

Look at this answer to \parbox vs. minipage: Differences in applicability to see an explanation of the differences between \parbox and minipage. In particular, note the third difference listed in the answer.

# Issue with hyperref

A MWE exhibiting the hyperref issue:

\documentclass{article}
\usepackage{hyperref}
\begin{document}
\noindent Hello
\fbox{%
\begin{minipage}[t]{9cm}
\begin{enumerate}
\item The duplicate ratio of $2a:3b$ is $4a^2:9b^2$.
\item The subduplicate ratio of $49:25$ is $7:5$.
\item The triplicate ratio of $2x:1$ is $8x^3:1$.
\end{enumerate}
\end{minipage}}
\end{document}

Something seems to be happening with how hyperref is manipulating \item within an enumerate environment. If you use itemize or create your own list environment, the spacing issue vanishes. It also vanishes if you use \item[] as the first item in an enumerate environment (though I'm sure this is not what you want to do).

• This is great! If I add \usepackage{hyperref} to the preamble it breaks it, though. What is going on here? – Ra is dead Jun 13 '13 at 2:27
• @Raisdead There seems to be a deeper issue going on here. I've posted a new question about how hyperref is interacting with enumerated lists: see here – A.Ellett Jun 13 '13 at 5:04
2019-09-15 20:49:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947804868221283, "perplexity": 1323.5980248013173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00162.warc.gz"}
https://botorch.org/api/utils.html
# botorch.utils¶

## Constraints¶

Helpers for handling outcome constraints.

botorch.utils.constraints.get_outcome_constraint_transforms(outcome_constraints)[source]
Create outcome constraint callables from outcome constraint tensors.
Parameters: outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is k x m and b is k x 1 such that A f(x) <= b.
Returns: A list of callables, each mapping a Tensor of size b x q x m to a tensor of size b x q, where m is the number of outputs of the model. Negative values imply feasibility. The callables support broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m).
Return type: Optional[List[Callable[[Tensor], Tensor]]]

Example

>>> # constrain f(x)[0] <= 0
>>> A = torch.tensor([[1., 0.]])
>>> b = torch.tensor([[0.]])
>>> outcome_constraints = get_outcome_constraint_transforms((A, b))

## Containers¶

Representations for different kinds of data.

class botorch.utils.containers.DenseContainer(values, event_shape)[source]
Bases: BotorchContainer
Basic representation of data stored as a dense Tensor.
Parameters:
• values (Tensor) –
• event_shape (Size) –
values: Tensor
event_shape: Size
property shape: Size
property device: device
property dtype: dtype

class botorch.utils.containers.SliceContainer(values, indices, event_shape)[source]
Bases: BotorchContainer
Represent data points formed by concatenating (n-1)-dimensional slices taken from the leading dimension of an n-dimensional source tensor.
Parameters:
• values (Tensor) –
• indices (LongTensor) –
• event_shape (Size) –
values: Tensor
indices: LongTensor
event_shape: Size
property shape: Size
property device: device
property dtype: dtype

## Context Managers¶

Utilities for optimization.

class botorch.utils.context_managers.TensorCheckpoint(values, device, dtype)[source]
Bases: tuple
Create new instance of TensorCheckpoint(values, device, dtype)
Parameters:
• values (Tensor) –
• device (Optional[device]) –
• dtype (Optional[dtype]) –
values: Tensor
Alias for field number 0
device: Optional[device]
Alias for field number 1
dtype: Optional[dtype]
Alias for field number 2

botorch.utils.context_managers.del_attribute_ctx(instance, *attrs, enforce_hasattr=False)[source]
Contextmanager for temporarily deleting attributes.
Parameters:
• instance (object) –
• attrs (str) –
• enforce_hasattr (bool) –
Return type: Generator[None, None, None]

botorch.utils.context_managers.requires_grad_ctx(module, assignments)[source]
Contextmanager for temporarily setting the requires_grad field of a module's parameters.
Parameters:
• module (Module) –
• assignments (Dict[str, bool]) –
Return type: Generator[None, None, None]

botorch.utils.context_managers.parameter_rollback_ctx(parameters, checkpoint=None, **tkwargs)[source]
Contextmanager that exits by rolling back a dictionary of parameters to a checkpointed state.
Parameters:
• parameters (Dict[str, Tensor]) – A dictionary of parameters to roll back on exit.
• checkpoint (Optional[Dict[str, TensorCheckpoint]]) – Optional cache of values and tensor metadata specifying the rollback state (or some subset thereof).
• **tkwargs (Any) – Keyword arguments passed to torch.Tensor.to when copying data from each tensor to the internally created checkpoint. Only adhered to when the checkpoint argument is None.
Yields: A dictionary of TensorCheckpoints for the given parameters. Any in-place changes to the checkpoint will be observed at rollback time. If the checkpoint is cleared, no rollback will occur.
Return type: Generator[Dict[str, TensorCheckpoint], None, None]
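The rollback behavior is easiest to see on a toy tensor. A minimal sketch (mine, not from the API reference), assuming the default behavior of checkpointing on entry and copying the checkpointed values back on exit:

import torch
from botorch.utils.context_managers import parameter_rollback_ctx

param = torch.tensor([1.0, 2.0])
with parameter_rollback_ctx({"x": param}) as checkpoint:
    param.mul_(100.0)  # temporary in-place modification
    # checkpoint["x"].values holds the values captured on entry

print(param)  # rolled back to tensor([1., 2.]) on exit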
botorch.utils.context_managers.module_rollback_ctx(module, name_filter=None, checkpoint=None, **tkwargs)[source]
Contextmanager that exits by rolling back a module's state_dict.
Parameters:
• module (Module) – Module instance.
• name_filter (Optional[Callable[[str], bool]]) – Optional Boolean function used to filter items by name.
• checkpoint (Optional[Dict[str, TensorCheckpoint]]) – Optional cache of values and tensor metadata specifying the rollback state for the module (or some subset thereof).
• **tkwargs (Any) – Keyword arguments passed to torch.Tensor.to when copying data from each tensor in module.state_dict() to the internally created checkpoint. Only adhered to when the checkpoint argument is None.
Yields: A dictionary of TensorCheckpoints for the module's state_dict. Any in-place changes to the checkpoint will be observed at rollback time. If the checkpoint is cleared, no rollback will occur.
Return type: Generator[Dict[str, TensorCheckpoint], None, None]

botorch.utils.context_managers.zero_grad_ctx(parameters, zero_on_enter=True, zero_on_exit=False)[source]
Contextmanager that zeroes the gradients of the given parameters on entry and/or on exit.
Parameters:
• parameters (Union[Dict[str, Tensor], Iterable[Tensor]]) –
• zero_on_enter (bool) –
• zero_on_exit (bool) –
Return type: Generator[None, None, None]

## Datasets¶

Representations for different kinds of datasets.

class botorch.utils.datasets.BotorchDataset[source]
Bases: object

class botorch.utils.datasets.SupervisedDatasetMeta[source]
Bases: type

class botorch.utils.datasets.SupervisedDataset(*args, **kwargs)[source]
Bases: BotorchDataset
Base class for datasets consisting of labelled pairs (x, y). This class object's __call__ method converts Tensors src to DenseContainers under the assumption that event_shape=src.shape[-1:].

Example:

X = torch.rand(16, 2)
Y = torch.rand(16, 1)
A = SupervisedDataset(X, Y)
B = SupervisedDataset(
    DenseContainer(X, event_shape=X.shape[-1:]),
    DenseContainer(Y, event_shape=Y.shape[-1:]),
)
assert A == B

Parameters:
• args (Any) –
• kwargs (Any) –
X: BotorchContainer
Y: BotorchContainer
classmethod dict_from_iter(X, Y, *, keys=None)[source]
Returns a dictionary of SupervisedDataset from iterables.
Parameters:
• X (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
• Y (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
• keys (Optional[Iterable[Hashable]]) –
Return type: Dict[Hashable, SupervisedDataset]

class botorch.utils.datasets.FixedNoiseDataset(*args, **kwargs)[source]
Bases: SupervisedDataset
A SupervisedDataset with an additional field Yvar that stipulates observation variances so that Y[i] ~ N(f(X[i]), Yvar[i]).
Parameters:
• args (Any) –
• kwargs (Any) –
X: BotorchContainer
Y: BotorchContainer
Yvar: BotorchContainer
classmethod dict_from_iter(X, Y, Yvar=None, *, keys=None)[source]
Returns a dictionary of FixedNoiseDataset from iterables.
Parameters:
• X (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
• Y (Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]) –
• Yvar (Optional[Union[BotorchContainer, Tensor, Iterable[Union[BotorchContainer, Tensor]]]]) –
• keys (Optional[Iterable[Hashable]]) –
Return type: Dict[Hashable, SupervisedDataset]
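By analogy with the SupervisedDataset example above, a FixedNoiseDataset can be built by also passing the observation variances. A minimal sketch (mine; the positional X, Y, Yvar call mirrors the fields listed above and is an assumption):

import torch
from botorch.utils.datasets import FixedNoiseDataset

X = torch.rand(16, 2)
Y = torch.rand(16, 1)
Yvar = torch.full_like(Y, 0.01)  # known observation variance for each Y[i]
dataset = FixedNoiseDataset(X, Y, Yvar)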
class botorch.utils.datasets.RankingDataset(*args, **kwargs)[source]
Bases: SupervisedDataset
A SupervisedDataset whose labelled pairs (x, y) consist of m-ary combinations x ∈ Z^{m} of elements from a ground set Z = (z_1, …) and ranking vectors y ∈ {0, …, m - 1}^{m} with properties:
1. Ranks start at zero, i.e. min(y) = 0.
2. Sorted ranks are contiguous unless one or more ties are present.
3. k ranks are skipped after a k-way tie.

Example:

X = SliceContainer(
    values=torch.rand(16, 2),
    indices=torch.stack([torch.randperm(16)[:3] for _ in range(8)]),
    event_shape=torch.Size([3 * 2]),
)
Y = DenseContainer(
    torch.stack([torch.randperm(3) for _ in range(8)]),
    event_shape=torch.Size([3])
)
dataset = RankingDataset(X, Y)

Parameters:
• args (Any) –
• kwargs (Any) –
X: SliceContainer
Y: BotorchContainer

## Dispatcher¶

botorch.utils.dispatcher.type_bypassing_encoder(arg)[source]
Parameters: arg (Any) –
Return type: Type

class botorch.utils.dispatcher.Dispatcher(name, doc=None, encoder=<class 'type'>)[source]
Bases: Dispatcher
Clearing house for multiple dispatch functionality. This class extends <multipledispatch.Dispatcher> by: (i) generalizing the argument encoding convention during method lookup, (ii) implementing __getitem__ as a dedicated method lookup function.
Parameters:
• name (str) – A string identifier for the Dispatcher instance.
• doc (Optional[str]) – A docstring for the multiply dispatched method(s).
• encoder (Callable[Any, Type]) – A callable that individually transforms the arguments passed at runtime in order to construct the key used for method lookup as tuple(map(encoder, args)). Defaults to type.
dispatch(*types)[source]
Method lookup strategy. Checks for an exact match before traversing the set of registered methods according to the current ordering.
Parameters: types (Type) – A tuple of types that gets compared with the signatures of registered methods to determine compatibility.
Returns: The first method encountered with a matching signature.
Return type: Callable
encode_args(args)[source]
Converts arguments into a tuple of types used during method lookup.
Parameters: args (Any) –
Return type: Tuple[Type]
help(*args, **kwargs)[source]
Prints the retrieved method's docstring.
Parameters:
• args (Any) –
• kwargs (Any) –
Return type: None
source(*args, **kwargs)[source]
Prints the retrieved method's source types.
Return type: None
property encoder: Callable[Any, Type]
name
funcs
doc

## Low-Rank Cholesky Update Utils¶

botorch.utils.low_rank.extract_batch_covar(mt_mvn)[source]
Extract a batched independent covariance matrix from an MTMVN.
Parameters: mt_mvn (MultitaskMultivariateNormal) – A multi-task multivariate normal with a block diagonal covariance matrix.
Returns: A lazy covariance matrix consisting of a batch of the blocks of the covariance matrix.
Return type: LinearOperator

botorch.utils.low_rank.sample_cached_cholesky(posterior, baseline_L, q, base_samples, sample_shape, max_tries=6)[source]
Get posterior samples at the q new points from the joint multi-output posterior.
Parameters:
• posterior (GPyTorchPosterior) – The joint posterior is over (X_baseline, X).
• baseline_L (Tensor) – The baseline lower triangular cholesky factor.
• q (int) – The number of new points in X.
• base_samples (Tensor) – The base samples.
• sample_shape (Size) – The sample shape.
• max_tries (int) – The number of tries for computing the Cholesky decomposition with increasing jitter.
Returns: A sample_shape x batch_shape x q x m-dim tensor of posterior samples at the new points.
Return type: Tensor

## Objective¶

Helpers for handling objectives.

botorch.utils.objective.get_objective_weights_transform(weights)[source]
Create a linear objective callable from a set of weights. Create a callable mapping a Tensor of size b x q x m and an (optional) Tensor of size b x q x d to a Tensor of size b x q, where m is the number of outputs of the model using scalarization via the objective weights.
This callable supports broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m). For m = 1, the objective weight is used to determine the optimization direction. Parameters: weights (Optional[Tensor]) – a 1-dimensional Tensor containing a weight for each task. If not provided, the identity mapping is used. Returns: Transform function using the objective weights. Return type: Callable[[Tensor, Optional[Tensor]], Tensor] Example >>> weights = torch.tensor([0.75, 0.25]) >>> transform = get_objective_weights_transform(weights) botorch.utils.objective.apply_constraints_nonnegative_soft(obj, constraints, samples, eta)[source] Applies constraints to a non-negative objective. This function uses a sigmoid approximation to an indicator function for each constraint. Parameters: • obj (Tensor) – A n_samples x b x q (x m’)-dim Tensor of objective values. • constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi- output models (m > 1). • samples (Tensor) – A n_samples x b x q x m Tensor of samples drawn from the posterior. • eta (Union[Tensor, float]) – The temperature parameter for the sigmoid function. Can be either a float or a 1-dim tensor. In case of a float the same eta is used for every constraint in constraints. In case of a tensor the length of the tensor must match the number of provided constraints. The i-th constraint is then estimated with the i-th eta value. Returns: A n_samples x b x q (x m’)-dim tensor of feasibility-weighted objectives. Return type: Tensor botorch.utils.objective.soft_eval_constraint(lhs, eta=0.001)[source] Element-wise evaluation of a constraint in a ‘soft’ fashion value(x) = 1 / (1 + exp(x / eta)) Parameters: • lhs (Tensor) – The left hand side of the constraint lhs <= 0. • eta (float) – The temperature parameter of the softmax function. As eta decreases, this approximates the Heaviside step function. Returns: Element-wise ‘soft’ feasibility indicator of the same shape as lhs. For each element x, value(x) -> 0 as x becomes positive, and value(x) -> 1 as x becomes negative. Return type: Tensor botorch.utils.objective.apply_constraints(obj, constraints, samples, infeasible_cost, eta=0.001)[source] Apply constraints using an infeasible_cost M for negative objectives. This allows feasibility-weighting an objective for the case where the objective can be negative by using the following strategy: (1) Add M to make obj non-negative; (2) Apply constraints using the sigmoid approximation; (3) Shift by -M. Parameters: • obj (Tensor) – A n_samples x b x q (x m’)-dim Tensor of objective values. • constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi- output models (m > 1). • samples (Tensor) – A n_samples x b x q x m Tensor of samples drawn from the posterior. • infeasible_cost (float) – The infeasible value. • eta (Union[Tensor, float]) – The temperature parameter of the sigmoid function. Can be either a float or a 1-dim tensor. In case of a float the same eta is used for every constraint in constraints. In case of a tensor the length of the tensor must match the number of provided constraints. The i-th constraint is then estimated with the i-th eta value. 
Returns: A n_samples x b x q (x m')-dim tensor of feasibility-weighted objectives.
Return type: Tensor

## Rounding¶

Discretization (rounding) functions for acquisition optimization.

References

[Daulton2022bopr] S. Daulton, X. Wan, D. Eriksson, M. Balandat, M. A. Osborne, E. Bakshy. Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization. Advances in Neural Information Processing Systems 35, 2022.

botorch.utils.rounding.approximate_round(X, tau=0.001)[source]
Differentiable approximate rounding function. This method is a piecewise approximation of a rounding function where each piece is a hyperbolic tangent function.
Parameters:
• X (Tensor) – The tensor to round to the nearest integer (element-wise).
• tau (float) – A temperature hyperparameter.
Returns: The approximately rounded input tensor.
Return type: Tensor

class botorch.utils.rounding.IdentitySTEFunction(*args, **kwargs)[source]
Bases: Function
Base class for functions using straight-through gradient estimators. This class approximates the gradient with the identity function.
static backward(ctx, grad_output)[source]
Use a straight-through estimator for the gradient. This uses the identity function.
Parameters: grad_output (Tensor) –
Returns: The provided tensor.
Return type: Tensor

class botorch.utils.rounding.RoundSTE(*args, **kwargs)[source]
Bases: IdentitySTEFunction
Round the input tensor and use a straight-through gradient estimator. [Daulton2022bopr] proposes using this in acquisition optimization.
static forward(ctx, X)[source]
Round the input tensor element-wise.
Parameters: X (Tensor) – The tensor to be rounded.
Returns: A tensor where each element is rounded to the nearest integer.
Return type: Tensor

class botorch.utils.rounding.OneHotArgmaxSTE(*args, **kwargs)[source]
Bases: IdentitySTEFunction
Discretize a continuous relaxation of a one-hot encoded categorical. This returns a one-hot encoded categorical and uses a straight-through gradient estimator via an identity function. [Daulton2022bopr] proposes using this in acquisition optimization.
static forward(ctx, X)[source]
Discretize the input tensor. This applies an argmax along the last dimension of the input tensor and one-hot encodes the result.
Parameters: X (Tensor) – The tensor to be discretized.
Returns: A tensor whose last dimension is a one-hot encoding of the argmax.
Return type: Tensor
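To make the straight-through behavior concrete, here is a small sketch (mine, not from the reference): the forward pass rounds, while the backward pass lets gradients through unchanged.

import torch
from botorch.utils.rounding import RoundSTE

X = torch.tensor([0.2, 1.7, 2.4], requires_grad=True)
X_round = RoundSTE.apply(X)  # tensor([0., 2., 2.]) in the forward pass
X_round.sum().backward()
print(X.grad)                # identity gradient: tensor([1., 1., 1.])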
## Sampling¶

Utilities for MC and qMC sampling.

References

[Trikalinos2014polytope] T. A. Trikalinos and G. van Valkenhoef. Efficient sampling from uniform density n-polytopes. Technical report, Brown University, 2014.

botorch.utils.sampling.manual_seed(seed=None)[source]
Contextmanager for manually setting the torch.random seed.
Parameters: seed (Optional[int]) – The seed to set the random number generator to.
Returns: Generator
Return type: Generator[None, None, None]

Example

>>> with manual_seed(1234):
>>>     X = torch.rand(3)

botorch.utils.sampling.draw_sobol_samples(bounds, n, q, batch_shape=None, seed=None)[source]
Draw qMC samples from the box defined by bounds.
Parameters:
• bounds (Tensor) – A 2 x d dimensional tensor specifying box constraints on a d-dimensional space, where bounds[0, :] and bounds[1, :] correspond to lower and upper bounds, respectively.
• n (int) – The number of (q-batch) samples. As a best practice, use powers of 2.
• q (int) – The size of each q-batch.
• batch_shape (Optional[Iterable[int], torch.Size]) – The batch shape of the samples. If given, returns samples of shape n x batch_shape x q x d, where each batch is an n x q x d-dim tensor of qMC samples.
• seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
Returns: A n x batch_shape x q x d-dim tensor of qMC samples from the box defined by bounds.
Return type: Tensor

Example

>>> bounds = torch.stack([torch.zeros(3), torch.ones(3)])
>>> samples = draw_sobol_samples(bounds, 16, 2)

botorch.utils.sampling.draw_sobol_normal_samples(d, n, device=None, dtype=None, seed=None)[source]
Draw qMC samples from a multi-variate standard normal N(0, I_d). A primary use-case for this functionality is to compute a QMC average of f(X) over X where each element of X is drawn N(0, 1).
Parameters:
• d (int) – The dimension of the normal distribution.
• n (int) – The number of samples to return. As a best practice, use powers of 2.
• device (Optional[device]) – The torch device.
• dtype (Optional[dtype]) – The torch dtype.
• seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
Returns: A tensor of qMC standard normal samples with dimension n x d with device and dtype specified by the input.
Return type: Tensor

Example

>>> samples = draw_sobol_normal_samples(2, 16)

botorch.utils.sampling.sample_hypersphere(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]
Sample uniformly from a unit d-sphere.
Parameters:
• d (int) – The dimension of the hypersphere.
• n (int) – The number of samples to return.
• qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
• seed (Optional[int]) – If provided, use as a seed for the RNG.
• device (Optional[device]) – The torch device.
• dtype (Optional[dtype]) – The torch dtype.
Returns: An n x d tensor of uniform samples from the d-hypersphere.
Return type: Tensor

Example

>>> sample_hypersphere(d=5, n=10)

botorch.utils.sampling.sample_simplex(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]
Sample uniformly from a d-simplex.
Parameters:
• d (int) – The dimension of the simplex.
• n (int) – The number of samples to return.
• qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
• seed (Optional[int]) – If provided, use as a seed for the RNG.
• device (Optional[device]) – The torch device.
• dtype (Optional[dtype]) – The torch dtype.
Returns: An n x d tensor of uniform samples from the d-simplex.
Return type: Tensor

Example

>>> sample_simplex(d=3, n=10)

botorch.utils.sampling.sample_polytope(A, b, x0, n=10000, n0=100, seed=None)[source]
Hit-and-run sampler for drawing uniform samples from a polytope described via inequality constraints A*x<=b.
Parameters:
• A (Tensor) – A Tensor describing inequality constraints so that all samples satisfy Ax<=b.
• b (Tensor) – A Tensor describing the inequality constraints so that all samples satisfy Ax<=b.
• x0 (Tensor) – A d-dim Tensor representing a starting point of the chain satisfying the constraints.
• n (int) – The number of resulting samples kept in the output.
• n0 (int) – The number of burn-in samples. The chain will produce n+n0 samples but the first n0 samples are not saved.
• seed (Optional[int]) – The seed for the sampler. If omitted, use a random seed.
Returns: (n, d) dim Tensor containing the resulting samples.
Return type: Tensor
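As a quick illustration of the hit-and-run sampler, here is my own sketch; the column-vector shapes for b and x0 follow the A*x<=b convention above and are an assumption:

import torch
from botorch.utils.sampling import sample_polytope

# The 2-d unit box [0, 1]^2 written as A @ x <= b.
A = torch.tensor([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = torch.tensor([[1.0], [1.0], [0.0], [0.0]])
x0 = torch.tensor([[0.5], [0.5]])  # interior starting point
samples = sample_polytope(A=A, b=b, x0=x0, n=256, n0=100, seed=0)
print(samples.shape)  # torch.Size([256, 2])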
botorch.utils.sampling.batched_multinomial(weights, num_samples, replacement=False, generator=None, out=None)[source]
Sample from multinomial with an arbitrary number of batch dimensions.
Parameters:
• weights (Tensor) – A batch_shape x num_categories tensor of weights. For each batch index i, j, …, this function samples from a multinomial with input weights[i, j, …, :]. Note that the weights need not sum to one, but must be non-negative, finite and have a non-zero sum.
• num_samples (int) – The number of samples to draw for each batch index. Must be smaller than num_categories if replacement=False.
• replacement (bool) – If True, samples are drawn with replacement.
• generator (Optional[Generator]) – A pseudorandom number generator for sampling.
• out (Optional[Tensor]) – The output tensor (optional). If provided, must be of size batch_shape x num_samples.
Returns: A batch_shape x num_samples tensor of samples.
Return type: LongTensor

This is a thin wrapper around torch.multinomial that allows weight (input) tensors with an arbitrary number of batch dimensions (torch.multinomial only allows a single batch dimension). The calling signature is the same as for torch.multinomial.

Example

>>> weights = torch.rand(2, 3, 10)
>>> samples = batched_multinomial(weights, 4)  # shape is 2 x 3 x 4

botorch.utils.sampling.find_interior_point(A, b, A_eq=None, b_eq=None)[source]
Find an interior point of a polytope via linear programming.
Parameters:
• A (ndarray) – A n_ineq x d-dim numpy array containing the coefficients of the constraint inequalities.
• b (ndarray) – A n_ineq x 1-dim numpy array containing the right hand sides of the constraint inequalities.
• A_eq (Optional[ndarray]) – A n_eq x d-dim numpy array containing the coefficients of the constraint equalities.
• b_eq (Optional[ndarray]) – A n_eq x 1-dim numpy array containing the right hand sides of the constraint equalities.
Returns: A d-dim numpy array containing an interior point of the polytope. This function will raise a ValueError if there is no such point.
Return type: ndarray

This method solves the following Linear Program: min -s subject to A @ x <= b - 2 * s, s >= 0. In case the polytope is unbounded, it will also constrain the slack variable s to s <= 1.
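Note that find_interior_point operates on numpy arrays rather than Tensors. A minimal sketch (mine) for the same unit box as above:

import numpy as np
from botorch.utils.sampling import find_interior_point

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([[1.0], [1.0], [0.0], [0.0]])
x = find_interior_point(A=A, b=b)
print(x)  # approximately [0.5, 0.5], the point maximizing the slack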
class botorch.utils.sampling.HitAndRunPolytopeSampler(inequality_constraints=None, equality_constraints=None, bounds=None, interior_point=None, n_burnin=0)[source]
Bases: PolytopeSampler
A sampler for sampling from a polytope using a hit-and-run algorithm.
Parameters:
• inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is a n_ineq_con x d-dim Tensor and b is a n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space.
• equality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (C, d) describing the equality constraints C @ x = d, where C is a n_eq_con x d-dim Tensor and d is a n_eq_con x 1-dim Tensor with n_eq_con the number of equalities.
• bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds, where inf (-inf) means that the respective dimension is unbounded from above (below).
• interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, determined automatically by solving a Linear Program.
• n_burnin (int) – The number of burn-in samples.
draw(n=1, seed=None)[source]
Draw samples from the polytope.
Parameters:
• n (int) – The number of samples.
• seed (Optional[int]) – The random seed.
Returns: A n x d Tensor of samples from the polytope.
Return type: Tensor

class botorch.utils.sampling.DelaunayPolytopeSampler(inequality_constraints=None, equality_constraints=None, bounds=None, interior_point=None)[source]
Bases: PolytopeSampler
A polytope sampler using Delaunay triangulation. This sampler first enumerates the vertices of the constraint polytope and then uses a Delaunay triangulation to tessellate its convex hull. The sampling happens in two stages:
1. First, we sample from the set of hypertriangles generated by the Delaunay triangulation (i.e. which hypertriangle to draw the sample from) with probabilities proportional to the triangle volumes.
2. Then, we sample uniformly from the chosen hypertriangle by sampling uniformly from the unit simplex of the appropriate dimension, and then computing the convex combination of the vertices of the hypertriangle according to that draw from the simplex.
The best reference (not exactly the same, but functionally equivalent) is [Trikalinos2014polytope]. A simple R implementation is available at https://github.com/gertvv/tesselample.
Initialize DelaunayPolytopeSampler.
Parameters:
• inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is a n_ineq_con x d-dim Tensor and b is a n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space.
• equality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (C, d) describing the equality constraints C @ x = d, where C is a n_eq_con x d-dim Tensor and d is a n_eq_con x 1-dim Tensor with n_eq_con the number of equalities.
• bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds, where inf (-inf) means that the respective dimension is unbounded from above (below).
• interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, determined automatically by solving a Linear Program.
Warning: The vertex enumeration performed in this algorithm can become extremely costly if there are a large number of inequalities. Similarly, the triangulation can get very expensive in high dimensions. Only use this algorithm for moderate dimensions / moderately complex constraint sets. An alternative is the HitAndRunPolytopeSampler.
draw(n=1, seed=None)[source]
Draw samples from the polytope.
Parameters:
• n (int) – The number of samples.
• seed (Optional[int]) – The random seed.
Returns: A n x d Tensor of samples from the polytope.
Return type: Tensor

botorch.utils.sampling.normalize_linear_constraints(bounds, constraints)[source]
Normalize linear constraints to the unit cube.
Parameters:
• bounds (Tensor) – A 2 x d-dim tensor containing the box bounds.
• constraints (List[Tuple[Tensor, Tensor, float]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs or sum_i (X[indices[i]] * coefficients[i]) = rhs.
Return type: List[Tuple[Tensor, Tensor, float]]
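A small usage sketch for the hit-and-run sampler (mine; constructing it from box bounds alone and letting it solve for an interior point is an assumption based on the optional arguments above):

import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

bounds = torch.stack([torch.zeros(3), torch.ones(3)])  # the unit cube
sampler = HitAndRunPolytopeSampler(bounds=bounds, n_burnin=50)
samples = sampler.draw(n=64, seed=0)
print(samples.shape)  # torch.Size([64, 3])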
botorch.utils.sampling.get_polytope_samples(n, bounds, inequality_constraints=None, equality_constraints=None, seed=None, thinning=32, n_burnin=10000)[source]
Sample from polytope defined by box bounds and (in)equality constraints. This uses a hit-and-run Markov chain sampler. TODO: make this method return the sampler object, to avoid doing burn-in every time we draw samples.
Parameters:
• n (int) – The number of samples.
• bounds (Tensor) – A 2 x d-dim tensor containing the box bounds.
• inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
• equality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
• seed (Optional[int]) – The random seed.
• thinning (int) – The amount of thinning.
• n_burnin (int) – The number of burn-in samples for the Markov chain sampler.
Returns: A n x d-dim tensor of samples.
Return type: Tensor

botorch.utils.sampling.sparse_to_dense_constraints(d, constraints)[source]
Convert parameter constraints from a sparse format into a dense format. This method converts sparse triples of the form (indices, coefficients, rhs) to constraints of the form Ax >= b or Ax = b.
Parameters:
• d (int) – The input dimension.
• constraints (List[Tuple[Tensor, Tensor, float]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an (in)equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs or sum_i (X[indices[i]] * coefficients[i]) = rhs.
Returns:
• A: A n_constraints x d-dim tensor of coefficients.
• b: A n_constraints x 1-dim tensor of right hand sides.
Return type: A two-element tuple containing

## Sampling from GP priors¶

class botorch.utils.gp_sampling.GPDraw(model, seed=None)[source]
Bases: Module
Convenience wrapper for sampling a function from a GP prior. This wrapper implicitly defines the GP sample as a self-updating function by keeping track of the evaluated points and respective base samples used during the evaluation. This does not yet support multi-output models.
Construct a GP function sampler.
Parameters:
• model (Model) – The Model defining the GP prior.
• seed (Optional[int]) –
property Xs: Tensor
A (batch_shape) x n_eval x d-dim tensor of locations at which the GP was evaluated (or None if the sample has never been evaluated).
property Ys: Tensor
A (batch_shape) x n_eval x d-dim tensor of associated function values (or None if the sample has never been evaluated).
forward(X)[source]
Evaluate the GP sample function at a set of points X.
Parameters: X (Tensor) – A batch_shape x n x d-dim tensor of points
Returns: The value of the GP sample at the n points.
Return type: Tensor
training: bool
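A sketch of drawing and evaluating a single GP sample path (mine; using SingleTaskGP as the underlying model is an assumption):

import torch
from botorch.models import SingleTaskGP
from botorch.utils.gp_sampling import GPDraw

train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.rand(8, 1, dtype=torch.double)
gp_sample = GPDraw(SingleTaskGP(train_X, train_Y), seed=0)

X_test = torch.rand(4, 2, dtype=torch.double)
Y1 = gp_sample(X_test)  # evaluate the sampled function
Y2 = gp_sample(X_test)  # the self-updating sample stays consistent
print(torch.allclose(Y1, Y2))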
class botorch.utils.gp_sampling.RandomFourierFeatures(kernel, input_dim, num_rff_features, sample_shape=None)[source]
Bases: Module
A class that represents Random Fourier Features.
Initialize RandomFourierFeatures.
Parameters:
• kernel (Kernel) – The GP kernel.
• input_dim (int) – The input dimension to the GP kernel.
• num_rff_features (int) – The number of Fourier features.
• sample_shape (Optional[torch.Size]) – The shape of a single sample. For a single-element torch.Size object, this is simply the number of RFF draws.
forward(X)[source]
Get Fourier basis features for the provided inputs. Note that the right-most subset of the batch shape of X should be (sample_shape) x (kernel_batch_shape) if using either the sample_shape argument or a batched kernel. In other words, X should be of shape (added_batch_shape) x (sample_shape) x (kernel_batch_shape) x n x input_dim, where parentheses denote that the given batch shape can be empty. X can always be a tensor of shape n x input_dim, in which case broadcasting will take care of the batch shape. This will raise a ValueError if the batch shapes are not compatible.
Parameters: X (Tensor) – Input tensor of shape (batch_shape) x n x input_dim.
Returns: A Tensor of shape (batch_shape) x n x rff. If X does not have a batch_shape, the output batch_shape will be (sample_shape) x (kernel_batch_shape).
Return type: Tensor
training: bool

botorch.utils.gp_sampling.get_deterministic_model_multi_samples(weights, bases)[source]
Get a batched deterministic model that batch evaluates n_samples function samples. This supports multi-output models as well.
Parameters:
• weights (List[Tensor]) – A list of weights with num_outputs elements. Each weight is of shape (batch_shape_input) x n_samples x num_rff_features, where (batch_shape_input) is the batch shape of the inputs used to obtain the posterior weights.
• bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with num_outputs elements. Each basis has a sample shape of n_samples.
• n_samples – The number of function samples.
Returns: A batched GenericDeterministicModel that batch evaluates n_samples function samples.
Return type: GenericDeterministicModel

botorch.utils.gp_sampling.get_eval_gp_sample_callable(w, basis)[source]
Parameters:
• w (Tensor) –
• basis (RandomFourierFeatures) –
Return type: Tensor

botorch.utils.gp_sampling.get_deterministic_model(weights, bases)[source]
Get a deterministic model using the provided weights and bases for each output.
Parameters:
• weights (List[Tensor]) – A list of weights with m elements.
• bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with m elements.
Returns: A deterministic model.
Return type: GenericDeterministicModel

botorch.utils.gp_sampling.get_deterministic_model_list(weights, bases)[source]
Get a deterministic model list using the provided weights and bases for each output.
Parameters:
• weights (List[Tensor]) – A list of weights with m elements.
• bases (List[RandomFourierFeatures]) – A list of RandomFourierFeatures with m elements.
Returns: A deterministic model.
Return type: ModelList

botorch.utils.gp_sampling.get_weights_posterior(X, y, sigma_sq)[source]
Sample Bayesian linear regression weights.
Parameters:
• X (Tensor) – A tensor of inputs with shape (*batch_shape, n, num_rff_features).
• y (Tensor) – A tensor of outcomes with shape (*batch_shape, n).
• sigma_sq (Tensor) – The likelihood noise variance. This should be a tensor with shape kernel_batch_shape, 1, 1 if using a batched kernel. Otherwise, it should be a scalar tensor.
Returns: The posterior distribution over the weights.
Return type: MultivariateNormal

botorch.utils.gp_sampling.get_gp_samples(model, num_outputs, n_samples, num_rff_features=512)[source]
Sample functions from GP posterior using RFFs. The returned GenericDeterministicModel effectively wraps num_outputs models, each of which has a batch shape of n_samples. Refer to get_deterministic_model_multi_samples for more details. NOTE: If using input / outcome transforms, the gp samples must be accessed via the gp_sample.posterior(X) call. Otherwise, gp_sample(X) will produce bogus values that do not agree with the underlying model. It is also highly recommended to use outcome transforms to standardize the input data, since the gp samples do not work well when training outcomes are not zero-mean.
Parameters:
• model (Model) – The model.
• num_outputs (int) – The number of outputs.
• n_samples (int) – The number of functions to be sampled IID.
• num_rff_features (int) – The number of random Fourier features.
Returns: A GenericDeterministicModel that evaluates n_samples sampled functions.
If n_samples > 1, this will be a batched model. Return type: GenericDeterministicModel ## Testing¶ class botorch.utils.testing.BotorchTestCase(methodName='runTest')[source] Bases: TestCase Basic test case for Botorch. This 1. sets the default device to be torch.device(“cpu”) 2. ensures that no warnings are suppressed by default. Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name. device = device(type='cpu') setUp()[source] Hook method for setting up the test fixture before exercising it. class botorch.utils.testing.BaseTestProblemBaseTestCase[source] Bases: object functions: List[BaseTestProblem] test_forward()[source] class botorch.utils.testing.SyntheticTestFunctionBaseTestCase[source] test_optimal_value()[source] test_optimizer()[source] functions: List[BaseTestProblem] class botorch.utils.testing.MockPosterior(mean=None, variance=None, samples=None, base_shape=None, batch_range=None)[source] Bases: Posterior Mock object that implements dummy methods and feeds through specified outputs Parameters: • mean – The mean of the posterior. • variance – The variance of the posterior. • samples – Samples to return from rsample, unless base_samples is provided. • base_shape – If given, this is returned as base_sample_shape, and also used as the base of the _extended_shape. • batch_range – If given, this is returned as batch_range. Defaults to (0, -2). property device: device The torch device of the distribution. property dtype: dtype The torch dtype of the distribution. property batch_shape: Size property base_sample_shape: Size The base shape of the base samples expected in rsample. Informs the sampler to produce base samples of shape sample_shape x base_sample_shape. property batch_range: Tuple[int, int] The t-batch range. This is used in samplers to identify the t-batch component of the base_sample_shape. The base samples are expanded over the t-batches to provide consistency in the acquisition values, i.e., to ensure that a candidate produces same value regardless of its position on the t-batch. property mean property variance rsample(sample_shape=None, base_samples=None)[source] Mock sample by repeating self._samples. If base_samples is provided, do a shape check but return the same mock samples. Parameters: • sample_shape (Optional[Size]) – • base_samples (Optional[Tensor]) – Return type: Tensor rsample_from_base_samples(sample_shape, base_samples)[source] Sample from the posterior (with gradients) using base samples. This is intended to be used with a sampler that produces the corresponding base samples, and enables acquisition optimization via Sample Average Approximation. Parameters: • sample_shape (Size) – A torch.Size object specifying the sample shape. To draw n samples, set to torch.Size([n]). To draw b batches of n samples each, set to torch.Size([b, n]). • base_samples (Tensor) – The base samples, obtained from the appropriate sampler. This is a tensor of shape sample_shape x base_sample_shape. Returns: Samples from the posterior, a tensor of shape self._extended_shape(sample_shape=sample_shape). Return type: Tensor class botorch.utils.testing.MockModel(posterior)[source] Mock object that implements dummy methods and feeds through specified outputs Initializes internal Module state, shared by both nn.Module and ScriptModule. 
Parameters: posterior (MockPosterior) – posterior(X, output_indices=None, posterior_transform=None, observation_noise=False)[source] Computes the posterior over model outputs at the provided points. Note: The input transforms should be applied here using self.transform_inputs(X) after the self.eval() call and before any model.forward or model.likelihood calls. Parameters: • X (Tensor) – A b x q x d-dim Tensor, where d is the dimension of the feature space, q is the number of points considered jointly, and b is the batch dimension. • output_indices (Optional[List[int]]) – A list of indices, corresponding to the outputs over which to compute the posterior (if the model is multi-output). Can be used to speed up computation if only a subset of the model’s outputs are required for optimization. If omitted, computes the posterior over all model outputs. • observation_noise (bool) – If True, add observation noise to the posterior. • posterior_transform (Optional[PosteriorTransform]) – An optional PosteriorTransform. Returns: A Posterior object, representing a batch of b joint distributions over q points and m outputs each. Return type: MockPosterior property num_outputs: int The number of outputs of the model. property batch_shape: Size The batch shape of the model. This is a batch shape from an I/O perspective, independent of the internal representation of the model (as e.g. in BatchedMultiOutputGPyTorchModel). For a model with m outputs, a test_batch_shape x q x d-shaped input X to the posterior method returns a Posterior object over an output of shape broadcast(test_batch_shape, model.batch_shape) x q x m. state_dict()[source] Returns a dictionary containing references to the whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included. Note The returned object is a shallow copy. It contains references to the module’s parameters and buffers. Warning Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases. Warning Please avoid the use of argument destination as it is not designed for end-users. Parameters: • destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None. • prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''. • keep_vars (bool, optional) – by default the Tensor s returned in the state dict are detached from autograd. If it’s set to True, detaching will not be performed. Default: False. Returns: a dictionary containing a whole state of the module Return type: dict Example: >>> # xdoctest: +SKIP("undefined vars") >>> module.state_dict().keys() ['bias', 'weight'] Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function. Parameters: • state_dict (dict) – a dict containing parameters and persistent buffers. • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. 
Default: True Returns: • missing_keys is a list of str containing the missing keys • unexpected_keys is a list of str containing the unexpected keys Return type: NamedTuple with missing_keys and unexpected_keys fields Note If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError. class botorch.utils.testing.MockAcquisitionFunction[source] Bases: object Mock acquisition function object that implements dummy methods. set_X_pending(X_pending=None)[source] Parameters: X_pending (Optional[Tensor]) – class botorch.utils.testing.MultiObjectiveTestProblemBaseTestCase[source] test_attributes()[source] test_max_hv()[source] test_ref_point()[source] functions: List[BaseTestProblem] class botorch.utils.testing.ConstrainedMultiObjectiveTestProblemBaseTestCase[source] test_num_constraints()[source] test_evaluate_slack_true()[source] functions: List[BaseTestProblem] ## Torch¶ class botorch.utils.torch.BufferDict(buffers=None)[source] Bases: Module Holds buffers in a dictionary. BufferDict can be indexed like a regular Python dictionary, but buffers it contains are properly registered, and will be visible by all Module methods. BufferDict is an ordered dictionary that respects • the order of insertion, and • in update(), the order of the merged OrderedDict or another BufferDict (the argument to update()). Note that update() with other unordered mapping types (e.g., Python’s plain dict) does not preserve the order of the merged mapping. Parameters: buffers (iterable, optional) – a mapping (dictionary) of (string : Tensor) or an iterable of key-value pairs of type (string, Tensor) Example: class MyModule(nn.Module): def __init__(self): super(MyModule, self).__init__() self.buffers = nn.BufferDict({ 'left': torch.randn(5, 10), 'right': torch.randn(5, 10) }) def forward(self, x, choice): x = self.buffers[choice].mm(x) return x Parameters: buffers – A mapping (dictionary) from string to Tensor, or an iterable of key-value pairs of type (string, Tensor). clear()[source] Remove all items from the BufferDict. pop(key)[source] Remove key from the BufferDict and return its buffer. Parameters: key (string) – key to pop from the BufferDict keys()[source] Return an iterable of the BufferDict keys. items()[source] Return an iterable of the BufferDict key/value pairs. values()[source] Return an iterable of the BufferDict values. update(buffers)[source] Update the BufferDict with the key-value pairs from a mapping or an iterable, overwriting existing keys. Note If buffers is an OrderedDict, a BufferDict, or an iterable of key-value pairs, the order of new elements in it is preserved. Parameters: buffers (iterable) – a mapping (dictionary) from string to Tensor, or an iterable of key-value pairs of type (string, Tensor) extra_repr()[source] Set the extra representation of the module To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable. training: bool ## Transformations¶ Some basic data transformation helpers. botorch.utils.transforms.squeeze_last_dim(Y)[source] Squeeze the last dimension of a Tensor. Parameters: Y (Tensor) – A … x d-dim Tensor. Returns: The input tensor with last dimension squeezed. Return type: Tensor Example >>> Y = torch.rand(4, 3) >>> Y_squeezed = squeeze_last_dim(Y) botorch.utils.transforms.standardize(Y)[source] Standardizes (zero mean, unit variance) a tensor by dim=-2. 
If the tensor is single-dimensional, simply standardizes the tensor. If for some batch index all elements are equal (or if there is only a single data point), this function will return 0 for that batch index. Parameters: Y (Tensor) – A batch_shape x n x m-dim tensor. Returns: The standardized Y. Return type: Tensor Example >>> Y = torch.rand(4, 3) >>> Y_standardized = standardize(Y) botorch.utils.transforms.normalize(X, bounds)[source] Min-max normalize X w.r.t. the provided bounds. Parameters: • X (Tensor) – … x d tensor of data • bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of the X’s d columns. Returns: A … x d-dim tensor of normalized data, given by (X - bounds[0]) / (bounds[1] - bounds[0]). If all elements of X are contained within bounds, the normalized values will be contained within [0, 1]^d. Return type: Tensor Example >>> X = torch.rand(4, 3) >>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)]) >>> X_normalized = normalize(X, bounds) botorch.utils.transforms.unnormalize(X, bounds)[source] Un-normalizes X w.r.t. the provided bounds. Parameters: • X (Tensor) – … x d tensor of data • bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of the X’s d columns. Returns: A … x d-dim tensor of unnormalized data, given by X * (bounds[1] - bounds[0]) + bounds[0]. If all elements of X are contained in [0, 1]^d, the un-normalized values will be contained within bounds. Return type: Tensor Example >>> X_normalized = torch.rand(4, 3) >>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)]) >>> X = unnormalize(X_normalized, bounds) botorch.utils.transforms.normalize_indices(indices, d)[source] Normalize a list of indices to ensure that they are positive. Parameters: • indices (Optional[List[int]]) – A list of indices (may contain negative indices for indexing “from the back”). • d (int) – The dimension of the tensor to index. Returns: A normalized list of indices such that each index is between 0 and d-1, or None if indices is None. Return type: Optional[List[int]] botorch.utils.transforms.is_fully_bayesian(model)[source] Check if at least one model is a SaasFullyBayesianSingleTaskGP Parameters: • model (Model) – A BoTorch model (may be a ModelList or ModelListGP) • d – The dimension of the tensor to index. Returns: True if at least one model is a SaasFullyBayesianSingleTaskGP Return type: bool botorch.utils.transforms.t_batch_mode_transform(expected_q=None, assert_output_shape=True)[source] Factory for decorators enabling consistent t-batch behavior. This method creates decorators for instance methods to transform an input tensor X to t-batch mode (i.e. with at least 3 dimensions). This assumes the tensor has a q-batch dimension. The decorator also checks the q-batch size if expected_q is provided, and the output shape if assert_output_shape is True. Parameters: • expected_q (Optional[int]) – The expected q-batch size of X. If specified, this will raise an AssertionError if X’s q-batch size does not equal expected_q. • assert_output_shape (bool) – If True, this will raise an AssertionError if the output shape does not match either the t-batch shape of X, or the acqf.model.batch_shape for acquisition functions using batched models. Returns: The decorated instance method. Return type: Callable[[Callable[[AcquisitionFunction, Any], Any]], Callable[[AcquisitionFunction, Any], Any]] Example >>> class ExampleClass: >>> @t_batch_mode_transform(expected_q=1) >>> def single_q_method(self, X): >>> ... 
>>> >>> @t_batch_mode_transform() >>> def arbitrary_q_method(self, X): >>> ... botorch.utils.transforms.concatenate_pending_points(method)[source] Decorator concatenating X_pending into an acquisition function’s argument. This decorator works on the forward method of acquisition functions taking a tensor X as the argument. If the acquisition function has an X_pending attribute (that is not None), this is concatenated into the input X, appropriately expanding the pending points to match the batch shape of X. Example >>> class ExampleAcquisitionFunction: >>> @concatenate_pending_points >>> @t_batch_mode_transform() >>> def forward(self, X): >>> ... Parameters: method (Callable[[Any, Tensor], Any]) – Return type: Callable[[Any, Tensor], Any] botorch.utils.transforms.match_batch_shape(X, Y)[source] Matches the batch dimension of a tensor to that of another tensor. Parameters: • X (Tensor) – A batch_shape_X x q x d tensor, whose batch dimensions that correspond to batch dimensions of Y are to be matched to those (if compatible). • Y (Tensor) – A batch_shape_Y x q’ x d tensor. Returns: A batch_shape_Y x q x d tensor containing the data of X expanded to the batch dimensions of Y (if compatible). For instance, if X is b’’ x b’ x q x d and Y is b x q x d, then the returned tensor is b’’ x b x q x d. Return type: Tensor Example >>> X = torch.rand(2, 1, 5, 3) >>> Y = torch.rand(2, 6, 4, 3) >>> X_matched = match_batch_shape(X, Y) >>> X_matched.shape torch.Size([2, 6, 5, 3]) botorch.utils.transforms.convert_to_target_pre_hook(module, *args)[source] Pre-hook for automatically calling .to(X) on module prior to forward ## Feasible Volume¶ botorch.utils.feasible_volume.get_feasible_samples(samples, inequality_constraints=None)[source] Checks which of the samples satisfy all of the inequality constraints. Parameters: • samples (Tensor) – A sample size x d size tensor of feature samples, where d is a feature dimension. • constraints (inequality) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs. • inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – Returns: 2-element tuple containing • Samples satisfying the linear constraints. • Estimated proportion of samples satisfying the linear constraints. Return type: Tuple[Tensor, float] botorch.utils.feasible_volume.get_outcome_feasibility_probability(model, X, outcome_constraints, threshold=0.1, nsample_outcome=1000, seed=None)[source] Monte Carlo estimate of the feasible volume with respect to the outcome constraints. Parameters: • model (Model) – The model used for sampling the posterior. • X (Tensor) – A tensor of dimension batch-shape x 1 x d, where d is feature dimension. • outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility. • threshold (float) – A lower limit for the probability of posterior samples feasibility. • nsample_outcome (int) – The number of samples from the model posterior. • seed (Optional[int]) – The seed for the posterior sampler. If omitted, use a random seed. Returns: Estimated proportion of features for which posterior samples satisfy given outcome constraints with probability above or equal to the given threshold. 
Return type: float
botorch.utils.feasible_volume.estimate_feasible_volume(bounds, model, outcome_constraints, inequality_constraints=None, nsample_feature=1000, nsample_outcome=1000, threshold=0.1, verbose=False, seed=None, device=None, dtype=None)[source]
Monte Carlo estimate of the feasible volume with respect to feature constraints and outcome constraints.
Parameters:
• bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
• model (Model) – The model used for sampling the outcomes.
• outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
• inequality_constraints (Optional[List[Tuple[Tensor, Tensor, float]]]) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
• nsample_feature (int) – The number of feature samples satisfying the bounds.
• nsample_outcome (int) – The number of outcome samples from the model posterior.
• threshold (float) – A lower limit for the probability of outcome feasibility.
• seed (Optional[int]) – The seed for both feature and outcome samplers. If omitted, use a random seed.
• verbose (bool) – An indicator for whether to log the results.
• device (Optional[device]) –
• dtype (Optional[dtype]) –
Returns: 2-element tuple containing
• Estimated proportion of volume in feature space that is feasible wrt the bounds and the inequality constraints (linear).
• Estimated proportion of feasible features for which posterior samples (outcome) satisfy the outcome constraints with probability above the given threshold.
## Constants
botorch.utils.constants.get_constants(values, device=None, dtype=None)[source]
Returns scalar-valued Tensors containing each of the given constants. Used to expedite tensor operations involving scalar arithmetic. Note that the returned Tensors should not be modified in-place.
Parameters:
• values (Union[Number, Iterator[Number]]) –
• device (Optional[device]) –
• dtype (Optional[dtype]) –
Return type: Union[Tensor, Tuple[Tensor, …]]
botorch.utils.constants.get_constants_like(values, ref)[source]
Parameters:
• values (Union[Number, Iterator[Number]]) –
• ref (Tensor) –
Return type: Union[Tensor, Iterator[Tensor]]
## Safe Math
botorch.utils.safe_math.exp(x, **kwargs)[source]
Parameters: x (Tensor) –
Return type: Tensor
botorch.utils.safe_math.log(x, **kwargs)[source]
Parameters: x (Tensor) –
Return type: Tensor
botorch.utils.safe_math.add(a, b)[source]
Parameters:
• a (Tensor) –
• b (Tensor) –
Return type: Tensor
botorch.utils.safe_math.sub(a, b)[source]
Parameters:
• a (Tensor) –
• b (Tensor) –
Return type: Tensor
botorch.utils.safe_math.div(a, b)[source]
Parameters:
• a (Tensor) –
• b (Tensor) –
Return type: Tensor
botorch.utils.safe_math.mul(a, b)[source]
Parameters:
• a (Tensor) –
• b (Tensor) –
Return type: Tensor
## Abstract Box Decompositions
Box decomposition algorithms.
References
[Lacour17] R. Lacour, K. Klamroth, and C. Fonseca. A box decomposition algorithm to compute the hypervolume indicator. Computers & Operations Research, Volume 79, 2017.
## Box Decomposition List
Box decomposition container.
class botorch.utils.multi_objective.box_decompositions.box_decomposition_list.BoxDecompositionList(*box_decompositions)[source]
Bases: Module
A list of box decompositions.
Initialize the box decomposition list.
Parameters: *box_decompositions (BoxDecomposition) – A variable number of box decompositions.
Example
>>> bd1 = FastNondominatedPartitioning(ref_point, Y=Y1)
>>> bd2 = FastNondominatedPartitioning(ref_point, Y=Y2)
>>> bd = BoxDecompositionList(bd1, bd2)
property pareto_Y: List[Tensor]
This returns the non-dominated set.
Note: Internally, we store the negative pareto set (minimization).
Returns: A list where the ith element is the n_pareto_i x m-dim tensor of pareto optimal outcomes for each box_decomposition i.
property ref_point: Tensor
Get the reference point.
Note: Internally, we store the negative reference point (minimization).
Returns: A n_box_decompositions x m-dim tensor of outcomes.
get_hypercell_bounds()[source]
Get the bounds of each hypercell in the decomposition.
Returns: A 2 x n_box_decompositions x num_cells x num_outcomes-dim tensor containing the lower and upper vertices bounding each hypercell.
Return type: Tensor
update(Y)[source]
Update the partitioning.
Parameters: Y (Union[List[Tensor], Tensor]) – A n_box_decompositions x n x num_outcomes-dim tensor or a list where the ith element contains the new points for box_decomposition i.
Return type: None
compute_hypervolume()[source]
Compute the hypervolume that is dominated by the Pareto frontier.
Returns: A (batch_shape)-dim tensor containing the hypervolume dominated by each Pareto frontier.
Return type: Tensor
training: bool
## Box Decomposition Utilities
Utilities for box decomposition algorithms.
botorch.utils.multi_objective.box_decompositions.utils.compute_local_upper_bounds(U, Z, z)[source]
Compute local upper bounds.
Note: this assumes minimization.
This uses the incremental algorithm (Alg. 1) from [Lacour17].
Parameters:
• U (Tensor) – A n x m-dim tensor containing the local upper bounds.
• Z (Tensor) – A n x m x m-dim tensor containing the defining points.
• z (Tensor) – A m-dim tensor containing the new point.
Returns: 2-element tuple containing
• A new n’ x m-dim tensor of local upper bounds.
• A n’ x m x m-dim tensor containing the defining points.
botorch.utils.multi_objective.box_decompositions.utils.get_partition_bounds(Z, U, ref_point)[source]
Get the cell bounds given the local upper bounds and the defining points.
This implements Equation 2 in [Lacour17].
Parameters:
• Z (Tensor) – A n x m x m-dim tensor containing the defining points. The first dimension corresponds to u_idx, the second dimension corresponds to j, and Z[u_idx, j] is the set of defining points Z^j(u) where u = U[u_idx].
• U (Tensor) – A n x m-dim tensor containing the local upper bounds.
• ref_point (Tensor) – A m-dim tensor containing the reference point.
Returns: A 2 x num_cells x m-dim tensor containing the lower and upper vertices bounding each hypercell.
Return type: Tensor
botorch.utils.multi_objective.box_decompositions.utils.update_local_upper_bounds_incremental(new_pareto_Y, U, Z)[source]
Update the current local upper bounds with the new pareto points.
This assumes minimization.
Parameters:
• new_pareto_Y (Tensor) – A n x m-dim tensor containing the new Pareto points.
• U (Tensor) – A n’ x m-dim tensor containing the local upper bounds.
• Z (Tensor) – A n x m x m-dim tensor containing the defining points.
Returns: 2-element tuple containing
• A new n’ x m-dim tensor of local upper bounds.
• A n’ x m x m-dim tensor containing the defining points.
botorch.utils.multi_objective.box_decompositions.utils.compute_non_dominated_hypercell_bounds_2d(pareto_Y_sorted, ref_point)[source]
Compute an axis-aligned partitioning of the non-dominated space for 2 objectives.
Parameters:
• pareto_Y_sorted (Tensor) – A (batch_shape) x n_pareto x 2-dim tensor of pareto outcomes that are sorted by the 0th dimension in increasing order. All points must be better than the reference point.
• ref_point (Tensor) – A (batch_shape) x 2-dim reference point.
Returns: A 2 x (batch_shape) x n_pareto + 1 x m-dim tensor of cell bounds.
Return type: Tensor
botorch.utils.multi_objective.box_decompositions.utils.compute_dominated_hypercell_bounds_2d(pareto_Y_sorted, ref_point)[source]
Compute an axis-aligned partitioning of the dominated space for 2 objectives.
Parameters:
• pareto_Y_sorted (Tensor) – A (batch_shape) x n_pareto x 2-dim tensor of pareto outcomes that are sorted by the 0th dimension in increasing order.
• ref_point (Tensor) – A 2-dim reference point.
Returns: A 2 x (batch_shape) x n_pareto x m-dim tensor of cell bounds.
Return type: Tensor
## Dominated Partitionings
Algorithms for partitioning the dominated space into hyperrectangles.
class botorch.utils.multi_objective.box_decompositions.dominated.DominatedPartitioning(ref_point, Y=None)[source]
Bases: FastPartitioning
Partition the dominated space into axis-aligned hyperrectangles.
This uses Algorithm 1 from [Lacour17].
Example
>>> bd = DominatedPartitioning(ref_point, Y)
Parameters:
• ref_point (Tensor) – A m-dim tensor containing the reference point.
• Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
training: bool
## Hypervolume
Hypervolume Utilities.
References
[Fonseca2006] C. M. Fonseca, L. Paquete, and M. Lopez-Ibanez. An improved dimension-sweep algorithm for the hypervolume indicator. In IEEE Congress on Evolutionary Computation, pages 1157-1163, Vancouver, Canada, July 2006.
[Ishibuchi2011] H. Ishibuchi, N. Akedo, and Y. Nojima. A many-objective test problem for visually examining diversity maintenance behavior in a decision space. Proc. 13th Annual Conf. Genetic Evol. Comput., 2011.
botorch.utils.multi_objective.hypervolume.infer_reference_point(pareto_Y, max_ref_point=None, scale=0.1, scale_max_ref_point=False)[source]
Get reference point for hypervolume computations.
This sets the reference point to be ref_point = nadir - 0.1 * range when there is no pareto_Y that is better than the reference point. [Ishibuchi2011] find 0.1 to be a robust multiplier for scaling the nadir point.
Note: this assumes maximization of all objectives.
Parameters:
• pareto_Y (Tensor) – A n x m-dim tensor of Pareto-optimal points.
• max_ref_point (Optional[Tensor]) – A m-dim tensor indicating the maximum reference point.
• scale (float) – A multiplier used to scale back the reference point based on the range of each objective.
• scale_max_ref_point (bool) – A boolean indicating whether to apply scaling to the max_ref_point based on the range of each objective.
Returns: A m-dim tensor containing the reference point.
Return type: Tensor
class botorch.utils.multi_objective.hypervolume.Hypervolume(ref_point)[source]
Bases: object
Hypervolume computation dimension sweep algorithm from [Fonseca2006].
Adapted from Simon Wessing’s implementation of the algorithm (Variant 3, Version 1.2) in [Fonseca2006] in PyMOO: https://github.com/msu-coinlab/pymoo/blob/master/pymoo/vendor/hv.py
Maximization is assumed.
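A short, hedged usage sketch consistent with the constructor above and the compute method documented below (ref_point and pareto_Y are placeholder tensors):
Example
>>> ref_point = torch.tensor([0.0, 0.0])
>>> hv = Hypervolume(ref_point)
>>> volume = hv.compute(pareto_Y)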
TODO: write this in C++ for faster looping.
Initialize hypervolume object.
Parameters: ref_point (Tensor) – m-dim Tensor containing the reference point.
property ref_point: Tensor
Get reference point (for maximization).
Returns: A m-dim tensor containing the reference point.
compute(pareto_Y)[source]
Compute hypervolume.
Parameters: pareto_Y (Tensor) – A n x m-dim tensor of pareto optimal outcomes.
Returns: The hypervolume.
Return type: float
botorch.utils.multi_objective.hypervolume.sort_by_dimension(nodes, i)[source]
Sorts the list of nodes in-place by the specified objective.
Parameters:
• nodes (List[Node]) – A list of Nodes.
• i (int) – The index of the objective to sort by.
Return type: None
class botorch.utils.multi_objective.hypervolume.Node(m, dtype, device, data=None)[source]
Bases: object
Node in the MultiList data structure.
Initialize Node.
Parameters:
• m (int) – The number of objectives.
• dtype (torch.dtype) – The dtype.
• device (torch.device) – The device.
• data (Optional[Tensor]) – The tensor data to be stored in this Node.
class botorch.utils.multi_objective.hypervolume.MultiList(m, dtype, device)[source]
Bases: object
A special data structure used in hypervolume computation.
It consists of several doubly linked lists that share common nodes. Every node has multiple predecessors and successors, one in every list.
Parameters:
• m (int) – The number of doubly linked lists.
• dtype (torch.dtype) – The dtype.
• device (torch.device) – The device.
append(node, index)[source]
Appends a node to the end of the list at the given index.
Parameters:
• node (Node) – The new node.
• index (int) – The index where the node should be appended.
Return type: None
extend(nodes, index)[source]
Extends the list at the given index with the nodes.
Parameters:
• nodes (List[Node]) – A list of nodes to append at the given index.
• index (int) – The index where the nodes should be appended.
Return type: None
remove(node, index, bounds)[source]
Removes and returns ‘node’ from all lists in [0, ‘index’].
Parameters:
• node (Node) – The node to remove.
• index (int) – The upper bound on the range of indices.
• bounds (Tensor) – A 2 x m-dim tensor of bounds on the objectives.
Return type: Node
reinsert(node, index, bounds)[source]
Re-inserts the node at its original position.
Re-inserts the node at its original position in all lists in [0, ‘index’] before it was removed. This method assumes that the next and previous nodes of the node that is reinserted are in the list.
Parameters:
• node (Node) – The node.
• index (int) – The upper bound on the range of indices.
• bounds (Tensor) – A 2 x m-dim tensor of bounds on the objectives.
Return type: None
## Non-dominated Partitionings
Algorithms for partitioning the non-dominated space into rectangles.
References
[Couckuyt2012] I. Couckuyt, D. Deschrijver and T. Dhaene, “Towards Efficient Multiobjective Optimization: Multiobjective statistical criterions,” 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, 2012, pp. 1-8.
class botorch.utils.multi_objective.box_decompositions.non_dominated.NondominatedPartitioning(ref_point, Y=None, alpha=0.0)[source]
Bases: BoxDecomposition
A class for partitioning the non-dominated space into hyper-cells.
Note: this assumes maximization. Internally, it multiplies outcomes by -1 and performs the decomposition under minimization. TODO: use maximization internally as well.
Note: it is only feasible to use this algorithm to compute an exact decomposition of the non-dominated space for m<5 objectives (alpha=0.0).
The alpha parameter can be increased to obtain an approximate partitioning faster. The alpha is a fraction of the total hypervolume encapsulating the entire Pareto set. When a hypercell’s volume divided by the total hypervolume is less than alpha, we discard the hypercell. See Figure 2 in [Couckuyt2012] for a visual representation.
This PyTorch implementation of the binary partitioning algorithm ([Couckuyt2012]) is adapted from the numpy/tensorflow implementation at: https://github.com/GPflow/GPflowOpt/blob/master/gpflowopt/pareto.py.
TODO: replace this with a more efficient decomposition. E.g. https://link.springer.com/content/pdf/10.1007/s10898-019-00798-7.pdf
Initialize NondominatedPartitioning.
Parameters:
• ref_point (Tensor) – A m-dim tensor containing the reference point.
• Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
• alpha (float) – A threshold fraction of total volume used in an approximate decomposition.
Example
>>> bd = NondominatedPartitioning(ref_point, Y=Y1)
get_hypercell_bounds()[source]
Get the bounds of each hypercell in the decomposition.
Parameters: ref_point – A (batch_shape) x m-dim tensor containing the reference point.
Returns: A 2 x num_cells x m-dim tensor containing the lower and upper vertices bounding each hypercell.
Return type: Tensor
training: bool
class botorch.utils.multi_objective.box_decompositions.non_dominated.FastNondominatedPartitioning(ref_point, Y=None)[source]
Bases: FastPartitioning
A class for partitioning the non-dominated space into hyper-cells.
Note: this assumes maximization. Internally, it multiplies by -1 and performs the decomposition under minimization.
This class is far more efficient than NondominatedPartitioning for exact box partitionings.
This class uses a two-step approach similar to that in [Yang2019], where:
1. first, Alg 1 from [Lacour17] is used to find the local lower bounds for the maximization problem
2. second, the local lower bounds are used as the Pareto frontier for the minimization problem, and [Lacour17] is applied again to partition the space dominated by that Pareto frontier.
Initialize FastNondominatedPartitioning.
Parameters:
• ref_point (Tensor) – A m-dim tensor containing the reference point.
• Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
Example
>>> bd = FastNondominatedPartitioning(ref_point, Y=Y1)
training: bool
## Pareto
botorch.utils.multi_objective.pareto.is_non_dominated(Y, deduplicate=True)[source]
Computes the non-dominated front.
Note: this assumes maximization.
For small n, this method uses a highly parallel methodology that compares all pairs of points in Y. However, this is memory intensive and slow for large n. For large n (or if Y is larger than 5MB), this method will dispatch to a loop-based approach that is faster and has a lower memory footprint.
Parameters:
• Y (Tensor) – A (batch_shape) x n x m-dim tensor of outcomes.
• deduplicate (bool) – A boolean indicating whether to only return unique points on the pareto frontier.
Returns: A (batch_shape) x n-dim boolean tensor indicating whether each point is non-dominated.
Return type: Tensor
## Scalarization
Helper utilities for constructing scalarizations.
References
[Knowles2005] J. Knowles, “ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems,” in IEEE Transactions on Evolutionary Computation, vol. 10, no. 1, pp. 50-66, Feb. 2006.
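Before the scalarization helper below, a quick illustration of is_non_dominated from the Pareto section above (a hedged sketch; the example tensor is invented):
Example
>>> import torch
>>> from botorch.utils.multi_objective.pareto import is_non_dominated
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
>>> mask = is_non_dominated(Y)  # the first two points dominate (0.5, 0.5)
>>> pareto_Y = Y[mask]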
botorch.utils.multi_objective.scalarization.get_chebyshev_scalarization(weights, Y, alpha=0.05)[source]
Construct an augmented Chebyshev scalarization.
Augmented Chebyshev scalarization: objective(y) = min(w * y) + alpha * sum(w * y)
Outcomes are first normalized to [0,1] for maximization (or [-1,0] for minimization) and then an augmented Chebyshev scalarization is applied.
Note: this assumes maximization of the augmented Chebyshev scalarization. Minimizing/maximizing an objective is supported by passing a negative/positive weight for that objective. To make all w * y terms have positive sign such that they are comparable when computing min(w * y), outcomes of minimization objectives are shifted from [0,1] to [-1,0].
See [Knowles2005] for details.
This scalarization can be used with qExpectedImprovement to implement q-ParEGO as proposed in [Daulton2020qehvi].
Parameters:
• weights (Tensor) – A m-dim tensor of weights. Positive for maximization and negative for minimization.
• Y (Tensor) – A n x m-dim tensor of observed outcomes, which are used for scaling the outcomes to [0,1] or [-1,0].
• alpha (float) – Parameter governing the influence of the weighted-sum term. The default value comes from [Knowles2005].
Returns: Transform function using the objective weights.
Return type: Callable[[Tensor, Optional[Tensor]], Tensor]
Example
>>> weights = torch.tensor([0.75, -0.25])
>>> transform = get_chebyshev_scalarization(weights, Y)
## Multivariate Gaussian Probabilities via Bivariate Conditioning
Bivariate conditioning algorithm for approximating Gaussian probabilities, see [Genz2016numerical] and [Trinh2015bivariate].
References
[Trinh2015bivariate] G. Trinh and A. Genz. Bivariate conditioning approximations for multivariate normal probabilities. Statistics and Computing, 2015.
[Genz2016numerical] A. Genz and G. Trinh. Numerical Computation of Multivariate Normal Probabilities using Bivariate Conditioning. Monte Carlo and Quasi-Monte Carlo Methods, 2016.
[Gibson1994monte] G. J. Gibson, C. A. Glasbey, and D. A. Elston. Monte Carlo evaluation of multivariate normal integrals and sensitivity to variate ordering. Advances in Numerical Methods and Applications, 1994.
class botorch.utils.probability.mvnxpb.mvnxpbState(*args, **kwargs)[source]
Bases: dict
step: int
perm: LongTensor
bounds: Tensor
piv_chol: PivotedCholesky
plug_ins: Tensor
log_prob: Tensor
log_prob_extra: Optional[Tensor]
class botorch.utils.probability.mvnxpb.MVNXPB(covariance_matrix, bounds)[source]
Bases: object
An algorithm for approximating Gaussian probabilities P(X in bounds), where X ~ N(0, covariance_matrix).
Initializes an MVNXPB instance.
Parameters:
• covariance_matrix (Tensor) – Covariance matrices of shape batch_shape x [n, n].
• bounds (Tensor) – Tensor of lower and upper bounds, batch_shape x [n, 2]. These bounds are standardized internally and clipped to STANDARDIZED_RANGE.
classmethod build(step, perm, bounds, piv_chol, plug_ins, log_prob, log_prob_extra=None)[source]
Creates an MVNXPB instance from raw arguments. Unlike MVNXPB.__init__, this method does not preprocess or copy terms.
Parameters:
• step (int) – Integer used to track the solver’s progress.
• perm (Tensor) –
• bounds (Tensor) – Tensor of lower and upper bounds, batch_shape x [n, 2].
• piv_chol (PivotedCholesky) – A PivotedCholesky instance for the system.
• plug_ins (Tensor) – Tensor of plug-in estimators used to update lower and upper bounds on random variables that have yet to be integrated out.
• log_prob (Tensor) – Tensor of log probabilities.
• log_prob_extra (Optional[Tensor]) – Tensor of conditional log probabilities for the next random variable. Used when integrating over an odd number of random variables.
Return type: MVNXPB
solve(num_steps=None, eps=1e-10)[source]
Runs the MVNXPB solver instance for a fixed number of steps.
Calculates a bivariate conditional approximation to P(X in bounds), where X ~ N(0, Σ). For details, see [Genz2016numerical] or [Trinh2015bivariate].
Parameters:
• num_steps (Optional[int]) –
• eps (float) –
Return type: Tensor
select_pivot()[source]
GGE variable prioritization strategy from [Gibson1994monte].
Returns the index of the random variable least likely to satisfy its bounds when conditioning on the previously integrated random variables X[:t-1] attaining the values of plug-in estimators y[:t-1]. Equivalently, argmin_{i = t, ..., n} P(X[i] ∈ bounds[i] | X[:t-1] = y[:t-1]), where t denotes the current step.
Return type: Optional[LongTensor]
pivot_(pivot)[source]
Swap random variables at pivot and step positions.
Parameters: pivot (LongTensor) –
Return type: None
concat(other, dim)[source]
Parameters:
• other (MVNXPB) –
• dim (int) –
Return type: MVNXPB
expand(*sizes)[source]
Parameters: sizes (int) –
Return type: MVNXPB
augment(covariance_matrix, bounds, cross_covariance_matrix, disable_pivoting=False, jitter=None, max_tries=None)[source]
Augment an n-dimensional MVNXPB instance to include m additional random variables.
Parameters:
• covariance_matrix (Tensor) –
• bounds (Tensor) –
• cross_covariance_matrix (Tensor) –
• disable_pivoting (bool) –
• jitter (Optional[float]) –
• max_tries (Optional[int]) –
Return type: MVNXPB
detach()[source]
Return type: MVNXPB
clone()[source]
Return type: MVNXPB
asdict()[source]
Return type: mvnxpbState
## Truncated Multivariate Normal Distribution
class botorch.utils.probability.truncated_multivariate_normal.TruncatedMultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, bounds=None, solver=None, sampler=None, validate_args=None)[source]
Bases: MultivariateNormal
Initializes an instance of a TruncatedMultivariateNormal distribution.
Let x ~ N(0, K) be an n-dimensional Gaussian random vector. This class represents the distribution of the truncated multivariate normal random vector x | a <= x <= b.
Parameters:
• loc (Tensor) – A mean vector for the distribution, batch_shape x event_shape.
• covariance_matrix (Optional[Tensor]) – Covariance matrix distribution parameter.
• precision_matrix (Optional[Tensor]) – Inverse covariance matrix distribution parameter.
• scale_tril (Optional[Tensor]) – Lower triangular, square-root covariance matrix distribution parameter.
• bounds (Tensor) – A batch_shape x event_shape x 2 tensor of strictly increasing bounds for x so that bounds[…, 0] < bounds[…, 1] everywhere.
• solver (Optional[MVNXPB]) – A pre-solved MVNXPB instance used to approximate the log partition.
• sampler (Optional[LinearEllipticalSliceSampler]) – A LinearEllipticalSliceSampler instance used for sample generation.
• validate_args (Optional[bool]) – Optional argument to super().__init__.
log_prob(value)[source]
Approximates the true log probability.
Parameters: value (Tensor) –
Return type: Tensor
rsample(sample_shape=torch.Size([]))[source]
Draw samples from the Truncated Multivariate Normal.
Parameters: sample_shape (Size) – The shape of the samples.
Returns: The (sample_shape x batch_shape x event_shape) tensor of samples.
Return type: Tensor
property log_partition: Tensor
property solver: MVNXPB
property sampler: LinearEllipticalSliceSampler
expand(batch_shape, _instance=None)[source]
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__.py, when an instance is first created.
Parameters:
• batch_shape (torch.Size) – The desired expanded size.
• _instance (Optional[TruncatedMultivariateNormal]) – New instance provided by subclasses that need to override .expand.
Returns: New distribution instance with batch dimensions expanded to batch_shape.
Return type: TruncatedMultivariateNormal
## Unified Skew Normal Distribution
class botorch.utils.probability.unified_skew_normal.UnifiedSkewNormal(trunc, gauss, cross_covariance_matrix, validate_args=None)[source]
Bases: Distribution
Unified Skew Normal distribution of Y | a < X < b for jointly Gaussian random vectors X ∈ R^m and Y ∈ R^n.
Batch shapes trunc.batch_shape and gauss.batch_shape must be broadcastable. Care should be taken when choosing trunc.batch_shape. When trunc is of lower batch dimensionality than gauss, the user should consider expanding trunc to hasten UnifiedSkewNormal.log_prob. In these cases, it is suggested that the user invoke trunc.solver before calling trunc.expand to avoid paying for multiple, identical solves.
Parameters:
• trunc (TruncatedMultivariateNormal) – Distribution of Z = (X | a < X < b) ∈ R^m.
• gauss (MultivariateNormal) – Distribution of Y ∈ R^n.
• cross_covariance_matrix (Union[Tensor, LinearOperator]) – Cross-covariance Cov(X, Y) ∈ R^{m x n}.
• validate_args (Optional[bool]) – Optional argument to super().__init__.
arg_constraints = {}
log_prob(value)[source]
Computes the log probability ln p(Y = value | a < X < b).
Parameters: value (Tensor) –
Return type: Tensor
rsample(sample_shape=torch.Size([]))[source]
Draw samples from the Unified Skew Normal.
Parameters: sample_shape (Size) – The shape of the samples.
Returns: The (sample_shape x batch_shape x event_shape) tensor of samples.
Return type: Tensor
expand(batch_shape, _instance=None)[source]
Returns a new distribution instance (or populates an existing instance provided by a derived class) with batch dimensions expanded to batch_shape. This method calls expand on the distribution’s parameters. As such, this does not allocate new memory for the expanded distribution instance. Additionally, this does not repeat any args checking or parameter broadcasting in __init__.py, when an instance is first created.
Parameters:
• batch_shape (torch.Size) – The desired expanded size.
• _instance (Optional[UnifiedSkewNormal]) – New instance provided by subclasses that need to override .expand.
Returns: New distribution instance with batch dimensions expanded to batch_shape.
Return type: UnifiedSkewNormal
property covariance_matrix: Tensor
property scale_tril: Tensor
## Bivariate Normal Probabilities and Statistics
Methods for computing bivariate normal probabilities and statistics.
References
[Genz2004bvnt] A. Genz. Numerical computation of rectangular bivariate and trivariate normal and t probabilities. Statistics and Computing, 2004.
[Muthen1990moments] B. Muthen. Moments of the censored and truncated bivariate normal distribution.
British Journal of Mathematical and Statistical Psychology, 1990.
botorch.utils.probability.bvn.bvn(r, xl, yl, xu, yu)[source]
A function for computing bivariate normal probabilities.
Calculates P(xl < x < xu, yl < y < yu) where x and y are bivariate normal with unit variance and correlation coefficient r. See Section 2.4 of [Genz2004bvnt].
This method uses a sign flip trick to improve numerical performance. Many of bvnu’s internal branches rely on evaluations Phi(-bound). For a < b < 0, the term Phi(-a) - Phi(-b) goes to zero faster than Phi(b) - Phi(a) because finfo(dtype).epsneg is typically much larger than finfo(dtype).tiny. In these cases, flipping the sign can prevent situations where bvnu(…) - bvnu(…) would otherwise be zero due to round-off error.
Parameters:
• r (Tensor) – Tensor of correlation coefficients.
• xl (Tensor) – Tensor of lower bounds for x, same shape as r.
• yl (Tensor) – Tensor of lower bounds for y, same shape as r.
• xu (Tensor) – Tensor of upper bounds for x, same shape as r.
• yu (Tensor) – Tensor of upper bounds for y, same shape as r.
Returns: Tensor of probabilities P(xl < x < xu, yl < y < yu).
Return type: Tensor
botorch.utils.probability.bvn.bvnu(r, h, k)[source]
Solves for P(x > h, y > k) where x and y are standard bivariate normal random variables with correlation coefficient r. In [Genz2004bvnt], this is equation (1):
L(h, k, r) = P(x < -h, y < -k) = 1/(2 pi a) int_{h}^{infty} int_{k}^{infty} f(x, y, r) dy dx,
where f(x, y, r) = e^{-1/(2a^2) (x^2 - 2rxy + y^2)} and a = (1 - r^2)^{1/2}.
[Genz2004bvnt] reports that the following integration scheme incurs a maximum of 5e-16 error when run in double precision: if |r| >= 0.925, use a 20-point quadrature rule on a 5th order Taylor expansion; else, numerically integrate in polar coordinates using no more than 20 quadrature points.
Parameters:
• r (Tensor) – Tensor of correlation coefficients.
• h (Tensor) – Tensor of negative upper bounds for x, same shape as r.
• k (Tensor) – Tensor of negative upper bounds for y, same shape as r.
Returns: A tensor of probabilities P(x > h, y > k).
Return type: Tensor
botorch.utils.probability.bvn.bvnmom(r, xl, yl, xu, yu, p=None)[source]
Computes the expected values of truncated, bivariate normal random variables.
Let x and y be a pair of standard bivariate normal random variables having correlation r. This function computes E([x,y] | [xl,yl] < [x,y] < [xu,yu]).
Following [Muthen1990moments] equations (4) and (5), we have
E(x | [xl, yl] < [x, y] < [xu, yu]) = Z^{-1} (phi(xl) P(yl < y < yu | x=xl) - phi(xu) P(yl < y < yu | x=xu)),
where Z = P([xl, yl] < [x, y] < [xu, yu]) and phi is the standard normal PDF.
Parameters:
• r (Tensor) – Tensor of correlation coefficients.
• xl (Tensor) – Tensor of lower bounds for x, same shape as r.
• xu (Tensor) – Tensor of upper bounds for x, same shape as r.
• yl (Tensor) – Tensor of lower bounds for y, same shape as r.
• yu (Tensor) – Tensor of upper bounds for y, same shape as r.
• p (Optional[Tensor]) – Tensor of probabilities P(xl < x < xu, yl < y < yu), same shape as r.
Returns: E(x | [xl, yl] < [x, y] < [xu, yu]) and E(y | [xl, yl] < [x, y] < [xu, yu]).
Return type: Tuple[Tensor, Tensor]
## Elliptic Slice Sampler with Linear Constraints
Linear Elliptical Slice Sampler.
References
[Gessner2020] A. Gessner, O. Kanjilal, and P. Hennig. Integrals over Gaussians under linear domain constraints. AISTATS 2020.
This implementation is based (with multiple changes / optimizations) on the following implementations of the algorithm in [Gessner2020]:
https://github.com/alpiges/LinConGauss
https://github.com/wjmaddox/pytorch_ess
class botorch.utils.probability.lin_ess.LinearEllipticalSliceSampler(inequality_constraints=None, bounds=None, interior_point=None, mean=None, covariance_matrix=None, covariance_root=None)[source]
Bases: PolytopeSampler
Linear Elliptical Slice Sampler.
TODOs:
- clean up docstrings
- optimize computations (if possible)
Maybe TODOs:
- Support degenerate domains (with zero volume)?
- Add batch support?
Initialize LinearEllipticalSliceSampler.
Parameters:
• inequality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (A, b) describing inequality constraints A @ x <= b, where A is an n_ineq_con x d-dim Tensor and b is an n_ineq_con x 1-dim Tensor, with n_ineq_con the number of inequalities and d the dimension of the sample space. If omitted, must provide bounds instead.
• bounds (Optional[Tensor]) – A 2 x d-dim tensor of box bounds. If omitted, must provide inequality_constraints instead.
• interior_point (Optional[Tensor]) – A d x 1-dim Tensor representing a point in the (relative) interior of the polytope. If omitted, an interior point is determined automatically by solving a Linear Program. Note: It is crucial that the point lie in the interior of the feasible set (rather than on the boundary), otherwise the sampler will produce invalid samples.
• mean (Optional[Tensor]) – The d x 1-dim mean of the MVN distribution (if omitted, use zero).
• covariance_matrix (Optional[Tensor]) – The d x d-dim covariance matrix of the MVN distribution (if omitted, use the identity).
• covariance_root (Optional[Tensor]) – A d x k-dim root of the covariance matrix such that covariance_root @ covariance_root.T = covariance_matrix.
This sampler samples from a multivariate Normal N(mean, covariance_matrix) subject to linear domain constraints A x <= b (intersected with box bounds, if provided).
draw(n=1)[source]
Draw samples.
Parameters: n (int) – The number of samples.
Returns: A n x d-dim tensor of n samples.
Return type: Tuple[Tensor, Tensor]
step()[source]
Take a step, return the new sample, and update the internal state.
Returns: A d x 1-dim sample from the domain.
Return type: Tensor
## Linear Algebra Helpers
botorch.utils.probability.linalg.block_matrix_concat(blocks)[source]
Parameters: blocks (Sequence[Sequence[Tensor]]) –
Return type: Tensor
botorch.utils.probability.linalg.augment_cholesky(Laa, Kbb, Kba=None, Lba=None, jitter=None)[source]
Computes the Cholesky factor of a block matrix K = [[Kaa, Kab], [Kba, Kbb]] based on a precomputed Cholesky factor Kaa = Laa Laa^T.
Parameters:
• Laa (Tensor) – Cholesky factor of K’s upper left block.
• Kbb (Tensor) – Lower-right block of K.
• Kba (Optional[Tensor]) – Lower-left block of K.
• Lba (Optional[Tensor]) – Precomputed solve Kba Laa^{-T}.
• jitter (Optional[float]) – Optional nugget to be added to the diagonal of Kbb.
Return type: Tensor
class botorch.utils.probability.linalg.PivotedCholesky(step: 'int', tril: 'Tensor', perm: 'LongTensor', diag: 'Optional[Tensor]' = None, validate_init: 'InitVar[bool]' = True)[source]
Bases: object
Parameters:
• step (int) –
• tril (Tensor) –
• perm (LongTensor) –
• diag (Optional[Tensor]) –
• validate_init (InitVar[bool]) –
step: int
tril: Tensor
perm: LongTensor
diag: Optional[Tensor] = None
validate_init: InitVar[bool] = True
update_(eps=1e-10)[source]
Performs a single matrix decomposition step.
Parameters: eps (float) –
Return type: None
pivot_(pivot)[source]
Parameters: pivot (LongTensor) –
Return type: None
expand(*sizes)[source]
Parameters: sizes (int) –
Return type: PivotedCholesky
concat(other, dim=0)[source]
Parameters:
• other (PivotedCholesky) –
• dim (int) –
Return type: PivotedCholesky
detach()[source]
Return type: PivotedCholesky
clone()[source]
Return type: PivotedCholesky
## Probability Helpers
botorch.utils.probability.utils.case_dispatcher(out, cases=(), default=None)[source]
Basic implementation of a tensorized switching case statement.
Parameters:
• out (Tensor) – Tensor to which case outcomes are written.
• cases (Iterable[Tuple[Callable[[], BoolTensor], Callable[[BoolTensor], Tensor]]]) – Iterable of function pairs (pred, func), where mask=pred() specifies whether func is applicable for each entry in out. Note that cases are resolved first-come, first-served.
• default (Optional[Callable[[BoolTensor], Tensor]]) – Optional func to which all unclaimed entries of out are dispatched.
Return type: Tensor
botorch.utils.probability.utils.get_constants(values, device=None, dtype=None)[source]
Returns scalar-valued Tensors containing each of the given constants. Used to expedite tensor operations involving scalar arithmetic. Note that the returned Tensors should not be modified in-place.
Parameters:
• values (Union[Number, Iterator[Number]]) –
• device (Optional[device]) –
• dtype (Optional[dtype]) –
Return type: Union[Tensor, Tuple[Tensor, …]]
botorch.utils.probability.utils.get_constants_like(values, ref)[source]
Parameters:
• values (Union[Number, Iterator[Number]]) –
• ref (Tensor) –
Return type: Union[Tensor, Iterator[Tensor]]
botorch.utils.probability.utils.gen_positional_indices(shape, dim, device=None)[source]
Parameters:
• shape (Size) –
• dim (int) –
• device (Optional[device]) –
Return type: Iterator[LongTensor]
botorch.utils.probability.utils.build_positional_indices(shape, dim, device=None)[source]
Parameters:
• shape (Size) –
• dim (int) –
• device (Optional[device]) –
Return type: LongTensor
botorch.utils.probability.utils.leggauss(deg, **tkwargs)[source]
Parameters:
• deg (int) –
• tkwargs (Any) –
Return type: Tuple[Tensor, Tensor]
botorch.utils.probability.utils.ndtr(x)[source]
Standard normal CDF.
Parameters: x (Tensor) –
Return type: Tensor
botorch.utils.probability.utils.phi(x)[source]
Standard normal PDF.
Parameters: x (Tensor) –
Return type: Tensor
botorch.utils.probability.utils.swap_along_dim_(values, i, j, dim, buffer=None)[source]
Swaps Tensor slices in-place along dimension dim.
When passed as Tensors, i (and j) should be dim-dimensional tensors with the same shape as values.shape[:dim]. The exception to this rule occurs when dim=0, in which case i (and j) should be (at most) one-dimensional when passed as a Tensor.
Parameters:
• values (Tensor) – Tensor whose values are to be swapped.
• i (Union[int, LongTensor]) – Indices for slices along dimension dim.
• j (Union[int, LongTensor]) – Indices for slices along dimension dim.
• dim (int) – The dimension of values along which to swap slices.
• buffer (Optional[Tensor]) – Optional buffer used internally to store copied values.
Returns: The original values tensor.
Return type: Tensor
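To close the section, a small, hedged usage sketch of the ndtr and phi helpers documented above (the noted values are approximate):
Example
>>> import torch
>>> from botorch.utils.probability.utils import ndtr, phi
>>> x = torch.zeros(1)
>>> ndtr(x)  # standard normal CDF at 0, i.e. 0.5
>>> phi(x)   # standard normal PDF at 0, roughly 0.3989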
2023-01-28 01:20:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5099730491638184, "perplexity": 11869.331212274288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00375.warc.gz"}
https://gamedev.stackexchange.com/questions/50479/premultiplied-alpha-and-alpha-testing
# Premultiplied Alpha And Alpha Testing

I have a shader that is supposed to work with either alpha blending or alpha testing, but the color values being passed in are premultiplied alpha values. Is there an easy/standard way to have it produce the "correct" results for both alpha blending and alpha testing when using premultiplied alpha?

For example, if the final result of my shader is RGBA(1,0,0,0.75) straight alpha and therefore RGBA(0.75,0,0,0.75) premultiplied alpha, the result should remain RGBA(1,0,0,0.75) when alpha testing.

Perhaps one option is "dividing out" the alpha channel if we are alpha testing, i.e. divide the R by A above so that we get 0.75/0.75 = 1.0, our original red value. But this becomes undefined when the alpha goes to 0.

Maybe I can clarify with an example: Say the back buffer is RGBA(1,1,1,1), and I am rendering with a partially transparent green color RGBA(0,1,0,0.75) (straight alpha). The color is passed in as premultiplied alpha as RGBA(0,0.75,0,0.75). When alpha blending, I blend using one + invsrcalpha and get a final color of RGBA(0.25,1,0.25,1). So my green channel stays at 100%, because the back buffer had green at 100%, and so did my original green color, so there is no way to get less than 100% on the green channel.

When alpha testing, there is no blending going on. Instead, my green color will be rendered straight to the back buffer as RGBA(0,0.75,0,1). But the color I wanted was RGBA(0,1,0,1).

• How does a different value in the R channel affect alpha testing? Alpha is the same in both cases. Also don't forget that you can use clip() in the shader to get the same effect as the render state. – Adam Mar 5 '13 at 19:48
• Say I have a green tree alpha cutout. In the center, the tree might be RGBA(0,1,0,1), but if it's antialiased around the edges of the cutout, some pixels might be RGBA(0,0.75,0,0.75) in premultiplied alpha. However, since I'm alpha testing and not alpha blending, I want the green color to stay 100% green and not get dark around the edges. Maybe there's a fundamental flaw in the way I'm imagining that alpha cutouts and anti-aliasing should work? – default Mar 5 '13 at 21:52
• It shouldn't look dark if your alpha blending is set up right - you want the source blend factor to be one and not inverse source alpha. blogs.msdn.com/b/shawnhar/archive/2009/11/06/… explains the process well. – Adam Mar 5 '13 at 23:22
• Right, I've read that article. But that applies to alpha blending. I know that alpha blending will work right and not look dark, but when alpha testing, the alpha channel written will be 100%. So I have a whole tree that is 100% green except around the edges it will be 75% green but 100% opaque. – default Mar 5 '13 at 23:52
• You can't divide by A==0, but when A==0 you're not drawing anything anyway. So why does it matter? – Nevermind Mar 6 '13 at 4:44
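To make the arithmetic in the question concrete, here is a small Python sketch (not shader code; the helper names are my own) that reproduces the numbers above: premultiplied-alpha blending with source factor ONE and destination factor INV_SRC_ALPHA, plus the "divide out alpha" recovery with a guard for zero alpha.

```python
def blend_premultiplied(src, dst):
    """ONE + INV_SRC_ALPHA blend: out = src + (1 - src.a) * dst."""
    a = src[3]
    return tuple(s + (1.0 - a) * d for s, d in zip(src, dst))

def unpremultiply(color, eps=1e-8):
    """Recover straight-alpha color; undefined when alpha is ~0, so guard it."""
    r, g, b, a = color
    if a < eps:
        return (0.0, 0.0, 0.0, 0.0)  # nothing visible; any choice is defensible
    return (r / a, g / a, b / a, a)

dst = (1.0, 1.0, 1.0, 1.0)            # white back buffer
src_pma = (0.0, 0.75, 0.0, 0.75)      # premultiplied RGBA(0,1,0,0.75)
print(blend_premultiplied(src_pma, dst))  # -> (0.25, 1.0, 0.25, 1.0), as in the question
print(unpremultiply(src_pma))             # -> (0.0, 1.0, 0.0, 0.75)
```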
2020-02-25 00:59:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43836215138435364, "perplexity": 2344.0774055045085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00403.warc.gz"}
https://www.ademcetinkaya.com/2023/01/ldr-lode-resources-ltd.html
Outlook: LODE RESOURCES LTD is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy: Sell
Time series to forecast n: 27 Jan 2023 for (n+4 weeks)
Methodology: Modular Neural Network (CNN Layer)
## Abstract
LODE RESOURCES LTD prediction model is evaluated with Modular Neural Network (CNN Layer) and Wilcoxon Sign-Rank Test1,2,3,4 and it is concluded that the LDR stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Sell
## Key Points
1. How do you decide buy or sell a stock?
2. Market Risk
3. Nash Equilibria
## LDR Target Price Prediction Modeling Methodology
We consider LODE RESOURCES LTD Decision Process with Modular Neural Network (CNN Layer) where A is the set of discrete actions of LDR stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
$$F(\text{Wilcoxon Sign-Rank Test})^{5,6,7} = \begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ & \vdots & \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ & \vdots & \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ & \vdots & \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times R(\text{Modular Neural Network (CNN Layer)}) \times S(n) \to (n+4\ \text{weeks}), \qquad \vec{S} = (s_1, s_2, s_3)$$
n: Time series to forecast
p: Price signals of LDR stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price
For further technical information on how our model works, we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work?
## LDR Stock Forecast (Buy or Sell) for (n+4 weeks)
Sample Set: Neural Network
Stock/Index: LDR LODE RESOURCES LTD
Time series to forecast n: 27 Jan 2023 for (n+4 weeks)
According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Sell
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for LODE RESOURCES LTD
1. For the purposes of applying the requirement in paragraph 5.7.7(a), credit risk is different from asset-specific performance risk. Asset-specific performance risk is not related to the risk that an entity will fail to discharge a particular obligation but instead it is related to the risk that a single asset or a group of assets will perform poorly (or not at all).
2. Sales that occur for other reasons, such as sales made to manage credit concentration risk (without an increase in the assets' credit risk), may also be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows. In particular, such sales may be consistent with a business model whose objective is to hold financial assets in order to collect contractual cash flows if those sales are infrequent (even if significant in value) or insignificant in value both individually and in aggregate (even if frequent). If more than an infrequent number of such sales are made out of a portfolio and those sales are more than insignificant in value (either individually or in aggregate), the entity needs to assess whether and how such sales are consistent with an objective of collecting contractual cash flows.
Whether a third party imposes the requirement to sell the financial assets, or that activity is at the entity's discretion, is not relevant to this assessment. An increase in the frequency or value of sales in a particular period is not necessarily inconsistent with an objective to hold financial assets in order to collect contractual cash flows, if an entity can explain the reasons for those sales and demonstrate why those sales do not reflect a change in the entity's business model. In addition, sales may be consistent with the objective of holding financial assets in order to collect contractual cash flows if the sales are made close to the maturity of the financial assets and the proceeds from the sales approximate the collection of the remaining contractual cash flows.
3. In the reporting period that includes the date of initial application of these amendments, an entity is not required to present the quantitative information required by paragraph 28(f) of IAS 8.
4. An entity has not retained control of a transferred asset if the transferee has the practical ability to sell the transferred asset. An entity has retained control of a transferred asset if the transferee does not have the practical ability to sell the transferred asset. A transferee has the practical ability to sell the transferred asset if it is traded in an active market because the transferee could repurchase the transferred asset in the market if it needs to return the asset to the entity. For example, a transferee may have the practical ability to sell a transferred asset if the transferred asset is subject to an option that allows the entity to repurchase it, but the transferee can readily obtain the transferred asset in the market if the option is exercised. A transferee does not have the practical ability to sell the transferred asset if the entity retains such an option and the transferee cannot readily obtain the transferred asset in the market if the entity exercises its option.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
LODE RESOURCES LTD is assigned short-term Ba1 & long-term Ba1 estimated rating. LODE RESOURCES LTD prediction model is evaluated with Modular Neural Network (CNN Layer) and Wilcoxon Sign-Rank Test1,2,3,4 and it is concluded that the LDR stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Sell
### LDR LODE RESOURCES LTD Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Ba2 | C |
| Balance Sheet | Caa2 | Caa2 |
| Leverage Ratios | Caa2 | Baa2 |
| Cash Flow | B3 | B2 |
| Rates of Return and Profitability | Caa2 | Caa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand the financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 75 out of 100 with 644 signals.
## References
1. N. Bäuerle and J. Ott.
Markov decision processes with average-value-at-risk criteria. Mathematical Methods of Operations Research, 74(3):361–379, 2011.
2. Dimakopoulou M, Athey S, Imbens G. 2017. Estimation considerations in contextual bandits. arXiv:1711.07077 [stat.ML]
3. Y. Le Tallec. Robust, risk-sensitive, and data-driven control of Markov decision processes. PhD thesis, Massachusetts Institute of Technology, 2007.
4. M. Petrik and D. Subramanian. An approximate solution method for large risk-averse Markov decision processes. In Proceedings of the 28th International Conference on Uncertainty in Artificial Intelligence, 2012.
5. M. J. Hausknecht. Cooperation and Communication in Multiagent Deep Reinforcement Learning. PhD thesis, The University of Texas at Austin, 2016.
6. Armstrong, J. S. and M. C. Grohman (1972), "A comparative study of methods for long-range market forecasting," Management Science, 19, 211–221.
7. Challen, D. W. and A. J. Hagger (1983), Macroeconomic Systems: Construction, Validation and Applications. New York: St. Martin's Press.
Frequently Asked Questions
Q: What is the prediction methodology for LDR stock?
A: LDR stock prediction methodology: We evaluate the prediction models Modular Neural Network (CNN Layer) and Wilcoxon Sign-Rank Test.
Q: Is LDR stock a buy or sell?
A: The dominant strategy among neural network is to Sell LDR Stock.
Q: Is LODE RESOURCES LTD stock a good investment?
A: The consensus rating for LODE RESOURCES LTD is Sell and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of LDR stock?
A: The consensus rating for LDR is Sell.
Q: What is the prediction period for LDR stock?
A: The prediction period for LDR is (n+4 weeks).
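The page names a Wilcoxon sign-rank (signed-rank) test but shows no code. As a purely illustrative, hypothetical sketch (the data here are random and this is not the site's actual pipeline), SciPy's scipy.stats.wilcoxon can compare paired forecast errors from two models:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
errors_model_a = rng.normal(0.0, 1.0, size=30)                  # hypothetical forecast errors
errors_model_b = errors_model_a + rng.normal(0.2, 0.5, size=30)  # a slightly biased competitor

# Paired test: do the two error series differ systematically?
stat, p_value = wilcoxon(errors_model_a, errors_model_b)
print(f"statistic={stat:.2f}, p-value={p_value:.4f}")
```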
2023-04-02 02:45:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39056098461151123, "perplexity": 4801.898377216908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00452.warc.gz"}
https://nrich.maths.org/public/topic.php?code=5005&cl=2&cldcmpid=8065
Resources tagged with: Properties of numbers
There are 60 results.
Age 11 to 14 Challenge Level: If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
Always, Sometimes or Never? Number Age 7 to 11 Challenge Level: Are these statements always true, sometimes true or never true?
Special Numbers Age 11 to 14 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be?
Age 11 to 14 Challenge Level: Visitors to Earth from the distant planet of Zub-Zorna were amazed when they found out that when the digits in this multiplication were reversed, the answer was the same! Find a way to explain…
Take One Example Age 5 to 11: This article introduces the idea of generic proof for younger children and illustrates how one example can offer a proof of a general result through unpacking its underlying structure.
Three Neighbours Age 7 to 11 Challenge Level: Look at three 'next door neighbours' amongst the counting numbers. Add them together. What do you notice?
Magic Letters Age 11 to 14 Challenge Level: Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
Escape from the Castle Age 7 to 11 Challenge Level: Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
Magic Crosses Age 7 to 14 Challenge Level: Can you find examples of magic crosses? Can you find all the possibilities?
Unlocking the Case Age 7 to 11 Challenge Level: A case is found with a combination lock. There is one clue about the number needed to open the case. Can you find the number and open the case?
Got it Article Age 7 to 14: This article gives you a few ideas for understanding the Got It! game and how you might find a winning strategy.
Like Powers Age 11 to 14 Challenge Level: Investigate $1^n + 19^n + 20^n + 51^n + 57^n + 80^n + 82^n$ and $2^n + 12^n + 31^n + 40^n + 69^n + 71^n + 85^n$ for different values of n.
Even So Age 11 to 14 Challenge Level: Find some triples of whole numbers a, b and c such that a^2 + b^2 + c^2 is a multiple of 4. Is it necessarily the case that a, b and c must all be even? If so, can you explain why?
Cinema Problem Age 11 to 14 Challenge Level: A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children.
Oh! Hidden Inside? Age 11 to 14 Challenge Level: Find the number which has 8 divisors, such that the product of the divisors is 331776.
Arrange the Digits Age 11 to 14 Challenge Level: Can you arrange the digits 1,2,3,4,5,6,7,8,9 into three 3-digit numbers such that their total is close to 1500?
Age 7 to 14 Challenge Level: I added together some of my neighbours' house numbers. Can you explain the patterns I noticed?
Slippy Numbers Age 11 to 14 Challenge Level: The number 10112359550561797752808988764044943820224719 is called a 'slippy number' because, when the last digit 9 is moved to the front, the new number produced is the slippy number multiplied by 9.
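The slippy-number claim that closes the list above is easy to check mechanically; a tiny Python sketch (illustrative only):

```python
n = 10112359550561797752808988764044943820224719
rotated = int("9" + str(n)[:-1])  # move the trailing 9 to the front
assert str(n)[-1] == "9" and rotated == 9 * n
print("slippy: moving the last digit to the front multiplies the number by 9")
```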
The Patent Solution Age 11 to 14 Challenge Level: A combination mechanism for a safe comprises thirty-two tumblers numbered from one to thirty-two in such a way that the numbers in each wheel total 132... Could you open the safe?
Chameleons Age 11 to 14 Challenge Level: Whenever two chameleons of different colours meet they change colour to the third colour. Describe the shortest sequence of meetings in which all the chameleons change to green if you start with 12…
Counting Factors Age 11 to 14 Challenge Level: Is there an efficient way to work out how many factors a large number has?
Mini-max Age 11 to 14 Challenge Level: Consider all two digit numbers (10, 11, …, 99). In writing down all these numbers, which digits occur least often, and which occur most often? What about three digit numbers, four digit numbers…
Times Right Age 11 to 16 Challenge Level: Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, two two-digit numbers are multiplied to give a four-digit number, so that the expression is correct. How many different solutions can you find?
Unit Fractions Age 11 to 14 Challenge Level: Consider the equation 1/a + 1/b + 1/c = 1 where a, b and c are natural numbers and 0 < a < b < c. Prove that there is only one set of values which satisfy this equation.
Summing Consecutive Numbers Age 11 to 14 Challenge Level: 15 = 7 + 8 and 10 = 1 + 2 + 3 + 4. Can you say which numbers can be expressed as the sum of two or more consecutive integers?
Alphabet Soup Age 11 to 14 Challenge Level: This challenge is to make up YOUR OWN alphanumeric. Each letter represents a digit and where the same letter appears more than once it must represent the same digit each time.
Which Numbers? (2) Age 7 to 11 Challenge Level: I am thinking of three sets of numbers less than 101. Can you find all the numbers in each set from these clues?
Four Coloured Lights Age 11 to 14 Challenge Level: Imagine a machine with four coloured lights which respond to different rules. Can you find the smallest possible number which will make all four colours light up?
Clever Carl Age 7 to 14: What would you do if your teacher asked you to add all the numbers from 1 to 100? Find out how Carl Gauss responded when he was asked to do just that.
Thirty Six Exactly Age 11 to 14 Challenge Level: The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
One to Eight Age 11 to 14 Challenge Level: Complete the following expressions so that each one gives a four digit number as the product of two two digit numbers and uses the digits 1 to 8 once and only once.
Cogs Age 11 to 14 Challenge Level: A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the…
Table Patterns Go Wild! Age 7 to 11 Challenge Level: Nearly all of us have made table patterns on hundred squares, that is 10 by 10 grids. This problem looks at the patterns on differently sized square grids.
Prime Magic Age 7 to 16 Challenge Level: Place the numbers 1, 2, 3, ..., 9 one on each square of a 3 by 3 grid so that all the rows and columns add up to a prime number. How many different solutions can you find?
X Marks the Spot Age 11 to 14 Challenge Level: When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
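The "X Marks the Spot" puzzle above lends itself to a brute-force search; a short Python sketch (the variable names are my own):

```python
# Find five-digit numbers of the form x1xxx whose product with 417
# matches the pattern 9xxx057.
for n in range(10000, 100000):
    if str(n)[1] != "1":
        continue
    m = str(n * 417)
    if len(m) == 7 and m[0] == "9" and m.endswith("057"):
        print(n, "* 417 =", m)
```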
Satisfying Statements Age 11 to 14 Challenge Level: Can you find any two-digit numbers that satisfy all of these statements?

Numbers Numbers Everywhere! Age 5 to 11: Bernard Bagnall recommends some primary school problems which use numbers from the environment around us, from clocks to house numbers.

Which Numbers? (1) Age 7 to 11 Challenge Level: I am thinking of three sets of numbers less than 101. They are the red set, the green set and the blue set. Can you find all the numbers in the sets from these clues?

Guess the Dominoes Age 7 to 14 Challenge Level: This task depends on learners sharing reasoning, listening to opinions, reflecting and pulling ideas together.

Lesser Digits Age 11 to 14 Challenge Level: How many positive integers less than or equal to 4000 can be written down without using the digits 7, 8 or 9?

Guess the Dominoes for Two Age 7 to 14 Challenge Level: Guess the Dominoes for child and adult. Work out which domino your partner has chosen by asking good questions.

Light the Lights Again Age 7 to 11 Challenge Level: Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights?

The Codabar Check Age 11 to 14: This article explains how credit card numbers are defined and how the check digit serves to verify their accuracy.

Sort Them Out (2) Age 7 to 11 Challenge Level: Can you each work out the number on your card? What do you notice? How could you sort the cards?

28 and It's Upward and Onward Age 7 to 11 Challenge Level: Can you find ways of joining cubes together so that 28 faces are visible?

Elevenses Age 11 to 14 Challenge Level: How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results?

Not a Polite Question Age 11 to 14 Short Challenge Level: When asked how old she was, the teacher replied: My age in years is not prime but odd and when reversed and added to my age you have a perfect square...

Water Lilies Age 11 to 14 Challenge Level: There are some water lilies in a lake. The area that they cover doubles in size every day. After 17 days the whole lake is covered. How long did it take them to cover half the lake?
2020-07-11 14:39:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2793739140033722, "perplexity": 1139.01472705535}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655933254.67/warc/CC-MAIN-20200711130351-20200711160351-00510.warc.gz"}
https://math.libretexts.org/Bookshelves/Geometry/Euclidean_Plane_and_its_Relatives_(Petrunin)/01%3A_Preliminaries/1.10%3A_Congruent_triangles
# 1.10: Congruent triangles

Our next goal is to give a rigorous meaning for (iv) in Section 1.1. To do this, we introduce the notion of congruent triangles, so instead of “if we rotate or shift we will not see the difference” we say that for triangles, the side-angle-side congruence holds; that is, two triangles are congruent if they have two pairs of equal sides and the same angle measure between these sides.

An ordered triple of distinct points in a metric space $$\mathcal{X}$$, say $$A, B, C$$, is called a triangle $$ABC$$ (briefly $$\triangle ABC$$). Note that the triangles $$ABC$$ and $$ACB$$ are considered as different.

Two triangles $$A'B'C'$$ and $$ABC$$ are called congruent (written as $$\triangle A'B'C' \cong \triangle ABC$$) if there is a motion $$f: \mathcal{X} \to \mathcal{X}$$ such that $$A' = f(A)$$, $$B' = f(B)$$ and $$C' = f(C)$$.

Let $$\mathcal{X}$$ be a metric space, and $$f, g: \mathcal{X} \to \mathcal{X}$$ be two motions. Note that the inverse $$f^{-1}: \mathcal{X} \to \mathcal{X}$$, as well as the composition $$f \circ g: \mathcal{X} \to \mathcal{X}$$, are also motions. It follows that "$$\cong$$" is an equivalence relation; that is, any triangle is congruent to itself, and the following two conditions hold:

• If $$\triangle A'B'C' \cong \triangle ABC$$, then $$\triangle ABC \cong \triangle A'B'C'$$.
• If $$\triangle A''B''C'' \cong \triangle A'B'C'$$ and $$\triangle A'B'C' \cong \triangle ABC$$, then $\triangle A''B''C'' \cong \triangle ABC.$

Note that if $$\triangle A'B'C' \cong \triangle ABC$$, then $$AB = A'B', BC = B'C'$$ and $$CA = C'A'$$. For a discrete metric, as well as some other metrics, the converse also holds. The following example shows that it does not hold in the Manhattan plane:

## Example $$\PageIndex{1}$$

Consider three points $$A = (0, 1), B = (1, 0)$$, and $$C = (-1, 0)$$ on the Manhattan plane $$(\mathbb{R}^2, d_1)$$. Note that $d_1 (A, B) = d_1 (A, C) = d_1 (B, C) = 2.$

On one hand, $$\triangle ABC \cong \triangle ACB.$$ Indeed, the map $$(x, y) \mapsto (-x, y)$$ is a motion of $$(\mathbb{R}^2, d_1)$$ that sends $$A \mapsto A, B \mapsto C$$, and $$C \mapsto B$$.

On the other hand, $$\triangle ABC \not\cong \triangle BCA.$$ Indeed, arguing by contradiction, assume that $$\triangle ABC \cong \triangle BCA$$; that is, there is a motion $$f$$ of $$(\mathbb{R}^2, d_1)$$ that sends $$A \mapsto B, B \mapsto C,$$ and $$C \mapsto A$$.

We say that $$M$$ is a midpoint of $$A$$ and $$B$$ if $$d_1(A, M) = d_1(B, M) = \dfrac{1}{2} \cdot d_1(A, B).$$ Note that a point $$M$$ is a midpoint of $$A$$ and $$B$$ if and only if $$f(M)$$ is a midpoint of $$B$$ and $$C$$.

The set of midpoints of $$A$$ and $$B$$ is infinite; it contains all points $$(t, t)$$ for $$t \in [0, 1]$$ (it is the gray segment on the picture above). On the other hand, the midpoint of $$B$$ and $$C$$ is unique (it is the black point on the picture). Thus, the map $$f$$ cannot be bijective — a contradiction.

This page titled 1.10: Congruent triangles is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Anton Petrunin via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
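For readers who want to verify the counting argument concretely, here is a minimal Python sketch (an illustration under the definitions above, not part of the original text): it samples candidate midpoints on a grid under the metric $$d_1$$ and confirms that $$A$$ and $$B$$ have a whole segment of midpoints, while the midpoint of $$B$$ and $$C$$ is unique.

```python
import numpy as np

def d1(p, q):
    # Manhattan (taxicab) distance on R^2
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

A, B, C = (0, 1), (1, 0), (-1, 0)

def midpoints(P, Q, n=201):
    # Sample a grid and keep the points M with d1(P, M) = d1(Q, M) = d1(P, Q)/2
    half = d1(P, Q) / 2
    ts = np.linspace(-2, 2, n)
    return [(x, y) for x in ts for y in ts
            if np.isclose(d1((x, y), P), half) and np.isclose(d1((x, y), Q), half)]

print(len(midpoints(A, B)))  # 51 sampled points, all on the segment (t, t) for t in [0, 1]
print(len(midpoints(B, C)))  # 1 -- the midpoint (0, 0) is unique
```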
2023-02-06 19:33:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9435176849365234, "perplexity": 141.7726136587313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00531.warc.gz"}
https://programs.team/segmented-transformation-of-gray-scale-transformation.html
# Segmented transformation of gray scale transformation

## 1. general

In digital image processing, a segmented (piecewise) transformation applies different transformation functions to different gray-level ranges, so as to enhance the gray levels of interest and suppress those that are not of interest.

The advantage of a piecewise linear function is that it can stretch the gray-level details of features as needed. Some important transformations can only be described and realized by piecewise functions. The disadvantage is that the parameters are difficult to determine.

A typical piecewise linear function looks as follows:

In [1]:

```python
import numpy as np
import matplotlib.pyplot as plt

def func(value):
    if value <= 3:
        return 0
    if value <= 6:
        return 2 * (value - 3)
    else:
        return 6

plt.figure(figsize=(6, 4))
x = np.linspace(0, 8, 100)
y = np.array([])
for v in x:
    y = np.append(y, np.linspace(func(v), func(v), 1))
l = plt.plot(x, y, 'b', label='value')
plt.text(3.2, 0.2, r'$x_1,y_1$')
plt.text(6, 5.7, r'$x_2,y_2$')
plt.legend()
plt.show()
```

The general formula of a three-segment piecewise linear function through the segment points $(x_1, y_1)$ and $(x_2, y_2)$ is:

$$f(x) = \begin{cases} \frac{y_1}{x_1}\,x, & x < x_1 \\[4pt] \frac{y_2-y_1}{x_2-x_1}\,(x-x_1)+y_1, & x_1 \le x \le x_2 \\[4pt] \frac{y_{max}-y_2}{x_{max}-x_2}\,(x-x_2)+y_2, & x > x_2 \end{cases}$$

By using a segmented transformation, gray values that are concentrated in a certain interval can be spread apart, which enhances the contrast of the image.

## 2. disadvantages of gamma transform

A non-segmented transformation (such as the gamma transformation) has an interval where the slope is about 1, so the contrast of the image in that interval is almost unchanged after the gamma transform.

In [2]:

```python
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

def gamma_trans(input, gamma=2, eps=0):
    return 255. * (((input + eps) / 255.) ** gamma)

gray_img = np.asarray(Image.open('./image/washed_out_pollen_image.tif').convert('L'))

# Create a display body and divide it into four display areas
# (assumed 2x2 layout; ax1..ax4 are used below)
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
ax4 = fig.add_subplot(2, 2, 4)

# Show original
ax1.set_title("origin")
ax1.imshow(gray_img, cmap='gray', vmin=0, vmax=255)
ax1.axes.xaxis.set_visible(False)
ax1.axes.yaxis.set_visible(False)

# Display the gray distribution histogram of the original image
ax2.grid(True, linestyle=':', linewidth=1)
ax2.set_title('origin', fontsize=12)
ax2.set_xlim(0, 255)   # Set x-axis distribution range
ax2.set_ylim(0, 0.15)  # Set y-axis distribution range
ax2.hist(gray_img.flatten(), bins=50, density=True, color='r', edgecolor='k')
ax2.axes.xaxis.set_visible(False)
ax2.axes.yaxis.set_visible(False)

gamma_ = 2

# Apply the gamma transformation to the original image
output = gamma_trans(gray_img, gamma_, 0.2)

# Display the gamma transform result image
ax3.set_title("result")
ax3.imshow(output, cmap='gray', vmin=0, vmax=255)
ax3.axes.xaxis.set_visible(False)
ax3.axes.yaxis.set_visible(False)

# Display the gray distribution histogram of the transformed image
ax4.set_xlim(0, 255)   # Set x-axis distribution range
ax4.set_ylim(0, 0.15)  # Set y-axis distribution range
ax4.grid(True, linestyle=':', linewidth=1)
ax4.set_title('result', fontsize=12)
ax4.hist(output.flatten(), bins=50, density=True, color='r', edgecolor='k')
ax4.axes.xaxis.set_visible(False)
ax4.axes.yaxis.set_visible(False)

plt.show()
```

It can be seen that for an image whose gray values are concentrated in a middle band, the gamma transform can hardly improve the contrast.
## 3. segment transformation

According to the piecewise function formula:

$$f(x) = \begin{cases} \frac{y_1}{x_1}\,x, & x < x_1 \\[4pt] \frac{y_2-y_1}{x_2-x_1}\,(x-x_1)+y_1, & x_1 \le x \le x_2 \\[4pt] \frac{y_{max}-y_2}{x_{max}-x_2}\,(x-x_2)+y_2, & x > x_2 \end{cases}$$

we can write the simplest, most direct piecewise transformation function:

In [3]:

```python
# Three-segment contrast stretch transformation, where (x1, y1) and (x2, y2)
# are the segment points and x is the input matrix
def three_linear_trans(x, x1, y1, x2, y2):
    # 1. check parameters to avoid a zero denominator
    if x1 == x2 or x2 == 255:
        print("[INFO] x1=%d, x2=%d -> calling this function requires x1 != x2 and x2 != 255" % (x1, x2))
        return None
    # 2. perform the piecewise linear transformation pixel by pixel
    out = np.zeros(x.shape)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            if x[i, j] < x1:
                out[i, j] = y1 / x1 * x[i, j]
            elif x1 <= x[i, j] <= x2:
                out[i, j] = (y2 - y1) / (x2 - x1) * (x[i, j] - x1) + y1
            elif x[i, j] > x2:
                out[i, j] = (255 - y2) / (255 - x2) * (x[i, j] - x2) + y2
    return out
```

Now apply the piecewise transformation to the image from before:

In [4]:

```python
# Read image in grayscale
gray_img = np.asarray(Image.open('./image/washed_out_pollen_image.tif').convert('L'))

# Create a display body and divide it into four display areas
# (assumed 2x2 layout; ax1..ax4 are used below)
fig = plt.figure()
ax1 = fig.add_subplot(2, 2, 1)
ax2 = fig.add_subplot(2, 2, 2)
ax3 = fig.add_subplot(2, 2, 3)
ax4 = fig.add_subplot(2, 2, 4)

# Display the original image and its gray distribution histogram
ax1.set_title("origin", fontsize=8)
ax1.imshow(gray_img, cmap='gray', vmin=0, vmax=255)
ax3.grid(True, linestyle=':', linewidth=1)
ax3.set_xlim(0, 255)   # Set x-axis distribution range
ax3.set_ylim(0, 0.15)  # Set y-axis distribution range
ax3.hist(gray_img.flatten(), bins=50, density=True, color='r', edgecolor='k')

# Perform the piecewise linear transformation
x1, y1, x2, y2 = 90, 3, 140, 250
out = three_linear_trans(gray_img, x1, y1, x2, y2)

# Display the transform result image
ax2.clear()
ax2.set_title("result", fontsize=8)
ax2.imshow(out, cmap='gray', vmin=0, vmax=255)

# Display the gray distribution histogram of the transformation result
ax4.clear()
ax4.grid(True, linestyle=':', linewidth=1)
ax4.set_xlim(0, 255)   # Set x-axis distribution range
ax4.set_ylim(0, 0.15)  # Set y-axis distribution range
ax4.hist(out.flatten(), bins=50, density=True, color='r', edgecolor='k')

plt.show()
```

It can be seen that the contrast is significantly enhanced, and the gray values are no longer concentrated in a narrow middle interval.

Spreading out gray values that were bunched in one interval is similar in spirit to histogram equalization, which redistributes the gray values of an image so that their distribution becomes more uniform.

## 4. algorithm optimization

Here we make full use of NumPy's broadcasting and vectorized operations to optimize the piecewise transformation above. The calculation of each segment is realized through three mask matrices, and the final transformation result is the sum of them:

In [5]:
```python
# Three-segment contrast stretch transformation, where (x1, y1) and (x2, y2)
# are the segment points and x is the input matrix
def three_linear_trans2(x, x1, y1, x2, y2):
    # 1. check parameters to avoid a zero denominator
    if x1 == x2 or x2 == 255:
        print("[INFO] x1=%d, x2=%d -> calling this function requires x1 != x2 and x2 != 255" % (x1, x2))
        return None
    # 2. perform the piecewise linear transformation with three boolean masks
    m1 = (x < x1)
    m2 = (x1 <= x) & (x <= x2)
    m3 = (x > x2)
    out = (y1 / x1 * x) * m1 \
        + ((y2 - y1) / (x2 - x1) * (x - x1) + y1) * m2 \
        + ((255 - y2) / (255 - x2) * (x - x2) + y2) * m3
    return out
```

Execution time comparison:

In [6]:

```python
import time

gray_img = np.asarray(Image.open('./image/washed_out_pollen_image.tif').convert('L'))
x1, y1, x2, y2 = 90, 3, 140, 250

ticks1 = time.time()
out = three_linear_trans(gray_img, x1, y1, x2, y2)
print("Elapsed time (s):", time.time() - ticks1)

ticks2 = time.time()
out = three_linear_trans2(gray_img, x1, y1, x2, y2)
print("Elapsed time (s):", time.time() - ticks2)
```

Elapsed time (s): 7.202014446258545
Elapsed time (s): 0.017905712127685547

You can see a significant improvement in execution efficiency.

## 5. references

Tags: Python Posted by radman08 on Thu, 30 Jun 2022 17:11:45 +0530
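One further sketch on the piecewise mapping above (an addition under the same breakpoints, not from the original post): NumPy's built-in np.interp computes exactly this kind of piecewise linear mapping, so the whole three-segment transform collapses to a single vectorized call. The name three_linear_trans3 is hypothetical.

```python
import numpy as np

# Hypothetical equivalent of three_linear_trans2 via np.interp; the breakpoint
# list must be increasing, so this assumes 0 < x1 < x2 < 255.
def three_linear_trans3(x, x1, y1, x2, y2):
    return np.interp(x, [0, x1, x2, 255], [0, y1, y2, 255])
```

For in-range inputs this agrees with three_linear_trans2 up to floating-point rounding, and it is typically at least as fast.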
2022-09-30 22:29:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5226629376411438, "perplexity": 8821.516294775654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00755.warc.gz"}
http://mathhelpforum.com/trigonometry/207585-check-my-answer-finding-area-isoscels-triangle-print.html
# Check my answer on finding the area of an isosceles triangle?

• November 14th 2012, 10:23 AM
SoConfused123

Check my answer on finding the area of an isosceles triangle?

Has a base of 6 cm, and has a semi-perimeter of 11 cm. What is the area?

Sq. Rt. 11(11-6)(11-6)(11-6)
Sq. Rt. 11(5)(5)(5)
sq. rt. 11 * sq. rt. 125 = 37.1 cm^2

• November 14th 2012, 10:27 AM
skeeter

Re: Check my answer on finding the area of an isosceles triangle?

the triangle is not equilateral ...

2x + 6 = 22
x = 8

the sides are 8, 8, and 6

• November 14th 2012, 10:32 AM
SoConfused123

Re: Check my answer on finding the area of an isosceles triangle?

So the answer would be 22.2 cm^2?

• November 14th 2012, 01:17 PM

$\text{area} = \sqrt{s\left(s-a\right)\left(s-b\right)\left(s-c\right)}$

$= \sqrt{11(3)(3)(5)}$

$= 3\sqrt{55}$
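As a quick numerical cross-check of skeeter's correction and the Heron computation above (an added illustration, not part of the original thread):

```python
import math

# Heron's formula for the triangle with sides 8, 8, 6 (semi-perimeter s = 11)
a, b, c = 8, 8, 6
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))
print(area)               # 22.2485954612... cm^2
print(3 * math.sqrt(55))  # the same value, matching the closed form above
```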
2015-03-30 17:19:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.589741587638855, "perplexity": 2721.174152172009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299496.98/warc/CC-MAIN-20150323172139-00179-ip-10-168-14-71.ec2.internal.warc.gz"}
https://blog.cryptographyengineering.com/2018/04/21/wonk-post-chosen-ciphertext-security-in-public-key-encryption-part-1/
# Wonk post: chosen ciphertext security in public-key encryption (Part 1) In general I try to limit this blog to posts that focus on generally-applicable techniques in cryptography. That is, I don’t focus on the deeply wonky. But this post is going to be an exception. Today, I’m going to talk about a topic that most “typical” implementers don’t — and shouldn’t — think about. Specifically: I’m going to talk about various techniques for making public key encryption schemes chosen ciphertext secure. I see this as the kind of post that would have saved me ages of reading when I was a grad student, so I figured it wouldn’t hurt to write it all down (even though this absolutely shouldn’t serve as a replacement for just reading the original papers!) ### Background: CCA(1/2) security Early (classical) ciphers used a relatively weak model of security, if they used one at all. That is, the typical security model for an encryption scheme was something like the following: 1. I generate an encryption key (or keypair for public-key encryption) 2. I give you the encryption of some message of my choice 3. You “win” if you can decrypt it This is obviously not a great model in the real world, for several reasons. First off, in some cases the attacker knows a lot about the message to be decrypted. For example: it may come from a small space (like a set of playing cards). For this reason we require a stronger definition like “semantic security” that assumes the attacker can choose the plaintext distribution, and can also obtain the encryption of messages of his/her own choice. I’ve written more about this here. More relevant to this post, another limitation of the above game is that — in some real-world examples — the attacker has even more power. That is: in addition to obtaining the encryption of chosen plaintexts, they may be able to convince the secret keyholder to decrypt chosen ciphertexts of their choice. The latter attack is called a chosen-ciphertext (CCA) attack. At first blush this seems like a really stupid model. If you can ask the keyholder to decrypt chosen ciphertexts, then isn’t the scheme just obviously broken? Can’t you just decrypt anything you want? The answer, it turns out, is that there are many real-life examples where the attacker has decryption capability, but the scheme isn’t obviously broken. For example: 1. Sometimes an attacker can decrypt a limited set of ciphertexts (for example, because someone leaves the decryption machine unattended at lunchtime.) The question then is whether they can learn enough from this access to decrypt other ciphertexts that are generated after she loses access to the decryption machine — for example, messages that are encrypted after the operator comes back from lunch. 2. Sometimes an attacker can submit any ciphertext she wants — but will only obtain a partial decryption of the ciphertext. For example, she might learn only a single bit of information such as “did this ciphertext decrypt correctly”. The question, then, is whether she can leverage this tiny amount of data to fully decrypt some ciphertext of her choosing. The first example is generally called a “non-adaptive” chosen ciphertext attack, or a CCA1 attack (and sometimes, historically, a “lunchtime” attack). There are a few encryption schemes that totally fall apart under this attack — the most famous textbook example is Rabin’s public key encryption scheme, which allows you to recover the full secret key from just a single chosen-ciphertext decryption. 
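To make the Rabin example concrete, here is a toy sketch of that lunchtime key-recovery attack (my own illustration with tiny parameters and deliberate simplifications; it is not from the original post, and real Rabin uses large primes plus message redundancy). The decryption "oracle" deterministically returns one square root modulo $N$; with probability about 1/2 per query, the returned root is not $\pm r$, and a single gcd then factors $N$:

```python
import math
import random

# Toy Rabin parameters (illustration only): primes p, q with p, q = 3 (mod 4)
p, q = 7919, 7127
n = p * q

def decrypt_oracle(c):
    # The "lunchtime" decryption machine: returns one square root of c mod n,
    # using the CRT shortcut available when p, q = 3 (mod 4).
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    return (mp * q * pow(q, -1, p) + mq * p * pow(p, -1, q)) % n

# Attacker: pick a random r, submit the chosen ciphertext r^2 mod n. If the
# returned root s is not +/- r, then gcd(r - s, n) is a nontrivial factor.
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) != 1:
        continue  # absurdly lucky case: r already shares a factor with n
    s = decrypt_oracle(pow(r, 2, n))
    if s not in (r, n - r):
        print("recovered factor:", math.gcd(abs(r - s), n))
        break
```

Here the chosen ciphertext is just $r^2 \bmod N$ for an $r$ the attacker already knows, which is exactly the kind of single query a lunchtime attacker can make.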
The more powerful second example is generally referred to as an “adaptive” chosen ciphertext attack, or a CCA2 attack. The term refers to the idea that the attacker can select the ciphertexts they try to decrypt based on seeing a specific ciphertext that they want to attack, and by seeing the answers to specific decryption queries. In this article we’re going to use the more powerful “adaptive” (CCA2) definition, because that subsumes the CCA1 definition. We’re also going to focus primarily on public-key encryption. With this in mind, here is the intuitive definition of the experiment we want a CCA2 public-key encryption scheme to be able to survive:

1. I generate an encryption keypair for a public-key scheme and give you the public key.
2. You can send me (sequentially and adaptively) many ciphertexts, which I will decrypt with my secret key. I’ll give you the result of each decryption.
3. Eventually you’ll send me a pair of messages (of equal length) $M_0, M_1$ and I’ll pick a bit $b$ at random, and return to you the encryption of $M_b$, which I will denote as $C^* \leftarrow {\sf Encrypt}(pk, M_b)$.
4. You’ll repeat step (2), sending me ciphertexts to decrypt. If you send me $C^*$ I’ll reject your attempt. But I’ll decrypt any other ciphertext you send me, even if it’s only slightly different from $C^*$.
5. The attacker outputs their guess $b'$. They “win” the game if $b'=b$.

We say that our scheme is secure if the attacker cannot win this game with probability significantly greater than they would achieve by simply guessing $b'$ at random. Since they can win this game with probability 1/2 just by guessing randomly, that means we want (Probability attacker wins the game) – 1/2 to be “very small” (typically a negligible function of the security parameter).

You should notice two things about this definition. First, it gives the attacker the full decryption of any ciphertext they send me. This is obviously much more powerful than just giving the attacker a single bit of information, as we mentioned in the example further above. But note that powerful is good. If our scheme can remain secure in this powerful experiment, then clearly it will be secure in a setting where the attacker gets strictly less information from each decryption query.

The second thing you should notice is that we impose a single extra condition in step (4), namely that the attacker cannot ask us to decrypt $C^*$. We do this only to prevent the game from being “trivial” — if we did not impose this requirement, the attacker could always just hand us back $C^*$ to decrypt, and they would always learn the value of $b$.

(Notice as well that we do not give the attacker the ability to request encryptions of chosen plaintexts. We don’t need to do that in the public-key encryption version of this game, because we’re focusing exclusively on public-key encryption here — since the attacker has the public key, she can encrypt anything she wants without my help.)

With definitions out of the way, let’s talk a bit about how we achieve CCA2 security in real schemes.

### A quick detour: symmetric encryption

This post is mainly going to focus on public-key encryption, because that’s actually the problem that’s challenging and interesting to solve. It turns out that achieving CCA2 for symmetric-key encryption is really easy. Let me briefly explain why this is, and why the same ideas don’t work for public-key encryption.

(To explain this, we’ll need to slightly tweak the CCA2 definition above to make it work in the symmetric setting.
The changes here are small: we won’t give the attacker a public key in step (1), and at steps (2) and (4) we will allow the attacker to request the encryption of chosen plaintexts as well as the decryption.)

The first observation is that many common encryption schemes — particularly, the widely-used cipher modes of operation like CBC and CTR — are semantically secure in a model where the attacker does not have the ability to decrypt chosen ciphertexts. However, these same schemes break completely in the CCA2 model.

The simple reason for this is ciphertext malleability. Take CTR mode, which is particularly easy to mess with. Once we’ve obtained the challenge ciphertext $C^*$ (recall that $C^*$ is the encryption of $M_b$), it’s trivially easy to “maul” the ciphertext — simply by flipping, say, a bit of the message (i.e., XORing it with “1”). This gives us a new ciphertext $C' = C^* \oplus 1$. We are now allowed (by the rules of the game) to submit this ciphertext for decryption, and we obtain $M_b \oplus 1$, which we can use to figure out $b$.

(A related, but “real world” variant of this attack is Vaudenay’s Padding Oracle Attack, which breaks actual implementations of symmetric-key cryptosystems. Here’s one we did against Apple iMessage. Here’s an older one on XML encryption.)

So how do we fix this problem? The straightforward observation is that we need to prevent the attacker from mauling the ciphertext $C^*$. The generic approach to doing this is to modify the encryption scheme so that it includes a Message Authentication Code (MAC) tag computed over every CTR-mode ciphertext. The key for this MAC scheme is generated by the encrypting party (me) and kept with the encryption key. When asked to decrypt a ciphertext, the decryptor first checks whether the MAC is valid. If it’s not, the decryption routine will output “ERROR”. Assuming an appropriate MAC scheme, the attacker can’t modify the ciphertext (including the MAC) without causing the decryption to fail and produce a useless result.

So in short: in the symmetric encryption setting, the answer to CCA2 security is simply for the encrypting parties to authenticate each ciphertext using a secret authentication (MAC) key they generate. Since we’re talking about symmetric encryption, that extra (secret) authentication key can be generated and stored with the decryption key. (Some more efficient schemes make this all work with a single key, but that’s just an engineering convenience.) Everything works out fine.

So now we get to the big question.

### CCA security is easy in symmetric encryption. Why can’t we just do the same thing for public-key encryption?

As we saw above, it turns out that strong authenticated encryption is sufficient to get CCA(2) security in the world of symmetric encryption. Sadly, when you try this same idea generically in public-key encryption, it doesn’t always work. There’s a short reason for this, and a long one. The short version is: it matters who is doing the encryption.

Let’s focus on the critical difference. In the symmetric CCA2 game above, there is exactly one person who is able to (legitimately) encrypt ciphertexts. That person is me. To put it more clearly: the person who performs the legitimate encryption operations (and has the secret key) is also the same person who is performing decryption. Even if the encryptor and decryptor aren’t literally the same person, the encryptor still has to be honest.
(To see why this has to be the case, remember that the encryptor has the shared secret key! If that party was a bad guy, then the whole scheme would be broken, since they could just output the secret key to the bad guys.) And once you’ve made the stipulation that the encryptor is honest, then you’re almost all the way there. It suffices simply to add some kind of authentication (a MAC or a signature) to any ciphertext she encrypts. At that point the decryptor only needs to determine whether any given ciphertext actually came from the (honest) encryptor, and avoid decrypting the bad ones. You’re done.

Public key encryption (PKE) fundamentally breaks all these assumptions. In a public-key encryption scheme, the main idea is that anyone can encrypt a message to you, once they get a copy of your public key. The encryption algorithm may sometimes be run by good, honest people. But it can also be run by malicious people. It can be run by parties who are adversarial. The decryptor has to be able to deal with all of those cases. One can’t simply assume that the “real” encryptor is honest.

Let me give a concrete example of how this can hurt you. A couple of years ago I wrote a post about flaws in Apple iMessage, which (at the time) used a simple authenticated (public key) encryption scheme. The basic iMessage encryption algorithm used public key encryption (actually a combination of RSA with some AES thrown in for efficiency) so that anyone could encrypt a message to my key. For authenticity, it required that every message be signed with an ECDSA signature by the sender.

When I received a message, I would look up the sender’s public key and first make sure the signature was valid. This would prevent bad guys from tampering with the message in flight — e.g., executing nasty stuff like adaptive chosen ciphertext attacks. If you squint a little, this is almost exactly a direct translation of the symmetric crypto approach we discussed above. We’re simply swapping the MAC for a digital signature.

The problems with this scheme start to become apparent when we consider that there might be multiple people sending me ciphertexts. Let’s say the adversary is on the communication path and intercepts a signed message from you to me. They want to change (i.e., maul) the message so that they can execute some kind of clever attack. Well, it turns out this is simple. They simply rip off the honest signature and replace it with one they make themselves.

The new message is identical, but now appears to come from a different person (the attacker). Since the attacker has their own signing key, they can maul the encrypted message as much as they want, and sign new versions of that message. If you plug this attack into (a version of) the public-key CCA2 game up top, you see they’ll win quite easily. All they have to do is modify the challenge ciphertext $C^*$ at step (4) to be signed with their own signing key, then they can change it by munging with the CTR mode encryption, and request the decryption of that ciphertext.

Of course if I only accept messages signed by some original (guaranteed-to-be-honest) sender, this scheme might work out fine. But that’s not the point of public key encryption. In a real public-key scheme — like the one Apple iMessage was trying to build — I should be able to (safely) decrypt messages from anyone, and in that setting this naive scheme breaks down pretty badly.

Whew.
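As a concrete illustration of that signature-stripping step, here is a toy Python sketch using Ed25519 from the `cryptography` package (all names and the stand-in "ciphertext" are my own assumptions; this is not Apple's actual protocol):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

alice_key = Ed25519PrivateKey.generate()     # the honest sender
mallory_key = Ed25519PrivateKey.generate()   # the attacker

ciphertext = os.urandom(32)                  # stand-in for an encrypted message to Bob
alice_sig = alice_key.sign(ciphertext)

# Mallory intercepts (ciphertext, alice_sig), mauls one bit of the ciphertext,
# discards Alice's signature, and signs the mauled message herself.
mauled = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
mallory_sig = mallory_key.sign(mauled)

# Bob verifies against the *claimed* sender's public key -- and it checks out:
# verify() raises InvalidSignature on failure, and here it does not.
mallory_key.public_key().verify(mallory_sig, mauled)
print("Mallory's re-signed, mauled ciphertext verifies as coming from Mallory")
```

Bob's verification succeeds because he checks the mauled message against the claimed sender's key, which is now Mallory's; nothing binds the signature to the party who originally produced the ciphertext.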
Ok, this post has gotten a bit long, and so far I haven’t actually gotten to the various “tricks” for adding chosen ciphertext security to real public key encryption schemes. That will have to wait until the next post, to come shortly. ## 3 thoughts on “Wonk post: chosen ciphertext security in public-key encryption (Part 1)” 1. Ordinon says: Oh I see you just had to use “she” when describing an attacker. I thought “she” was only used if describing a positive action? (I.e. you’d never say “if a pedophile wants to find a child, *she* usually looks in the playground”. Or do you think it’s ok to just go around decrypting things that others encrypted, even if it means breaking into people’s computers? You truly are all low life rotten dirtbags. Get a clue. 2. Alice says: > Eventually you’ll send me a pair of messages (of equal length) M_0, M_1 and I’ll pick a bit b at random, and return to you the encryption of M_b. The attacker outputs their guess b’. They “win” the game if b’=b. But if this is public-key encryption, can’t the attacker just encrypt both messages and see which one matches? 1. Good observation. If the encryption algorithm is not randomized, the attack you describe absolutely would work. The attacker could encrypt both messages, then she would just compare those encryptions to the ciphertext she receives after submitting M0, M1. In fact, what this means is that any scheme that meets the IND-CCA(1/2) definition (or even the weaker IND-CPA definition) *has* to be randomized. What this means is that the encryption algorithm takes in some random bits and uses these to generate the ciphertext. This blocks the attack you describe, because now there are *many* possible ciphertexts for every given message, and you can’t run the attack described above.
2018-12-14 15:53:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588968992233276, "perplexity": 1102.5111744667586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00088.warc.gz"}
https://en.m.wikipedia.org/wiki/Steffensen%27s_method
# Steffensen's method

In numerical analysis, Steffensen's method is a root-finding technique named after Johan Frederik Steffensen which is similar to Newton's method. Steffensen's method also achieves quadratic convergence, but without using derivatives as Newton's method does.

## Simple description

The simplest form of the formula for Steffensen's method occurs when it is used to find the zeros, or roots, of a function $f$; that is, to find the value $x_\star$ that satisfies $f(x_\star) = 0$. Near the solution $x_\star$, the function $f$ is supposed to approximately satisfy $-1 < f'(x_\star) < 0$; this condition makes $f$ adequate as a correction-function for $x$ for finding its own solution, although it is not required to work efficiently. For some functions, Steffensen's method can work even if this condition is not met, but in such a case, the starting value $x_0$ must be very close to the actual solution $x_\star$, and convergence to the solution may be slow.

Given an adequate starting value $x_0$, a sequence of values $x_0, x_1, x_2, \dots, x_n, \dots$ can be generated using the formula below. When it works, each value in the sequence is much closer to the solution $x_\star$ than the prior value. The value $x_n$ from the current step generates the value $x_{n+1}$ for the next step, via this formula:[1]

$$x_{n+1} = x_n - \frac{f(x_n)}{g(x_n)}$$

for $n = 0, 1, 2, 3, \dots$; where the slope function $g(x)$ is a composite of the original function $f$ given by the following formula:

$$g(x) = \frac{f\bigl(x + f(x)\bigr)}{f(x)} - 1$$

or perhaps more clearly,

$$g(x) = \frac{f(x+h) - f(x)}{h}$$

where $h = f(x)$ is a step-size between the last iteration point, $x$, and an auxiliary point located at $x + h$.

The function $g$ is the average value for the slope of the function $f$ between the last sequence point $(x, y) = \bigl(x_n, f(x_n)\bigr)$ and the auxiliary point $(x, y) = \bigl(x_n + h, f(x_n + h)\bigr)$, with the step $h = f(x_n)$. It is also called the first-order divided difference of $f$ between those two points.

It is only for the purpose of finding $h$ for this auxiliary point that the value of the function $f$ must be an adequate correction to get closer to its own solution, and for that reason fulfill the requirement that $-1 < f'(x_\star) < 0$. For all other parts of the calculation, Steffensen's method only requires the function $f$ to be continuous and to actually have a nearby solution.[1] Several modest modifications of the step $h$ in the slope calculation $g$ exist to accommodate functions $f$ that do not quite meet the requirement.

The main advantage of Steffensen's method is that it has quadratic convergence[1] like Newton's method – that is, both methods find roots to an equation $f$ just as 'quickly'. In this case quickly means that for both methods, the number of correct digits in the answer doubles with each step. But the formula for Newton's method requires evaluation of the function's derivative $f'$ as well as the function $f$, while Steffensen's method only requires $f$ itself. This is important when the derivative is not easily or efficiently available.

The price for the quick convergence is the double function evaluation: both $f(x_n)$ and $f(x_n + h)$ must be calculated, which might be time-consuming if $f$ is a complicated function. For comparison, the secant method needs only one function evaluation per step. The secant method increases the number of correct digits by "only" a factor of roughly 1.6 per step, but one can do twice as many steps of the secant method within a given time. Since the secant method can carry out twice as many steps in the same time as Steffensen's method,[a] when both algorithms succeed, the secant method actually converges faster than Steffensen's method in practical use: the secant method achieves a factor of about $(1.6)^2 \approx 2.6$ times as many digits for every two steps (two function evaluations), compared to Steffensen's factor of 2 for every one step (two function evaluations).

Similar to most other iterative root-finding algorithms, the crucial weakness in Steffensen's method is the choice of the starting value $x_0$. If the value of $x_0$ is not 'close enough' to the actual solution $x_\star$, the method may fail and the sequence of values $x_0, x_1, x_2, x_3, \dots$ may either flip-flop between two extremes, or diverge to infinity, or both.

## Derivation using Aitken's delta-squared process

The version of Steffensen's method implemented in the MATLAB code shown below can be found using Aitken's delta-squared process for accelerating convergence of a sequence. To compare the following formulae to the formulae in the section above, notice that $x_n = p - p_n$. This method assumes starting with a linearly convergent sequence and increases the rate of convergence of that sequence.
If the signs of $p_n, p_{n+1}, p_{n+2}$ agree and $p_n$ is 'sufficiently close' to the desired limit of the sequence $p$, we can assume the following:

$$\frac{p_{n+1} - p}{p_n - p} \approx \frac{p_{n+2} - p}{p_{n+1} - p}$$

then

$$(p_{n+1} - p)^2 \approx (p_{n+2} - p)(p_n - p)$$

so

$$p_{n+1}^2 - 2\,p_{n+1}\,p + p^2 \approx p_{n+2}\,p_n - (p_n + p_{n+2})\,p + p^2$$

and hence

$$(p_{n+2} - 2\,p_{n+1} + p_n)\,p \approx p_{n+2}\,p_n - p_{n+1}^2 .$$

Solving for the desired limit of the sequence $p$ gives:

$$p \approx \frac{p_{n+2}\,p_n - p_{n+1}^2}{p_{n+2} - 2\,p_{n+1} + p_n} = \frac{p_n^2 + p_n p_{n+2} + 2 p_n p_{n+1} - 2 p_n p_{n+1} - p_n^2 - p_{n+1}^2}{p_{n+2} - 2\,p_{n+1} + p_n}$$

$$= \frac{(p_n^2 + p_n p_{n+2} - 2 p_n p_{n+1}) - (p_n^2 - 2 p_n p_{n+1} + p_{n+1}^2)}{p_{n+2} - 2\,p_{n+1} + p_n}$$

$$= p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2\,p_{n+1} + p_n} ,$$

which results in the more rapidly convergent sequence:

$$p \approx p_{n+3} = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2\,p_{n+1} + p_n} .$$

## Code example

### In Matlab

Here is the source for an implementation of Steffensen's Method in MATLAB.

```matlab
function Steffensen(f,p0,tol)
% This function takes as inputs: a fixed point iteration function, f,
% and initial guess to the fixed point, p0, and a tolerance, tol.
% The fixed point iteration function is assumed to be input as an
% inline function.
% This function will calculate and return the fixed point, p,
% that makes the expression f(x) = p true to within the desired
% tolerance, tol.

format compact  % This shortens the output.
format long     % This prints more decimal places.

for i=1:1000  % get ready to do a large, but finite, number of iterations.
              % This is so that if the method fails to converge, we won't
              % be stuck in an infinite loop.
    p1=f(p0);  % calculate the next two guesses for the fixed point.
    p2=f(p1);
    p=p0-(p1-p0)^2/(p2-2*p1+p0)  % use Aitken's delta squared method to
                                 % find a better approximation to p0.
    if abs(p-p0)<tol  % test to see if we are within tolerance.
        break         % if we are, stop the iterations, we have our answer.
    end
    p0=p;  % update p0 for the next iteration.
end
if abs(p-p0)>tol  % If we fail to meet the tolerance, we output a
                  % message of failure.
    'failed to converge in 1000 iterations.'
end
```

### In Python

Here is the source for an implementation of Steffensen's Method in Python.

```python
from typing import Callable, Iterator

Func = Callable[[float], float]

def g(f: Func, x: float, fx: float) -> Func:
    """First-order divided difference function.

    Arguments:
        f: Function input to g
        x: Point at which to evaluate g
        fx: Function f evaluated at x
    """
    return lambda x: f(x + fx) / fx - 1

def steff(f: Func, x: float) -> Iterator[float]:
    """Steffensen algorithm for finding roots.

    This generator yields the x_{n+1} value first then, when the generator
    iterates, it yields x_{n+2} from the next iteration.

    Arguments:
        f: Function whose root we are searching for
        x: Starting value upon first call; at each iteration n, x is x_n
    """
    while True:
        fx = f(x)
        gx = g(f, x, fx)(x)
        if gx == 0:
            break
        else:
            x = x - fx / gx  # Update to x_{n+1}
            yield x          # Yield value
```

## Generalization

Steffensen's method can also be used to find an input $x = x_\star$ for a different kind of function $F$ that produces output the same as its input: $x_\star = F(x_\star)$ for the special value $x_\star$. Solutions like $x_\star$ are called fixed points. Many of these functions can be used to find their own solutions by repeatedly recycling the result back as input, but the rate of convergence can be slow, or the function can fail to converge at all, depending on the individual function. Steffensen's method accelerates this convergence, to make it quadratic.

For orientation, the root function $f$ and the fixed-point function $F$ are simply related by $F(x) = x + \varepsilon\,f(x)$, where $\varepsilon$ is some scalar constant small enough in magnitude to make $F$ stable under iteration, but large enough for the non-linearity of the function $f$ to be appreciable. All issues of a more general Banach space vs. basic real numbers are momentarily ignored for the sake of the comparison.

This method for finding fixed points of a real-valued function has been generalised for functions $F: X \to X$ that map a Banach space $X$ onto itself, or even more generally functions $F: X \to Y$ that map from one Banach space $X$ into another Banach space $Y$. The generalized method assumes that a family of bounded linear operators $\{\, L(u,v) : u, v \in X \,\}$ associated with $u$ and $v$ can be devised that (locally) satisfies the following condition:[2]

$$F(u) - F(v) = L(u,v)\,\bigl[\,u - v\,\bigr] . \qquad\qquad \text{eqn. } (𝄋)$$

If division is possible in the Banach space, the linear operator $L$ can be obtained from

$$L(u,v) = \bigl[\,F(u) - F(v)\,\bigr]\bigl[\,u - v\,\bigr]^{-1} ,$$

which may provide some insight. (The quotient form is shown for orientation; it is not required per se.) Expressed in this way, the linear operator $L$ can be more easily seen to be an elaborate version of the divided difference $g$ discussed in the first section, above. Note that the division is not necessary; the only requirement is that the operator $L$ satisfy the equation marked with the segno, (𝄋).

For the basic real number function $f$, given in the first section, the function simply takes in and puts out real numbers. There, the function $g$ is a divided difference. In the generalized form here, the operator $L$ is the analogue of a divided difference for use in the Banach space. The operator $L$ is roughly equivalent to a matrix whose entries are all functions of vector arguments $u$ and $v$.

Steffensen's method is then very similar to Newton's method, except that it uses the divided difference $L\bigl(F(x), x\bigr)$ instead of the derivative $F'(x)$. Note that for fixed-point functions $F$ and for linear operators $L$ meeting the marked (𝄋) condition, $F'(x) \approx L\bigl(F(x), x\bigr) \approx I$, where $I$ is the identity operator.

The generalized iteration formula is given by

$$x_{n+1} = x_n + \Bigl[\,I - L\bigl(F(x_n), x_n\bigr)\,\Bigr]^{-1}\Bigl[\,F(x_n) - x_n\,\Bigr] ,$$

for $n = 1, 2, 3, \dots$.

If the operator $L$ satisfies

$$\bigl\|\,L(u,v) - L(x,y)\,\bigr\| \le k\,\bigl(\,\|u - x\| + \|v - y\|\,\bigr)$$

for some constant $k$, then the method converges quadratically to a fixed point of $F$ if the initial approximation $x_0$ is 'sufficiently close' to the desired solution $x_\star$, which satisfies $x_\star = F(x_\star)$.

## Notes

1. ^ Because $f(x_n + h)$ requires the prior calculation of $h = f(x_n)$, the two evaluations must be done sequentially – the algorithm per se cannot be made faster by running the function evaluations in parallel. This is yet another disadvantage of Steffensen's method.

## References

1. ^ a b c Dahlquist, Germund; Björck, Åke (1974). Numerical Methods. Translated by Anderson, Ned. Englewood Cliffs, NJ: Prentice Hall. pp. 230–231.
2. ^ Johnson, L.W.; Scholz, D.R. (June 1968). "On Steffensen's method". SIAM Journal on Numerical Analysis. 5 (2): 296–302. doi:10.1137/0705026. JSTOR 2949443.
2022-06-28 12:36:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 107, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.868828535079956, "perplexity": 1554.3418387646116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00065.warc.gz"}
http://mathhelpforum.com/calculus/66903-find-x-coordinate.html
1. ## find x-coordinate

Hi: I am asked to find the x-coordinate of B from y = 12x^(1/2) - 2x^(3/2). I am sure this requires finding the turning points by differentiation, dy/dx of the equation, which gives 6x^(-1/2) - 3x^(1/2). But how do I factorize this to find the x-coordinate? Or is my thinking wrong? Thanks

(^ means "raised to the power of")

2. Originally Posted by lemonz
Hi: I am asked to find the x-coordinate of B from y = 12x^(1/2) - 2x^(3/2). I am sure this requires finding the turning points by differentiation, dy/dx of the equation, which gives 6x^(-1/2) - 3x^(1/2). But how do I factorize this to find the x-coordinate? Or is my thinking wrong? Thanks (^ means "raised to the power of")

What's B meant to be? Turning point?

$\frac{6}{\sqrt{x}} - 3 \sqrt{x} = 0 \Rightarrow \frac{2}{\sqrt{x}} = \sqrt{x}$

Multiply both sides by $\sqrt{x}$: $\Rightarrow 2 = x$.

3. ## re: indices

Hi: Thanks: yes, B is a max. turning point. I am still having trouble getting x = 2 from the given equation - out of standard index form. Your answer was correct. x does = 2. Could you show me the interim steps, please? regards

4. Originally Posted by lemonz
Hi: Thanks: yes, B is a max. turning point. I am still having trouble getting x = 2 from the given equation - out of standard index form. Your answer was correct. x does = 2. Could you show me the interim steps, please? regards

Mr. Fantastic did show the intermediate steps...

$y=12x^{1/2} - 2x^{3/2}$

$\frac{dy}{dx} = 6x^{-1/2} - 3x^{1/2} = 0$

And here is his work, following immediately from where you left off:

$\frac{6}{\sqrt{x}} - 3 \sqrt{x} = 0$

$\frac{2}{\sqrt{x}} = \sqrt{x}$

$2=x$

Which part did you miss?

5. Hi: I don't understand how Mr. Fantastic got from 6 to 2 with minus of 3. Thanks

6. $\frac{6}{\sqrt{x}} - 3 \sqrt{x} = 0$

$3\left(\frac{2}{\sqrt{x}} - \sqrt{x}\right) = 0$

$\frac{2}{\sqrt{x}} - \sqrt{x} = 0$

$\frac{2}{\sqrt{x}} = \sqrt{x}$

$2=x$

7. Originally Posted by skeeter
$\frac{6}{\sqrt{x}} - 3 \sqrt{x} = 0$
$3\left(\frac{2}{\sqrt{x}} - \sqrt{x}\right) = 0$
$\frac{2}{\sqrt{x}} - \sqrt{x} = 0$
$\frac{2}{\sqrt{x}} = \sqrt{x}$
$2=x$

My mistake for assuming a calculus student will know how to divide out a common factor (of 3 in this case).

8. Sad to say, but my experience has been such that I will never assume anything about a calculus student's algebra (or geometry) skills.

9. Honestly, after moving on to more advanced math, I tend to lose some of the earlier skills that I had. For example, I can no longer multiply two-digit numbers in my head. I can't even count properly anymore. All I do these days is count from 1 to n, or worse yet, enumerate from 1 to aleph-0 LOL. Math is never trivial. Hindsight is always 20-20, but learning math for the first time can be quite difficult. I can't tell you how many times I asked my friends a question and got smacked with "Dude, that's trivial! You can solve by inspection - you don't see it? Nooo?"

10. ## Calculus students

Sorry. I'm not a calculus student. I'm just taking my school exams. I didn't realize you had to be a calculus student in order to post calculus-based questions in this forum. My mistake.

11. Originally Posted by lemonz
Sorry. I'm not a calculus student. I'm just taking my school exams. I didn't realize you had to be a calculus student in order to post calculus-based questions in this forum. My mistake.

If you're learning about differentiation and its applications then you're studying calculus, and so you're a calculus student. There are certain mathematical skills that a calculus student is assumed to have.
One of those skills is algebra - for example, dividing out a common factor in an equation. That is what was being discussed, not whether or not you were allowed to post a calculus question. Since the calculus forum is obviously a forum for asking questions about calculus, the question you asked was quite rightly posted in this forum. If you are preparing for exams it might be wise to revise and consolidate the background algebraic skills the examiner will assume that you have. In the meanwhile your question has been answered and it's probably best that the thread is closed.
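As a closing check of the thread's result (an added illustration, not part of the original discussion), SymPy reproduces the working above:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = 12*sp.sqrt(x) - 2*x**sp.Rational(3, 2)

dy = sp.diff(y, x)                 # 6/sqrt(x) - 3*sqrt(x)
print(sp.solve(sp.Eq(dy, 0), x))   # [2] -- the x-coordinate of B
print(sp.simplify(y.subs(x, 2)))   # 8*sqrt(2), the value of y at the maximum
```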
2017-04-30 01:35:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6976784467697144, "perplexity": 853.0550806038561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00334-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/generalized-version-of-work-energy-theorem.889008/
# B Generalized version of work-energy theorem

1. Oct 13, 2016 ### donaldparida

I know that, for rigid bodies only, the work-energy theorem states that the net work done on the body equals the change in kinetic energy of the body, since a rigid body has no internal degrees of freedom and hence no other forms of energy such as potential energy. Is there a most generalized form of the work-energy theorem that is valid for rigid as well as non-rigid bodies, and for conservative as well as non-conservative forces?

2. Oct 13, 2016 ### Staff: Mentor

This might not be what you're looking for, but it applies to rigid or non-rigid bodies, since it deals with center of mass quantities:
$$F_{net}\Delta x_{cm}=\Delta (\frac{1}{2}m v_{cm}^2)$$

3. Oct 13, 2016 ### Shreyas Samudra

No. We can apply the work-energy theorem to a rigid body, or to a point particle substituting for the body (its center of mass). In the former case (applying the work-energy theorem to the rigid body) we may have terms like rotational kinetic energy, etc. But in the latter case (applying the work-energy theorem to the centre of mass of the rigid body) we may have terms for work done by forces which did zero work on the rigid body (due to no movement of the point of contact). Willy-nilly we get the same result. It all depends on what you choose your system to be. What might be potential energy in one case can be work done by an external force in another case.

Last edited: Oct 13, 2016

4. Oct 13, 2016 ### Staff: Mentor

Yes! (The theorem is a consequence of Newton's 2nd law.) One can certainly apply conservation of energy, but I thought the question was asking for something different. Exactly! Those terms have the appearance of work, but do not reflect actual work done (in the first law of thermo sense). Nonetheless, the theorem is valid.

5. Oct 14, 2016 ### Shreyas Samudra

Hey. What happened??

6. Oct 14, 2016 ### donaldparida

@Doc Al, I think that your equation will work out for rigid and non-rigid objects, but why is there no account of the change in other forms of energy (potential energy, heat energy) in the equation? Another question: does Fnet include the non-conservative forces as well, or only the conservative forces (is Fnet the resultant of the net conservative force and the net non-conservative force)?

7. Oct 14, 2016 ### Staff: Mentor

Don't confuse this equation with the much more general conservation of energy. Fnet includes all forces acting on the body, whether conservative or not.

8. Oct 15, 2016 ### donaldparida

Could you please elaborate.

9. Oct 15, 2016 ### Shreyas Samudra

Video 1 (work energy theorem - 18:00 to the end)
Video 2 (work energy theorem - full video)
He's a great prof. (simply awesome) BUT YOU HAVE TO UNDERSTAND INDIAN ACCENT !!

10. Oct 15, 2016 ### Staff: Mentor

Sure. The equation I gave is derived from Newton's 2nd law, using the properties of the center of mass. The left hand side looks like a work term, but is not. (It's sometimes called center of mass work or pseudowork.) The work terms used in conservation of energy equations require the forces to be combined with the displacement of their point of application, not the displacement of the center of mass. And conservation equations can have all sorts of internal energy quantities.

11. Oct 15, 2016 ### donaldparida

So these equalities are valid for point particles, rigid bodies and non-rigid bodies, right?

ΔK.E. = W_net = F_net · d_cm = (F_net conservative + F_net non-conservative + F_net external) · d_cm = W_net conservative + W_net non-conservative + W_net external,

where d_cm is the displacement of the centre of mass.
dcentre of mass = Wnet conservative + Wnet non-conservative + Wnet external. And the potential energy is included in Wnet conservative. Correct? But is there any proof that Wnet conservative = −ΔP.E.? Also, I could not understand one thing: do conservative forces change the K.E. and non-conservative forces change the P.E., or the opposite, or can both types of forces change both types of energy? And what type of energy can external forces change?
Last edited: Oct 15, 2016

12. Oct 16, 2016 ### Staff: Mentor
The center of mass equation that I gave is valid for all of these. Careful about setting the center of mass "work" term equal to the total change in energy. Don't confuse that equation with the more general conservation of energy. All forces contribute to the change in the KE of the center of mass. Here's what's tricky, and where the center of mass equation is useful. Imagine jumping into the air. Your KE obviously has changed, despite no external work being done on you. The center of mass equation still holds.

13. Oct 16, 2016 ### donaldparida
@Doc Al, I know that the work done by conservative forces is independent of the path taken, i.e., the work done by conservative forces in moving an object from an initial point to a particular final point is the same for different paths taken. Thus the potential energy of the object at that final point is the same for different paths taken to reach it from the initial point, and so it is wise and useful to use the concept of potential energy in the case of work done by conservative forces. In the case of non-conservative forces some difficulty arises, because non-conservative forces are path-dependent: the work done by non-conservative forces in moving an object from an initial point to a particular final point is different for different paths taken. Thus the "potential energy" of the object at that final point would be different for different paths taken, and so it is not wise or useful to use the concept of potential energy for the work done by non-conservative forces. Also, mechanical energy is conserved in the case of conservative forces, while in the case of non-conservative forces mechanical energy is not conserved. So far correct, right? Now I recently understood the reason why Wnet conservative = −ΔP.E. (which had been troubling me for long). The work done by conservative forces has been defined like this so that the law of conservation of mechanical energy is complied with when no non-conservative forces are acting on the body (otherwise the law of conservation of mechanical energy is not complied with). Correct? Also, isn't it correct that both conservative and non-conservative forces change the potential energy of the body (although the concept of potential energy is not used for non-conservative forces), but we let the conservative forces account for all the changes in potential energy? So in conclusion: all the types of forces (conservative, non-conservative and external) change both the kinetic and potential energy of the body, but we let only the conservative forces account for all the change in the potential energy. Right?

14. Oct 16, 2016 ### Staff: Mentor
Potential energy is only defined for conservative forces. True. Not really. A change in potential energy can only involve conservative forces. If you think otherwise, perhaps you can give an example of what you mean.
Last edited: Oct 16, 2016

15. Oct 16, 2016 ### donaldparida
For example: when a non-conservative force moves an object from one point to another, there is indeed an energy change of the body. And what do you think of this: the work done by conservative forces has been defined like this so that the law of conservation of mechanical energy is complied with when no non-conservative forces are acting on the body (otherwise the law of conservation of mechanical energy is not complied with)?

16. Oct 16, 2016 ### Staff: Mentor
Right. But if there is any change in PE it is due to the action of conservative forces. (By definition, essentially.) Sounds OK to me.

17. Oct 17, 2016 ### donaldparida
@Doc Al, if non-conservative forces do not change potential energy, then why is their work equal to ΔM.E. (ΔK.E. + ΔP.E.)?

18. Oct 17, 2016 ### donaldparida
@Doc Al, in addition to the above question, can you also tell whether the work done by forces at their point of application is equal to the work done on the center of mass of a body (can we substitute the displacement of the center of mass for the displacement of the point of application)?
Last edited: Oct 17, 2016

19. Oct 17, 2016 ### Staff: Mentor
Perhaps we're getting lost in semantics. Can a non-conservative force change the PE of a system? Sure. But there must be conservative forces acting--else there would be no PE at all.

20. Oct 17, 2016 ### Staff: Mentor
No, definitely not. The work done by forces at the point of application is the "real" work. A force can produce center of mass "work" yet do zero work at the point of contact. (Consider the example of jumping that I gave before.) And you can also have forces that do real work, yet no center of mass "work".
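As a quick numerical check of the center-of-mass relation discussed in this thread (my own sketch, not from the posts; the mass, force, and step count are arbitrary example values), one can integrate a constant net force and compare $F_{net}\Delta x_{cm}$ with $\Delta\left(\frac{1}{2}m v_{cm}^2\right)$:

```cpp
#include <cstdio>

// Integrate a constant net force on a point mass and compare
// F_net * dx_cm with the change in (1/2) m v_cm^2.
int main() {
    const double m = 2.0;    // mass in kg (arbitrary)
    const double F = 6.0;    // constant net force in N (arbitrary)
    const double dt = 1e-5;  // integration time step in s
    double x = 0.0, v = 0.0;

    for (int i = 0; i < 100000; ++i) {  // simulate one second
        v += (F / m) * dt;              // a = F_net / m
        x += v * dt;
    }

    std::printf("F*dx_cm = %.5f J, d(KE_cm) = %.5f J\n",
                F * x, 0.5 * m * v * v);
    return 0;  // the two printed values agree to about 0.001%
}
```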
2017-12-17 12:25:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4944590926170349, "perplexity": 619.8962907637833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948595858.79/warc/CC-MAIN-20171217113308-20171217135308-00505.warc.gz"}
https://www.physicsoverflow.org/5677/there-phenomenological-motivation-ramond-neveu-swarz-models
# Was there some phenomenological motivation for Ramond and Neveu-Schwarz models?

+ 2 like - 0 dislike
49 views

This remark from Lubos puzzles me:

Ramond's string - and Neveu-Schwarz string - wasn't really an "origin of string theory". String theory had its "origin" as bosonic string theory, which has no fermions. All SUSY/fermionic strings are "new".

How do we fit this view, which seems to be the common one, with the fact that the R and NS papers date from 1971, before the concept of supersymmetry, long before superstrings themselves, only three years later than the Veneziano model, and almost simultaneous (within two years) with the idea of a string interpretation of the model? Were the R-NS models just a hep-th proposal, without any link to, or motivation from, phenomenology? It could be so. I cannot find any clear reference to baryonic Regge trajectories in the early literature, and I cannot find any duality diagram involving fermions before 1971. But on the other hand, Susskind's 1970 abstracts keep speaking of a "model of hadrons", not just "mesons", when exposing the idea of strings. So perhaps I am just being sloppy in my Spires searches.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user arivero

The Ramond proposal was inspired by the desire to incorporate fermions, but there were stringent constraints on this, and he had to develop both a string picture and supersymmetry. Lubos is not a historian, and doesn't claim to be.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Ron Maimon

The string picture of the NS and R models dates from 1974. dx.doi.org/10.1016/0550-3213(74)90127-8
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Mitchell Porter

+Mitchell Porter, I am not sure if this goes against my line of thought, as it proves that as soon as the NS-R model appeared, an effort was made to incorporate it into the framework of string interactions. The 1974 paper is actually a follow-up to Mandelstam 1973, "Interacting String Picture of Dual Resonance Models", isn't it?
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user arivero

Ramond's "string picture" is not as sophisticated as Mandelstam's. Mandelstam is really finding a complete dynamical system. Ramond's paper introduced an "internal time" coordinate which was either periodic or an interval, I forget, which allowed him to expand states in heuristic creation operators. He did not describe dynamics using this picture, but used it to get fermion Regge trajectories using what we would call supersymmetry generators, the F operators which complete the super-Virasoro algebra and cancel ghosts.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Ron Maimon

Ron, I would like to discuss your views on Chew's (and Mandelstam's) relevance for string theory, but have no way to contact you... I am not aware of any deductive path to superstrings that doesn't begin by positing an extended object. The path from the bootstrap, via DHS duality and the Veneziano amplitude, to string theory, was inductive, and yet isn't the bootstrap ultimately supposed to be deductive?
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Mitchell Porter

Now maybe there is a deductive path to the string which just looks at the S-matrix (or its analogues in (anti) de Sitter space).
It's a worthy topic of investigation, but no such derivation presently exists. Meanwhile, elsewhere I've seen you suggest that the holographic principle is string theory's counterpart of the equivalence principle. Again, an interesting idea, but I'd like to see how it's supposed to work in detail.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Mitchell Porter

@Mitchell--- The point of the comments I make about Chew/Mandelstam is to correct the historical injustice of their marginalization. There is no full deductive path to string theory, period. All derivations assume that some sort of string action produces the entire S-matrix spectrum, so that a worldsheet field theory gives the whole physics. This assumption is correct, but is not justified only by the reasoning that the string is extended.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Ron Maimon

+ 4 like - 0 dislike

The statement that bosonic strings came first and fermionic strings came later is not exactly correct as history. Fermionic strings came almost simultaneously, when Ramond discovered the two-dimensional superconformal algebra in 1971. Ramond-style string theories did not have space-time supersymmetry (or rather, they did, but the GSO projection which was required to extract the physical spectrum was not discovered until 1976, and the proof that this projection actually leaves a sensible theory did not come until the Green-Schwarz formulation was developed in the early 1980s).

The Neveu-Schwarz paper analyzes bosonic oscillations of a fermionic string, and was motivated by exploring all consistent bootstraps to find something that would work for the light mesons. The problem at the time was that a bosonic tachyon was interpreted as the experimentally known instability of the pion vacuum, so it was considered essential for a good theory. The Neveu-Schwarz sector, without the GSO projection, contains such a tachyon. Now we know that this means the theory is sick, but back then it was considered a good sign. The Ramond fermions were then interpreted as bare baryons, to be dressed with the pion condensation; this interpretation is also wrong, since baryons have a three-quark symmetry structure. The Neveu-Schwarz sector was interpreted as mesons, but it also had a tachyon (which is GSO-odd and vanishes), and nothing looks like the QCD spectrum, not with the crude tools available then.

The inconsistency of Ramond-Neveu-Schwarz strings was expressed most simply by Edward Witten in the early eighties: the closed-string sector of a fermionic string contains massless spin-3/2 particles, so it must couple to some space-time supercurrent in order to make sense. The graviton and spin-3/2 gravitinos must make a sensible supergravity theory. The development of supergravity was initiated to a large degree by string theory, since Scherk immediately began to investigate supergravities after GSO. He probably understood even then that the low-energy limit of superstrings would have to be some sort of supergravity.

So it is most fair to say that the development of superstrings and of bosonic strings went hand in hand, but the full perturbation series for the bosonic string was completed earlier, while a full perturbation theory of the fermionic string had to wait until the early eighties.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user Ron Maimon
answered Sep 4, 2011 by (7,495 points)

Yep, the GSO projection is a main ingredient, and it comes later! As for interpretations, there are also some early ones where they try to look at the Ramond sector as if they were quarks, and this view seems to travel along with the "baryon view" during the 1971-74 period.
This post imported from StackExchange Physics at 2014-03-07 14:32 (UCT), posted by SE-user arivero
2017-05-28 01:05:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6613961458206177, "perplexity": 1781.1538883278793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609404.11/warc/CC-MAIN-20170528004908-20170528024908-00362.warc.gz"}
https://meta.wikimedia.org/wiki/User:OrenBochman/WGT/Attrition
# User:OrenBochman/WGT/Attrition

## Edit Wars

We can model edit wars using a war-of-attrition model, originally formulated by John Maynard Smith;[1] a mixed evolutionarily stable strategy (ESS) for it was determined by Bishop & Cannings.[2] Strategically, the game is an auction, in which the prize goes to the player with the highest bid, and each player pays the loser's low bid (making it an all-pay sealed-bid second-price auction). In the context of an edit war it is best to consider the dynamic formulation. Two players are involved in a dispute:

• The value of the object to each player is $v_i > 0$.
• Time is modeled as a continuous variable which starts at zero and runs indefinitely.
• Each player chooses when to concede the object to the other player.
• In the case of a tie, each player receives $v_i/2$ utility.
• Time is valuable; each player uses one unit of utility per period of time.

The evolutionarily stable strategy is a mixed ESS, in which the probability of persisting for a length of time $t$ is:

$$p(t) = \frac{1}{V} e^{-t/V}$$

As is the case with any ESS, it does not guarantee a win; rather, it is the optimal balance of risk and reward. The outcome of any particular game cannot be predicted, as the random factor of the opponent's bid is too unpredictable. That no pure persistence time is an ESS can be demonstrated simply by considering a putative ESS bid of $x$, which will be beaten by a bid of $x + \delta$.

### Interpretation

Wikipedia has often been described as a space where the content of articles is decided by the most persistent editor. However, in a war of attrition all players must pay a price to participate, and the winner can end up with a Pyrrhic victory. Some open questions for this model:

1. What is the utility assigned to articles by editors?
2. What factors affect this utility? (Considering that a block, page protection, admin action, or sanction may result from a content dispute, and that it may spread to other articles.)

## References

1. Maynard Smith, J. (1974) Theory of games and the evolution of animal contests. Journal of Theoretical Biology 47: 209-221.
2. Bishop, D.T. & Cannings, C. (1978) A generalized war of attrition. Journal of Theoretical Biology 70: 85-124.
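The mixed ESS above is simply an exponential distribution with mean $V$. As an illustration (my own sketch, not part of the original page; the value of $V$ and the sample count are arbitrary), persistence times can be sampled directly:

```cpp
#include <cstdio>
#include <random>

// Sample persistence times from the mixed ESS p(t) = (1/V) exp(-t/V),
// i.e. an exponential distribution with rate 1/V and mean V.
int main() {
    const double V = 10.0;  // value of the contested object (arbitrary)
    std::mt19937 rng(42);
    std::exponential_distribution<double> persist(1.0 / V);

    double sum = 0.0;
    const int n = 100000;
    for (int i = 0; i < n; ++i) sum += persist(rng);

    std::printf("mean persistence time = %.3f (theory: %.3f)\n", sum / n, V);
    return 0;
}
```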
2021-04-10 20:10:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4647391438484192, "perplexity": 1610.5432822346556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00192.warc.gz"}
http://moinakg.wordpress.com/2012/12/13/linear-algebra-meets-data-compression/
# Linear Algebra meets Data Compression

I recently introduced another kind of data encoding into Pcompress that leverages a common technique from linear algebra called Matrix Transpose. There are multiple ways of transposing a matrix. The simplest, conceptually, is a transformation where columns of the source matrix become rows in the target and vice versa. In math terms, if A is the source matrix and A" is the target, then transposition boils down to $A[m \times n] \rightarrow A"[n \times m]$ where m and n are the dimensions of the 2D matrices.

Now where does this fit into the seemingly far removed world of data compression? Let us consider a dataset which is a sequence of 32-bit numbers. In my previous post on the Adaptive Run-Length Delta Encoding feature I mentioned how it detects sequences of numbers in data which are in simple arithmetic progression. Once an arithmetic progression is found it can be very easily encoded. Now let us assume that we do have a series of numbers, but they are not in a simple incrementing series, so Delta Encoding will not be useful there. As an example let us consider the following series of numbers: 4567, 3479, 9345. Let us also assume that they are stored in the data in big-endian row-major representation. We choose big-endian since that is the common platform-independent way of storing numeric data values. So if we look at the byte stream it will be thus (in hex): 00 00 11 D7 00 00 0D 97 00 00 24 81. Notice that there are a bunch of zeroes, but they are mixed up with non-zero bytes. We can look at this data as a 3×4 matrix: one row per 4-byte integer, 3 integers in all:

$A[3 \times 4] = \left| \begin{array}{cccc} 00 & 00 & 11 & D7 \\ 00 & 00 & 0D & 97 \\ 00 & 00 & 24 & 81 \end{array} \right|$

Now looking at the matrix do you see what I see? The zeroes are lined up in columns. Since the storage in memory is row-major, the zeroes are mixed up with the other bytes. Now let us check out the transposed matrix:

$A"[4 \times 3] = \left| \begin{array}{ccc} 00 & 00 & 00 \\ 00 & 00 & 00 \\ 11 & 0D & 24 \\ D7 & 97 & 81 \end{array} \right|$

If we store this in memory the byte sequence becomes: 00 00 00 00 00 00 11 0D 24 D7 97 81. Voila, all the zeroes have lined up. Any compression algorithm on the planet will now be able to collapse that sequence of zeroes. This example shows zeroes but, in fact, it can be any repeating byte value. This technique has been used in other places, for example in HDF5, where they unassumingly call it Shuffle. In Pcompress it is used in 2 places: once within the Delta2 encoding phase and secondly to encode the Deduplication Index table. So compression ratios are improved across the board. To recover the data it is a simple matter of effecting the inverse transpose.
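Here is a minimal sketch of the shuffle transform described above (my own illustration, not Pcompress's actual implementation): view the buffer as n elements of w bytes each, stored row-major, and emit the bytes column-major.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Shuffle: view `in` as n elements of w bytes each (row-major) and emit
// the bytes column-major, so equal-valued byte positions line up together.
std::vector<uint8_t> shuffle(const std::vector<uint8_t>& in, size_t w) {
    const size_t n = in.size() / w;  // number of elements
    std::vector<uint8_t> out(in.size());
    for (size_t col = 0; col < w; ++col)
        for (size_t row = 0; row < n; ++row)
            out[col * n + row] = in[row * w + col];
    return out;
}

int main() {
    // The three big-endian 32-bit values from the post: 4567, 3479, 9345.
    std::vector<uint8_t> data = {0x00, 0x00, 0x11, 0xD7,
                                 0x00, 0x00, 0x0D, 0x97,
                                 0x00, 0x00, 0x24, 0x81};
    for (uint8_t b : shuffle(data, 4))
        std::printf("%02X ", static_cast<unsigned>(b));
    std::printf("\n");  // prints: 00 00 00 00 00 00 11 0D 24 D7 97 81
    return 0;
}
```

The inverse transpose is the same loop with the two index expressions swapped.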
2014-07-23 21:59:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5820938944816589, "perplexity": 854.9090601333432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883468.51/warc/CC-MAIN-20140722025803-00101-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-algebra/119831-determinant-adujagate.html
# Math Help - determinant and adjugate

If A is a 3x3 matrix and det A = 2. I understand how to do stuff with determinants if I had a function before, but this I don't really understand, and I know I can't add the 2 determinants. Thanks.

P.S. I tried to write it in LaTeX, but the ^-1 wouldn't work; it would only put the negative sign in superscript.

Alright, so I know that A^-1 = adj(A) det(A)^-1, so by this:

det(A^-1 + 4(A^-1)(det A))

Since det A = 2:

det(A^-1 + 8A^-1)
det(9A^-1)

Since I = A(A^-1), and since A has an inverse we know it's invertible, and this is true:

9 det(I) / det(A) = 9/2

Can someone verify that my algebra is correct, because I'm not too sure.

2. Originally Posted by treetheta (quoted in full above)
You are right if you meant "det(A^-1 + 4 Adj A)"
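As a worked check of the final step (my addition, not from the thread, using the standard identities $\operatorname{adj}(A) = \det(A)\,A^{-1}$ and $\det(cM) = c^{n}\det M$ for an $n \times n$ matrix; note the cube, which the thread's last line drops):

```latex
\det\left(A^{-1} + 4\,\operatorname{adj}A\right)
  = \det\left(A^{-1} + 4\det(A)\,A^{-1}\right)
  = \det\left(9A^{-1}\right)
  = 9^{3}\,\det\left(A^{-1}\right)
  = \frac{9^{3}}{\det A}
  = \frac{729}{2}.
```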
2014-07-31 22:42:10
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126495122909546, "perplexity": 1450.583249660825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273676.54/warc/CC-MAIN-20140728011753-00445-ip-10-146-231-18.ec2.internal.warc.gz"}
http://pantodon.shinshu-u.ac.jp/conferences/EACAT4/Park.html
Jongil Park

## A classification of numerical Campedelli surfaces

video: coming soon.

abstract: In order to classify complex surfaces of general type with $p_g=0$ and $K^2=2$ (such surfaces are usually called numerical Campedelli surfaces), it seems natural to classify them first up to their topological types. It has been known, by M. Reid and G. Xiao, that the algebraic fundamental group $\pi_1^{\text{alg}}$ of a numerical Campedelli surface is a finite group of order $\leq 9$. Furthermore, the topological fundamental groups $\pi_1$ of numerical Campedelli surfaces are also of order $\leq 9$, insofar as they have been determined. Hence it is a natural conjecture that $|\pi_1| \leq 9$ for all numerical Campedelli surfaces. Conversely, one may ask whether every group of order $\leq 9$ occurs as the topological fundamental group or as the algebraic fundamental group of a numerical Campedelli surface. It has been proved that the dihedral groups $D_3$ of order $6$ and $D_4$ of order $8$ cannot be fundamental groups of numerical Campedelli surfaces. Furthermore, it has also been known that all other groups of order $\leq 9$, except $D_3$, $D_4$, $\mathbb{Z}/4\mathbb{Z}$, $\mathbb{Z}/6\mathbb{Z}$, occur as the topological fundamental groups of numerical Campedelli surfaces. Unlike the case of the topological fundamental group, there is also a known numerical Campedelli surface with $H_1=\mathbb{Z}/6\mathbb{Z}$ (in fact $\pi_1^{\text{alg}}=\mathbb{Z}/6\mathbb{Z}$). Therefore all abelian groups of order $\leq 9$ except $\mathbb{Z}/4\mathbb{Z}$ occur as the first homology groups (and algebraic fundamental groups) of numerical Campedelli surfaces. Nevertheless, the question of the existence of numerical Campedelli surfaces with a given topological type was completely open for $\mathbb{Z}/4\mathbb{Z}$. Recently Heesang Park, Dongsoo Shin and I constructed a new minimal complex surface of general type with $p_g=0$, $K^2=2$ and $H_1=\mathbb{Z}/4\mathbb{Z}$ (in fact $\pi_1^{\text{alg}}=\mathbb{Z}/4\mathbb{Z}$) using a rational blow-down surgery and $\mathbb{Q}$-Gorenstein smoothing theory, so that the existence question for numerical Campedelli surfaces with all possible algebraic fundamental groups is settled. In this talk I'd like to review how to construct such a numerical Campedelli surface.
2018-05-23 07:26:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7954436540603638, "perplexity": 261.7717524862703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865456.57/warc/CC-MAIN-20180523063435-20180523083435-00321.warc.gz"}
http://www.researchgate.net/publication/232410136_Periodic_heat_conduction_in_a_solid_homogeneous_finite_cylinder
Article

# Periodic heat conduction in a solid homogeneous finite cylinder

G. E. Cossali, Facoltà di Ingegneria, Università di Bergamo, via Marconi 6, 24044 Dalmine (BG), Italy
International Journal of Thermal Sciences, 01/2009; 48(4):722-732. DOI: 10.1016/j.ijthermalsci.2008.05.009

ABSTRACT An analytic solution of the steady periodic, not necessarily harmonic, heat conduction in a homogeneous cylinder of finite length and radius is given in terms of the Fourier transform of the fluctuating temperature field. The solutions are found for quite general boundary conditions (first, second and third kind on each surface) with the sole restriction of uniformity on the lateral surface and radial symmetry on the bases. The thermal quadrupole formalism is used to obtain a compact form of the solution that can, with some exceptions, be straightforwardly extended to multi-slab composite cylinders. The limiting cases of infinite thickness and infinite radius are also considered and solved.

Article: Analysis of Transient Heat Conduction in a Hollow Cylinder Using Duhamel Theorem
ABSTRACT: The objective of this paper is to derive the mathematical model of two-dimensional heat conduction at the inner and outer surfaces of a hollow cylinder which are subjected to a time-dependent periodic boundary condition. The substance is assumed to be homogeneous and isotropic with time-independent thermal properties. Duhamel's theorem is used to solve the problem for the periodic boundary condition, which is decomposed by Fourier series. In this paper, the effects of the temperature oscillation frequency on the boundaries, the variation of the hollow cylinder thickness, the length of the cylinder, and the thermophysical properties at ambient conditions are studied through the relevant dimensionless numbers. The obtained temperature distribution has two main characteristics: the dimensionless amplitude ($A$) and the dimensionless phase difference ($\varphi$). These results are shown with respect to Biot, Fourier, and some other important dimensionless numbers. International Journal of Thermophysics 34(2).

Article: Analysis of Transient Heat Conduction in a Hollow Sphere Using Duhamel Theorem
ABSTRACT: In this article, analytical modeling of two-dimensional heat conduction in a hollow sphere is presented. The hollow sphere is subjected to time-dependent periodic boundary conditions at the inner and outer surfaces. The Duhamel theorem is employed to solve the problem where the periodic and time-dependent terms in the boundary conditions are considered. In the analysis, the thermophysical properties of the material are assumed to be isotropic and homogeneous. Moreover, the effects of the temperature oscillation frequency, the thickness variation of the hollow sphere, and the thermophysical properties of the sphere are studied. The temperature distribution obtained here contains two characteristics: the dimensionless amplitude ($A$) and the dimensionless phase difference ($\varphi$). Moreover, the obtained results are shown with respect to Biot and Fourier numbers. Comparison between the present results and the findings from a previous study for a hollow sphere subjected to the reference harmonic state shows good agreement. International Journal of Thermophysics 33(1).

Article: Thermal quadrupole method with internal heat sources
ABSTRACT: A new method based on the thermal quadrupoles technique for heat transfer modelling in multilayered slabs with heat sources is proposed. Classical thermal quadrupoles use hyperbolic functions, and numerical problems occur according to the argument value, which depends on thermophysical and geometrical properties as well as characteristic times. We propose a new formulation based on exponential functions with negative arguments. Using this formulation in the classical equivalent impedance network allows one to compute efficiently the thermal behaviour of multilayered slabs with internal heat sources whatever the time and the thermophysical properties. This approach is applied in order to simulate heat transfer in three different multilayered materials with heat sources. These simulations show the capability of such a methodology to simulate time- and space-multiscale heat diffusion problems. International Journal of Thermal Sciences, 01/2011.

## Similar Publications

• An effective thermal conductivity model for unsaturated compacted bentonites with consideration of bimodal shape of pore size distribution. YiFeng Chen, Min Wang, Song Zhou, Ran Hu, ChuangBing Zhou
2015-02-01 08:39:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.842289388179779, "perplexity": 1030.8235568616228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422120453043.42/warc/CC-MAIN-20150124172733-00197-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/513418-program-organisation-classes/
Program organisation, classes...

I am starting a rather large project in VC++ and since I come from a VB background I would like to ask a few things before I mess it up.

1. Is it wise to put every portion of the code in separate classes, since many objects will have one instance in the program?
2. How do I declare global variables? I see that in C++ everything is organised in classes, and variables I declare in one class do not exist in another. Some functions would need to pass 20 variables from one class to another, and I thought it would be better for them to be accessible at a public level.
3. If I have a large array, like bitmap pixels, is it much slower to pass it as a whole instead of using pointers?

Thanks

Quote: 1. Is it wise to put every portion of the code in separate classes, since many objects will have one instance in the program?
I personally recommend it. If there is only going to be a single instance, consider using singletons. However, in most cases, limiting objects to a single instance is not a good idea and is almost never needed. Thus, I recommend not limiting your objects to a single instance unless it is absolutely necessary.

Quote: 2. How do I declare global variables? ...
Just define it inside of a source file, outside of any classes or functions: int _globalVar=0; In the header file declare it extern: extern int _globalVar; Now any file that includes that header file can use _globalVar. On a design note, if you need to pass 20 variables to a function, consider whether there is any cohesion between them. If there is, it may be wiser to split the code into smaller function units or classes. I haven't seen any case that actually required any globals. If you need to use a global, you can always declare it as a static data member of an object. In most cases this is not needed though.

Quote: 3. If I have a large array, like bitmap pixels, is it much slower to pass it as a whole instead of using pointers?
Yes. Within C++ you can also use references instead if you do not need the functionality of pointers, i.e., "pass by reference".

Thanks Crypter for the great tips!

Quote: Original post by jakovn: I am starting a rather large project in VC++ and since I come from a VB background I would like to ask a few things before I mess it up.
You will mess it up. Everyone does. Your objective should not be to get a good design up from the start. Instead, your objective should be to keep your code as easily modifiable as possible, so you can change it when you inevitably notice that your design is bad.

Quote: 1. Is it wise to put every portion of the code in separate classes, since many objects will have one instance in the program?
No, it is not wise. Some code does not rely on state at all (consider, for instance, a square root function) and does not belong in a class. Classes are used to create instances. Instances are used to represent state: the member variables represent the current state internally, and the member functions allow access to some aspects of the current state and transition to another state.
Before you can create a class, you must define what the state it represents should be.

Quote: 2. How do I declare global variables? ... Some functions would need to pass 20 variables from one class to another ...
There's a high probability that those 20 variables are, as Crypter mentioned, part of the state of an object. Represent that object with a class.

Quote: 3. If I have a large array, like bitmap pixels, is it much slower to pass it as a whole instead of using pointers?
Usually, yes. Note that passing by reference is faster than passing by value and safer than raw pointers.

Quote: Original post by Crypter: If there is only going to be a single instance, consider using singletons.
My goodness. If creating more than one instance will wreak unbelievable havoc upon your program and there is no possible way of working around this issue without spending a few months working on it, then make it a singleton. Otherwise, don't spend the additional effort and just keep a normal class.

Quote: Original post by jakovn: 3. If I have a large array, like bitmap pixels, is it much slower to pass it as a whole instead of using pointers? Thanks
In general, passing large classes (those with a large number of instance variables) by value is slower; they should be passed by constant reference. Arrays already "decay" to pointers when you pass them around, and as such need no extra pointer. void somefunc(int* A){} will work perfectly fine when called with: int Ar[20]; ... somefunc(Ar);
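A small sketch of the pass-by-const-reference advice from this thread (my own example, not code from the posts):

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Passing large data by const reference avoids the copy that passing by
// value would make, and the callee cannot mutate the caller's buffer.
uint64_t sumPixels(const std::vector<uint8_t>& pixels) {
    return std::accumulate(pixels.begin(), pixels.end(), uint64_t{0});
}

int main() {
    std::vector<uint8_t> pixels(1 << 20, 1);  // a 1 MB "bitmap"
    // By value this call would copy the whole megabyte; by const
    // reference it passes, in effect, a single pointer.
    return sumPixels(pixels) == (1u << 20) ? 0 : 1;
}
```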
2018-12-17 11:57:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2338571548461914, "perplexity": 1154.478021508457}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.57/warc/CC-MAIN-20181217113255-20181217135255-00035.warc.gz"}
http://rpg.stackexchange.com/questions/15021/how-are-conflicts-in-the-end-of-turn-sequence-resolved-e-g-with-bond-of-pursui/15023
# How are conflicts in the end of turn sequence resolved, e.g. with Bond of Pursuit?

How are conflicts in the End of Turn sequence resolved? The specific example we had last night was resolving whether the Avenger could shift using the Bond of Pursuit [ddi] end-of-turn effect when he had an Immobilised status, imposed by the same target, which lasted until the end of the target's turn; i.e. resolving the "... until the end of the target's turn he was immobilised, so he can't shift ..." versus the "... but at the end of the target's turn he can shift ...".

Bond of Pursuit Hit ... If the target doesn't end its next turn adjacent to you, you can shift a number of squares equal to 1 + your Dexterity modifier as a free action, and you must end that shift closer to the target.

Immobilized When a creature is immobilized, it can't move, unless it teleports or is pulled, pushed, or slid.

My understanding is that both the start-of-turn and end-of-turn sequences are under the creature's control, so e.g. the order in the start-of-turn sequence of applying ongoing damage and regeneration is up to the creature.

- All effects at the start (or end) of a creature's turn are resolved in the order that creature decides. As @Ananisapta found out, from Rules Compendium, page 199: "The creature can choose the order in which things happen at the end of its turn. For instance, if the creature has saving throws to make and is subjected to an effect that damages it at the end of its turn, the creature can choose to take the damage and then make the saving throws or the other way around." I haven't the reference at hand at the moment, but it seems to me that this rule has been present since the Player's Handbook printing.

- Rules Compendium, 199 - "The creature can choose the order in which things happen at the end of its turn. For instance, if the creature has saving throws to make and is subjected to an effect that damages it at the end of its turn, the creature can choose to take the damage and then make the saving throws or the other way around." – Ananisapta Jun 14 '12 at 12:20
Thanks, @Ananisapta! I'm going to embed this in the answer. – Erik Burigo Jun 14 '12 at 12:24
If I correctly intended the spirit of SE sites, ownership of the answer is a fluid thing. Feel free to edit my answers in order to give the questioner and future readers the best written and referenced response. It's better to improve an answer than to duplicate it. – Erik Burigo Jun 14 '12 at 12:36
It just seemed a little impolite to go and do that, at least without telling you first XD. – Ananisapta Jun 14 '12 at 12:40
@Ananisapta if you've got a relevant quote that's missing from our answers and you'd rather edit than write a new one, you are always free to do so. SE is intended to be collaboratively edited. It's no coincidence that all these posts are editable. – wax eagle Jun 14 '12 at 13:19

Here's how I see it.
1. The creature's turn ends. The Avenger is no longer immobilized.
2. The Avenger checks, now that the turn has ended, to see whether the creature is adjacent. If it's not, he can shift, as he is no longer immobilized.
The phrasing here is "ends its turn", not "at the end of its turn", so it happens (IMO) after the turn ends, after the EoT effects clear.

- And then what happens when two or three different effects on "ends its turn" trigger? – Erik Burigo Jun 14 '12 at 11:53
Then that would be the choice of the creature. But the point here is that the two effects have different qualifications: one is End of Turn (a specific period), and one is when the turn ends (more nebulous).
– wax eagle Jun 14 '12 at 12:00
We agree that the wording "ends its turn" is hazy. My interpretation, however, is that it falls in the normal End of Turn sequence of events, for the simple reason that I haven't found any other "turn end" definition in the rules. – Erik Burigo Jun 14 '12 at 12:02
One point of interest - Rules Compendium, pg 198: "A creature can't take any actions at the end of its turn. However, other creatures might have abilities that they can use at the end of that creature's turn." This suggests that upon the end of the turn of the creature doing the immobilising, it stops doing the action that caused the immobilisation, and THEN the Avenger gets to do his thing. – Ananisapta Jun 14 '12 at 12:24
Before getting into the gritty details, it isn't actually specified that the pursuit action itself happens at the end of the target's turn, just that if the target doesn't end its turn adjacent, the Avenger can use a free action to shift. That is, it doesn't explicitly say it must shift right then, just that it has been enabled to do so. So there is technically nothing stopping the Avenger just waiting a while, perhaps until the turn of the next creature, and using the free action then, thereby bypassing the entire issue. It probably isn't in the spirit of the rules, though. – Ananisapta Jun 14 '12 at 12:51
2016-07-26 00:46:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5655868053436279, "perplexity": 1288.5188488400838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824499.16/warc/CC-MAIN-20160723071024-00080-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mcqseries.com/stability-theory-1-1/
# Stability Theory – 1 – 1

1. The loop transfer function of a feedback control system is given by $G(s)H(s) = \frac{k}{s(s+2)(s^2+2s+2)}$. The number of asymptotes of its root loci is:
(a) 1 (b) 2 (c) 3 (d) 4

2. A unity feedback system has $G(s) = \frac{k}{s(s+1)(s+2)}$. In the root locus, the break-away point occurs between:
(a) s = 0 and -1 (b) s = -1 and -∞ (c) s = -1 and -2 (d) s = -2 and -∞

3. If the characteristic equation of a closed-loop system is $1 + \frac{k}{s(s+1)(s+2)} = 0$, the centroid of the asymptotes in the root locus will be:
(a) zero (b) 2 (c) -1 (d) -2

4. The intersection of root-locus branches with the imaginary axis can be determined by the use of:
(a) Nyquist criterion (b) polar plot (c) Routh's criterion (d) none of the above

5. The characteristic equation of a feedback control system is $2s^4 + s^3 + 3s^2 + 5s + 10 = 0$. The number of roots in the right half of the s-plane is:
(a) zero (b) 1 (c) 2 (d) 3
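As an illustration for question 5 (my own sketch, not part of the quiz page), the Routh array for $2s^4 + s^3 + 3s^2 + 5s + 10 = 0$ can be computed directly; each sign change in the first column corresponds to one root in the right half of the s-plane:

```cpp
#include <cstdio>

// Routh array for 2s^4 + s^3 + 3s^2 + 5s + 10 = 0.
// Sign changes in the first column = number of right-half-plane roots.
int main() {
    const double a[3] = {2, 3, 10};  // s^4 row
    const double b[3] = {1, 5, 0};   // s^3 row

    const double c1 = (b[0] * a[1] - a[0] * b[1]) / b[0];  // s^2 row: -7
    const double c2 = (b[0] * a[2] - a[0] * b[2]) / b[0];  // s^2 row: 10
    const double d1 = (c1 * b[1] - b[0] * c2) / c1;        // s^1 row: 45/7
    const double e1 = c2;                                  // s^0 row: 10

    const double col[5] = {a[0], b[0], c1, d1, e1};
    int changes = 0;
    for (int i = 0; i < 4; ++i)
        if (col[i] * col[i + 1] < 0) ++changes;

    std::printf("first column: %g %g %g %g %g -> %d RHP roots\n",
                col[0], col[1], col[2], col[3], col[4], changes);
    return 0;  // prints 2 RHP roots, i.e. option (c)
}
```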
2019-01-23 05:25:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7363071441650391, "perplexity": 856.2893109342979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583897417.81/warc/CC-MAIN-20190123044447-20190123070447-00067.warc.gz"}
https://aif.centre-mersenne.org/item/AIF_2012__62_1_393_0/
# ANNALES DE L'INSTITUT FOURIER

Local rigidity of aspherical three-manifolds
Annales de l'Institut Fourier, Volume 62 (2012) no. 1, p. 393-416

In this paper we construct, for each aspherical oriented $3$-manifold $M$, a $2$-dimensional class in the $l_1$-homology of $M$ whose norm, combined with the Gromov simplicial volume of $M$, gives a characterization of those nonzero degree maps from $M$ to $N$ which are homotopic to a covering map. As an application we characterize those degree one maps which are homotopic to a homeomorphism in terms of isometries between the bounded cohomology groups of $M$ and $N$.

DOI: https://doi.org/10.5802/aif.2708
Classification: 57M50, 51H20
Keywords: Aspherical $3$-manifolds, bounded cohomology, $l_1$-homology, non-zero degree maps, topological rigidity.

@article{AIF_2012__62_1_393_0,
  author = {Derbez, Pierre},
  title = {Local rigidity of aspherical three-manifolds},
  journal = {Annales de l'Institut Fourier},
  publisher = {Association des Annales de l'institut Fourier},
  volume = {62},
  number = {1},
  year = {2012},
  pages = {393-416},
  doi = {10.5802/aif.2708},
  zbl = {1255.57016},
  mrnumber = {2986274},
  language = {en},
  url = {https://aif.centre-mersenne.org/item/AIF_2012__62_1_393_0}
}

Derbez, Pierre. Local rigidity of aspherical three-manifolds. Annales de l'Institut Fourier, Volume 62 (2012) no. 1, pp. 393-416. doi: 10.5802/aif.2708. https://aif.centre-mersenne.org/item/AIF_2012__62_1_393_0/
2020-01-24 00:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5453997254371643, "perplexity": 2713.472220887831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00180.warc.gz"}
http://cms.math.ca/10.4153/CMB-1998-012-5
An answer to a question of Kegel on sums of rings, by A. V. Kelarev. Published: 1998-03-01. Printed: Mar 1998.

Abstract: We construct a ring $R$ which is a sum of two subrings $A$ and $B$ such that the Levitzki radical of $R$ does not contain any of the hyperannihilators of $A$ and $B$. This answers an open question asked by Kegel in 1964.

MSC Classifications: 16N40 - Nil and nilpotent radicals, sets, ideals, rings; 16N60 - Prime and semiprime rings [See also 16D60, 16U10]
2014-09-01 07:49:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7511085271835327, "perplexity": 3323.5643952786727}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917463.1/warc/CC-MAIN-20140901014517-00420-ip-10-180-136-8.ec2.internal.warc.gz"}
https://cs.stackexchange.com/tags/randomized-algorithms/hot
# Tag Info

49 For any reasonable definition of perfect, the mechanism you describe is not a perfect random number generator. Non-repeating isn't enough. The decimal number $0.101001000100001\dots$ is non-repeating but it's a terrible generator of random digits, since the answer is "always" zero, occasionally one, and never anything else. We don't actually know if every ...

29 It is cryptographically useless because an adversary can predict every single digit. It is also very time consuming.

18 The most obvious disadvantage is the unnecessary complexity of PRNG algorithms based on irrational numbers. They require many more computations per generated digit than, say, an LCG, and this complexity typically grows as you go further in the sequence. Calculating 256 bits of π at the two-quadrillionth bit took 23 days on 1000 computers (back in 2010) - a ...

17 if there are any practical applications of this algorithm in the domain of computer science besides being a theoretical improvement The application of this algorithm is trivial - you use it whenever you want to compute a median of a set of data (an array, in other words). This data may come from different domains: astronomical observations, social science, ...

17 You seem to have misunderstood what the key is. In the context of symmetric encryption, the key is a shared secret: something that is known to both the sender and receiver. For OTP, the key is the entire pad and, if two people wish to encrypt some message using OTP, they must ensure beforehand that they have a long enough pad to do that. For your proposed ...

13 Median filtering is common in reducing certain types of noise in image processing, especially salt-and-pepper noise. It works by picking out the median value in each color channel in each local neighbourhood of the image and replacing the central pixel with it. How large these neighbourhoods are can vary. Popular filter sizes (neighbourhoods) are for example 3x3 and ...

12 Vor's answer gives the standard definition. Let me try to explain the difference a bit more intuitively. Let $M$ be a bounded-error probabilistic polynomial-time algorithm for a language $L$ that answers correctly with probability at least $p\geq\frac{1}{2}+\delta$. Let $x$ be the input and $n$ the size of the input. What distinguishes an arbitrary $\...

12 Your argument appears to be perfectly valid (Fisher-Yates does indeed require $\log (n!)$ bits of randomness); the discrepancy comes in by making different assumptions about the complexity of the random number generation. You're assuming generating a random number between $0$ and $n$ takes $O(\log n)$. But, when saying that the Fisher-Yates shuffle is $O(n)...

12 The formal, unambiguous way to state this is "terminates with probability 1" or "terminates almost surely". In probability theory, "almost" means "with probability 1". For a probabilistic Turing machine, termination is defined as "terminates always" (i.e. whatever the random sequence is), not as "terminates with probability 1". This definition makes ...

12 No, it's not possible. Suppose the bias of the coin is $1/3$, and suppose you could guarantee termination. Then there would be some $n$ such that this always terminates after $n$ coin flips. Let $S$ denote the set of flip-sequences that causes your algorithm to output 0 (so that $\overline{S}$ is the set of flip-sequences that causes your algorithm to ...

11 The algorithm works, but to understand why, you need to know basic probability theory.
The idea is to prove by induction that at step $t$, the currently selected element is uniformly distributed among the first $t$ elements. This is clearly the case when $t=1$. Assume now the induction hypothesis for time $t$, and consider what happens at time $t+1$. With probability $1/...

11 Now to make a more efficient One-Time-Pad you'd use a pseudo-random number generator No, no and once again no. I'm concerned that this is what you're being taught. The absolutely fundamental concept of a one-time pad, and the notion of mathematically provable perfect secrecy, is that the pad material is truly random. And it must never ever be reused, even ...

11 Your process is a textbook example of a branching process. Starting with one $E$, we have an expected $3/2$ many $F$s, $9/4$ many $T$s, and so $9/8$ many remaining $E$s in expectation. Since $9/8 > 1$, it is not surprising that your process often failed to terminate. To gain more information, we need to know the exact distribution of the number of $E$-...

10 Computing medians is particularly important in randomized algorithms. Quite often, we have an approximation algorithm that, with probability at least $\tfrac34$, gives an answer within a factor of $1\pm\epsilon$ of the true answer $A$. Of course, in reality, we want to get an almost-correct answer with much higher probability than $\tfrac34$. So we ...

9 There is a simple $O(n)$ algorithm using the technique of reservoir sampling. Keep a currently selected element $x$ (initially, none). Go over all bits in the file in order. When seeing the $m$th zero, put it in $x$ with probability $1/m$. You can show (exercise) that the final contents of $x$ is a uniformly random zero from the file. If you are allowed ...

9 Complexity theory is a mathematical theory which aims at addressing one shortcoming of computability theory, namely, it takes into account the use of resources. While it is true that in its early days it aimed to capture the notion of "practical computation" (even particular flavors such as parallel computation, supposedly captured by NC), it has since ...

9 Suppose that the array has length $n$. Since you are making $n$ random choices of numbers from 1 to $n$, the probability of obtaining any specific permutation is of the form $A/n^n$, for some integer $A$. Therefore your algorithm could work only if $n^n/n!$ is an integer, which is only the case when $n \leq 2$. More concretely, if you shuffle the array $1,2,3$,...

9 There is some research in this area. In The Effect of Restarts on the Efficiency of Clause Learning, Jinbo Huang shows empirically that restarts improve a solver's performance over suites of both satisfiable and unsatisfiable SAT instances. The theoretical justification for the speedup is that in CDCL solvers a restart allows the search to benefit from ...

8 Following up on a comment by sdcvvc, I checked out example 11.7 in Computational Complexity: A Modern Approach by Arora and Barak (draft of 2008). There, the authors describe a "PCP algorithm" for the problem Graph Non-Isomorphism (GNI): Input: Two graphs $G_0$ and $G_1$ with $n$ vertices each. Output: Accept if and only if $G_0$ and $G_1$ are not ...

8 The two terms randomized algorithms and probabilistic algorithms are used in two different contexts. Randomized algorithms are algorithms that use randomness, in contradistinction with deterministic algorithms that do not. Probabilistic algorithms, for example probabilistic algorithms for primality testing, are algorithms that use randomness and could make ...

8 You made a crucial change to the question.
I've updated my answer to respond to the new question; I'll keep my original answer below for posterity as well. To answer the latest iteration of the question: If the problem you really want to solve is a decision problem, and you've shown that it is NP-complete, then you might be in a tough spot. Here are some ...

8 Yes, the constant is entirely arbitrary. Call an algorithm a $p$-algorithm if: When the correct answer is NO, it always answers NO. When the correct answer is YES, it answers YES with probability at least $p$. Note that the probability in the YES case is only over the algorithm's coin tosses. Given a $p$-algorithm and a constant parameter $m$, we ...

8 There are implementations of Quicksort (the partitioning algorithm, specifically) which deal badly with duplicates. No matter how much you randomize -- shuffling the input, random choice of the pivot, random choice of the pivot with sampling, ... -- if all entries are the same (or there are only constantly many distinct values), these bad implementations ...

8 (updated after many people pointed out that a random number generator is not the same thing as a single normal sequence) If you ask whether you can get a normal sequence out of $\pi$ (i.e., all numbers appear uniformly), then there are several answers on mathoverflow. For example, the answer about Distribution of the digits of Pi says: ...it is believed ...

7 The approach to quantum computing that you are referring to is something called the adiabatic model. It's actually based on continuous-time evolution of quantum systems, rather than on discretized-time models (i.e. no gates). It turns out that it is possible to approximate continuous time evolution with a discrete set of gates, and so the algorithm can be ...

7 Let's suppose that we have a polynomial-time randomized algorithm $A$ for a decision problem $\Pi$ with the property that $$\mathrm{Pr}[A \text{ is right}] > 1/2$$ for all inputs. (In particular, independent of the input length $n$.) We can use Chernoff bounds to show something stronger, namely that for any fixed $c$, there is a polynomial time, ...

7 We can! However, the PCP theorem does not say that we can solve the problem using $\log n$ bits of randomness. It says that we can verify a solution with $\log n$ bits. Recall that a standard definition of NP is the class of languages for which there is a deterministic, polynomial-time verifier: For each $x$ in the language, there is a certificate $y$ so ...

7 There are two notions of expected running time here. Given a randomized algorithm, its running time depends on the random coin tosses. The expected running time is the expectation of the running time with respect to the coin tosses. This quantity depends on the input. For example, quicksort with a random pivot has expected running time $\Theta(n\log n)$. ...

7 The question "Does it terminate?" isn't well-formed. You need to define more precisely what you mean by that question before it can be answered. Is it guaranteed to terminate on all possible execution traces? No. There exists an execution trace where it never terminates. Does it terminate with probability one? Yes. If you flip a coin until you first ...

7 $\DeclareMathOperator{\rank}{rank}$ Given a set $S$ of $n = 2k$ elements, the algorithm is aimed at finding the median of $S$ in linear time, with high probability. The idea is to find two elements $a,b \in S$ with $\rank(a) \leq k \leq \rank(b)$ and $\Delta = \rank(b) - \rank(a)$ as small as possible. Given such elements, we can find the median in time $O(n +...
Only top voted, non community-wiki answers of a minimum length are eligible
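Two of the excerpts above describe the same one-pass technique: reservoir sampling with a reservoir of size one ("when seeing the $m$th zero, put it in $x$ with probability $1/m$", proved correct by the induction sketched in the "prove by induction" excerpt). A minimal sketch in Python follows, assuming the input is a finite iterable of bits; the function name and interface are illustrative, not taken from any of the answers:

```python
import random

def random_zero_index(bits):
    """Return the index of a uniformly random zero in a bit stream.

    One pass and O(1) extra space: the m-th zero encountered replaces
    the current choice with probability 1/m, which by induction leaves
    every zero seen so far selected with probability 1/m.
    """
    chosen = None     # index of the currently selected zero, if any
    zeros_seen = 0    # m = number of zeros seen so far
    for i, bit in enumerate(bits):
        if bit == 0:
            zeros_seen += 1
            if random.randrange(zeros_seen) == 0:  # true with probability 1/m
                chosen = i
    return chosen     # None if the stream contained no zeros

# Example: indices 1, 3 and 4 are each returned with probability 1/3.
print(random_zero_index([1, 0, 1, 0, 0, 1]))
```

The replace-with-probability-$1/m$ step keeps the selection uniform over every prefix of the stream, which is why the technique works without knowing the number of zeros in advance.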
2021-04-13 16:27:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352489709854126, "perplexity": 614.7635877139035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038073437.35/warc/CC-MAIN-20210413152520-20210413182520-00325.warc.gz"}
https://physics.stackexchange.com/questions/399846/is-light-intangible-to-other-light-and-how-does-all-the-intersecting-light-exis
# Is Light intangible to other Light? And how does all the intersecting light exist in space?

I was thinking of how light actually gets into my eyes, and thought about my light bulb shining rays onto every part of my bedroom wall and reflecting them towards me. But then I realized I could be at many different locations and still see all that light, so it must either be bouncing off in all directions, or the light that shined elsewhere has bounced around the room to end up reflecting off the wall in all other directions. So I can view the entire wall from bazillions of atom-sized offsets in my room. But let's just concentrate on two atom positions directly on my index fingers: the entire wall is shining directly into those two atoms, so light is passing through other light that is heading in another direction. Just with this simple scenario of the wall shining towards two positions, I imagine two pyramids of light colliding with each other, and it is hard to picture how that works, let alone the bazillions more intersections: the constant flow of light that reveals the slightest dust in the air on sunny days, coming from all places in your room, and the light that can travel out into space, like starlight reaching Earth from light years away, concentrating into smaller and smaller amounts of space. It really breaks my mind how light even works. So I guess my question is, pretty much: how does light travel?

• Nice description! You should take a look at water waves and see how they easily pass through one another without getting messed up. Of course, when the waves are really intense then they can disturb each other. (And the same thing can happen with light when the intensity is extreme, especially when the frequency is also very high.) – PM 2Ring Apr 14 '18 at 22:18
• A very thoughtful post. You have good observation and reasoning skills. – garyp Apr 15 '18 at 2:34

As the other answers state, both in the classical mathematical modeling of the behavior of light and in the quantum mechanical picture, where classical light is composed of a superposition of photons, there is no interaction of light with light, *to first order*. The italics emphasize that in the underlying quantum mechanical frame there does exist photon-photon interaction/scattering at higher orders, with a very small probability at visible frequencies. The diagram (a Feynman diagram for photon-photon scattering, not reproduced here) is shorthand for an integral which allows one to calculate the probability of two photons (two incoming squiggles) scattering off each other (two outgoing squiggles) at the energies of visible light. Its four electromagnetic vertices make the contribution so small that it can be ignored for visible-light frequencies. The diagrams go on to higher orders in a converging series expansion with diminishing contributions, because each vertex gives a multiplicative factor of $(1/137)^{1/2}$ to the value of the diagram; with four vertices the amplitude is already suppressed by $(1/137)^2 \approx 5 \times 10^{-5}$, and the scattering probability by its square. Higher orders for visible light mean diagrams with more vertices and even smaller contributions. The electromagnetic spectrum has higher-energy photons though, up to gamma rays, and the probability of photons scattering goes up with energy, as students of physics will find when they reach a quantum mechanics course. There are even proposals for gamma-gamma colliders.
• I'm really glad someone brought this up. Thanks! I would mention though that "To first order" is kind of an incomplete phrase. To first order in what? The phrase "To n$^\text{th}$ order" means nothing without stating what thing we are treating as a small parameter :-) – DanielSank Apr 15 '18 at 3:18
• @DanielSank clarified – anna v Apr 15 '18 at 3:55
• I cannot express my gratitude enough for all the in-depth information everyone has given me (so quickly, too). I have a lot to learn from these concepts you've all exposed here. I cannot fathom how smart you people truly are. Thank You! – Puddle Apr 15 '18 at 17:21

Light is an electromagnetic wave, and Maxwell's theory of electromagnetism explains very well how electromagnetic waves propagate in space through the mutual coupling of time-varying electric and magnetic fields described by Maxwell's equations. Thus it is well understood how light travels in time and space. Furthermore, Maxwell's equations lead to linear wave equations for the electromagnetic fields. This means that different solutions of them, i.e. different light waves, can be superimposed in space without influencing each other. This might explain what you probably meant by "Is light intangible to other light?" Let's neglect a specific phenomenon known as interference for now; it matters in special cases, but not for light from a normal light bulb or starlight. Let's also ignore gravitational effects.

"Is light intangible to other light?" Yes. Two, or more, photons can occupy the same place at the same time. Many photons, or electromagnetic (light) waves, can pass right through each other and not affect each other, so light can intersect with no problem. Like PM 2Ring commented, water waves do the same thing. Sound waves do too. If you are at a party and everyone is talking, the sound waves from their voices overlap while traveling to your, and everyone's, ears, but they do not bounce off or destroy each other. It may seem like they really mess each other up, but that's because your ears cannot focus like your eyes. If you had a really high-quality directional microphone you could point it at someone in a crowd and hear them much better. (Also, there are some buildings with rooms, or perhaps ceilings, shaped like an ellipse or ellipsoid, such that if you stand at one focal point you can hear someone talking softly at the other focal point, even if the room is full of people talking.)

So back to light. The light from your lamp hits all the points on the walls, and from each point some reflects off in all directions and passes through light from other points on the walls to your eyes and fingers or whatever. It travels in straight lines and does not get deflected by other light it passes through.
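The linearity argument in the two answers above can be made explicit with a short worked equation. This is a sketch in notation of my own choosing, not taken from either answer: if two fields each satisfy the vacuum wave equation that follows from Maxwell's equations, then so does any linear combination of them,

$$\nabla^2 \mathbf{E}_i - \frac{1}{c^2}\frac{\partial^2 \mathbf{E}_i}{\partial t^2} = 0 \quad (i = 1, 2) \qquad\Longrightarrow\qquad \nabla^2\left(a\mathbf{E}_1 + b\mathbf{E}_2\right) - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\left(a\mathbf{E}_1 + b\mathbf{E}_2\right) = 0.$$

Two crossing beams are exactly such a sum, so each propagates as if the other were absent; the tiny photon-photon scattering described in the first answer is the quantum correction to this classical statement, and it only becomes relevant at extreme intensities or energies.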
2019-10-21 01:06:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6187933683395386, "perplexity": 551.7884472271876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987750110.78/warc/CC-MAIN-20191020233245-20191021020745-00311.warc.gz"}
https://en.wikipedia.org/wiki/Debts
# Debt

For other uses, see Debt (disambiguation). Not to be confused with Debit.

Payday loan businesses lend money to customers, who then owe a debt to the payday loan company.

Debt refers to something that is owed or due, either physically or metaphorically. In the physical sense, the parties to debt are lenders (those who give) and borrowers (those who receive). In the metaphorical sense, debt refers to a moral obligation not based on physical value (e.g. a debt of gratitude). Debt was the first form of commerce (the barter system) documented in human history and existed about 2,900 years prior to the invention of coinage. Today there are many examples of lenders of monetary debt, including sovereign nations, banks, credit card companies, payday loan providers, and individuals, who in many instances subject their borrowers to contractual terms that designate the amount and timing of repayments of the debt and that frequently include the payment of principal and interest.[1]

## History

Main article: History of money

Anthropologist David Graeber, in his award-winning book[2][3] Debt: The First 5000 Years, writes that trade started with debt (the promise to pay later for goods already handed over) around 3500 BC, with coinage not invented until about 2,900 years later, around 600 BC. Graeber further points out that:

• The word freedom originally meant freedom from debt.
• Traditional democracy and constitutional government were created largely in reaction to rebellions over debt.
• The great traditions of religion and philosophy began by rejecting the idea that debt is the basis of morality.
• The concept of investment capital originated in medieval Chinese Buddhism.
• Free market populism originated in medieval Islam with the creation of extensive debt systems based solely on trust.[4][5][6]

### Etymology

The word debt is based on the Latin word debitum (meaning: something owed, thing owed), the past participle of debere (to owe). In Old French this Latin word became dete, and in Middle English dette, with its modern-day French and English spelling being debt.

## Terms

### Interest

Interest is the fee charged by the creditor to the debtor. Interest is generally calculated as a percentage of the principal sum per year, which percentage is known as an interest rate, and is generally paid periodically at intervals, such as monthly or semi-annually. Many conventions on how interest is calculated exist – see day count convention for some – while a standard convention is the annual percentage rate (APR), widely used and required by regulation in the United States and United Kingdom, though there are different forms of APR.

Interest rates may be fixed or floating. In floating-rate structures, the rate of interest that the borrower pays during each time period is tied to a benchmark such as LIBOR or, in the case of inflation-indexed bonds, inflation.

For some loans, the amount actually loaned to the debtor is less than the principal sum to be repaid. This may be because upfront fees or points are charged, or because the loan has been structured to be sharia-compliant. The additional principal due at the end of the term has the same economic effect as a higher interest rate. This is sometimes referred to as a banker's dozen, a play on "baker's dozen" – owe twelve (a dozen), receive a loan of eleven (a banker's dozen).
Note that the effective interest rate is not equal to the discount: if one borrows \$10 and must repay \$11, then this is (\$11 – \$10)/\$10 = 10 percent interest; however, if one borrows \$9 and must repay \$10, then this is (\$10 – \$9)/\$9 = 11 1/9 percent interest.[7] (A worked version of these formulas appears after the Governments subsection below.)

### Repayment

There are three main ways repayment may be structured: the entire principal balance may be due at the maturity of the loan; the entire principal balance may be amortized over the term of the loan; or the loan may partially amortize during its term, with the remaining principal due as a "balloon payment" at maturity. Amortization structures are common in mortgages and credit cards.

### Collateral and recourse

A debt obligation is considered secured if creditors have recourse to specific collateral. Collateral may include claims on tax receipts (in the case of a government), specific assets (in the case of a company) or a home (in the case of a consumer). Unsecured debt comprises financial obligations for which creditors do not have recourse to the assets of the borrower to satisfy their claims.

## Types of borrowers

### Individuals

Common types of debt owed by individuals and households include mortgage loans, car loans, and credit card debt. For individuals, debt is a means of using anticipated income and future purchasing power in the present before it has actually been earned. Commonly, people in industrialized nations use consumer debt to purchase houses, cars and other things too expensive to buy with cash on hand.

People are more likely to spend more and get into debt when they use credit cards rather than cash for buying products and services.[8][9][10][11][12] This is primarily because of the transparency effect and the consumer's "pain of paying."[10][13] The transparency effect refers to the fact that the further you are from cash (as with a credit card or another form of payment), the less transparent the payment is and the less you remember how much you spent.[13] The less transparent, or further from cash, the form of payment is, the less an individual feels the "pain of paying" and thus the more they are likely to spend.[10] Furthermore, the differing physical appearance/form that credit cards have from cash may cause them to be viewed as "monopoly" money versus real money, luring individuals to spend more than they would if they only had cash available.[11][14]

Besides these more formal debts, private individuals also lend informally to other people, mostly relatives or friends. One reason for such informal debts is that many people, in particular those who are poor, have no access to affordable credit. Such debts can cause problems when they are not paid back according to the expectations of the lending household. In 2011, 8 percent of people in the European Union reported their households had been in arrears, that is, unable to pay as scheduled "payments related to informal loans from friends or relatives not living in your household".[15]

### Businesses

A company may use various kinds of debt to finance its operations as a part of its overall corporate finance strategy. A term loan is the simplest form of corporate debt. It consists of an agreement to lend a fixed amount of money, called the principal sum or principal, for a fixed period of time, with this amount to be repaid by a certain date. In commercial loans, interest, calculated as a percentage of the principal sum per year, will also have to be paid by that date, or may be paid periodically in the interval, such as annually or monthly.
Such loans are also colloquially called "bullet loans", particularly if there is only a single payment at the end – the "bullet" – without a "stream" of interest payments during the life of the loan.

A syndicated loan is a loan that is granted to companies that wish to borrow more money than any single lender is prepared to risk in a single loan. A syndicated loan is provided by a group of lenders and is structured, arranged, and administered by one or several commercial banks or investment banks known as arrangers. Loan syndication is a risk management tool that allows the lead banks underwriting the debt to reduce their risk and free up lending capacity.

A company may also issue bonds, which are debt securities. Bonds have a fixed lifetime, usually a number of years; long-term bonds, lasting over 30 years, are less common. At the end of the bond's life the money should be repaid in full. Interest may be added to the end payment, or can be paid in regular installments (known as coupons) during the life of the bond.

A letter of credit or LC can also be the source of payment for a transaction, meaning that redeeming the letter of credit will pay an exporter. Letters of credit are used primarily in international trade transactions of significant value, for deals between a supplier in one country and a customer in another. They are also used in the land development process to ensure that approved public facilities (streets, sidewalks, stormwater ponds, etc.) will be built. The parties to a letter of credit are usually a beneficiary who is to receive the money, the issuing bank of whom the applicant is a client, and the advising bank of whom the beneficiary is a client. Almost all letters of credit are irrevocable, i.e., they cannot be amended or canceled without the prior agreement of the beneficiary, the issuing bank and the confirming bank, if any. In executing a transaction, letters of credit incorporate functions common to giros and traveler's cheques. Typically, the documents a beneficiary has to present in order to receive payment include a commercial invoice, a bill of lading, and a document proving the shipment was insured against loss or damage in transit. However, the list and form of documents is open to imagination and negotiation, and might contain requirements to present documents issued by a neutral third party evidencing the quality of the goods shipped, or their place of origin.

Companies also use debt in many ways to leverage the investment made in their assets, "leveraging" the return on their equity. This leverage, the proportion of debt to equity, is considered important in determining the riskiness of an investment; the more debt per equity, the riskier.

### Governments

Governments issue debt to pay for ongoing expenses as well as major capital projects. Government debt may be issued by sovereign states as well as by local governments, sometimes known as municipalities. The overall level of indebtedness of a government is typically shown as a ratio of debt to GDP. This ratio helps to assess the speed of changes in government indebtedness and the size of the debt due.
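The "banker's dozen" example in the Interest section and the amortizing structures described under Repayment both come down to two short formulas. The following is a worked sketch in notation of my own choosing, not taken from the article: a loan of principal $P$ repaid in a single amount $R$ has effective interest rate $r$, and a fully amortizing loan at periodic interest rate $i$ over $n$ periods has constant payment $A$:

$$r = \frac{R - P}{P} \quad\left(\text{e.g. } \frac{10 - 9}{9} = 11\tfrac{1}{9}\ \text{percent}\right), \qquad A = P\,\frac{i\,(1+i)^n}{(1+i)^n - 1}.$$

For instance, a loan of \$10,000 repaid over 12 monthly periods at 1 percent per month gives $A = 10000 \times \frac{0.01 \times 1.01^{12}}{1.01^{12} - 1} \approx \$888.49$ per month, which is the arithmetic behind the amortization schedules used for mortgages and credit cards.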
Debt investors assess the risk of default prior to making a loan, for example through credit scores and corporate and sovereign ratings. Individuals and companies may go into bankruptcy if they are unable to satisfy their debts.

Traditions in some cultures demand that debts be forgiven on a regular (often annual) basis, in order to prevent systemic inequities between groups in society, or anyone becoming a specialist in holding debt and coercing repayment. An example is the Biblical Jubilee year, described in the Book of Leviticus. Under English law, when the creditor is deceived into relinquishing the debt, this is a crime under the Theft Act 1978.

International Third World debt has reached the scale that many economists are convinced that debt cancellation is the only way to restore global equity in relations with the developing nations.[citation needed]

## Debt markets

### Market interest rates

Main article: Bond valuation

### Loans versus bonds

Bonds are debt securities, tradeable on a bond market. A country's regulatory structure determines what qualifies as a security. For example, in North America, each security is uniquely identified by a CUSIP for trading and settlement purposes. In contrast, loans are not securities and do not have CUSIPs (or the equivalent). Loans may be sold or acquired in certain circumstances, as when a bank syndicates a loan.

Loans can be turned into securities through the securitization process. In a securitization, a company sells a pool of assets to a securitization trust, and the securitization trust finances its purchase of the assets by selling securities to the market. For example, a trust may own a pool of home mortgages, and be financed by residential mortgage-backed securities. In this case, the asset-backed trust is a debt issuer of residential mortgage-backed securities.

### Role of central banks

Central banks, such as the U.S. Federal Reserve System, play a key role in the debt markets. Debt is normally denominated in a particular currency, and so changes in the valuation of that currency can change the effective size of the debt. This can happen due to inflation or deflation, so it can happen even though the borrower and the lender are using the same currency.

### Role of rating agencies

Specific bond debts owed by both governments and private corporations are rated by rating agencies, such as Moody's, Standard & Poor's, Fitch Ratings, and A. M. Best. The government or company itself will also be given its own separate rating. These agencies assess the ability of the debtor to honor its obligations and accordingly give it a credit rating. Moody's uses the letters Aaa, Aa, A, Baa, Ba, B, Caa, Ca, C, where the ratings Aa through Caa are qualified by the numbers 1–3. S&P and other rating agencies have slightly different systems using capital letters and +/− qualifiers. A change in ratings can strongly affect a company, since its cost of refinancing depends on its creditworthiness. Bonds below Baa/BBB (Moody's/S&P) are considered junk or high-risk bonds. Their high risk of default (approximately 1.6 percent for Ba) is compensated by higher interest payments.

Bad debt is a loan that cannot (partially or fully) be repaid by the debtor. The debtor is said to default on their debt. These types of debt are frequently repackaged and sold below face value. Buying junk bonds is seen as a risky but potentially profitable investment.
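The investment-grade cutoff described above amounts to a small lookup; a minimal Python sketch (the scale and the Baa cutoff come from the text, the function name is ours):

    MOODYS_SCALE = ["Aaa", "Aa", "A", "Baa", "Ba", "B", "Caa", "Ca", "C"]

    def is_investment_grade(rating):
        """Ratings of Baa and above are investment grade; anything
        below Baa is considered junk/high-risk."""
        base = rating.rstrip("123")   # drop the 1-3 qualifier, if any
        return MOODYS_SCALE.index(base) <= MOODYS_SCALE.index("Baa")

    print(is_investment_grade("Baa2"))   # True
    print(is_investment_grade("Ba1"))    # False -> junk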
## Criticisms

Main article: Criticism of debt

Some argue against debt as an instrument and institution, on a personal, family, social, corporate and governmental level. Islam forbids lending with interest even today. In hard times, the cost of servicing debt can grow beyond the debtor's ability to pay, due to either external events (income loss) or internal difficulties (poor management of resources).

Debt will increase through time if it is not repaid faster than it grows through interest. This effect may be termed usury, while the term "usury" in other contexts refers only to an excessive rate of interest, in excess of a reasonable profit for the risk accepted.

In international legal thought, odious debt is debt that is incurred by a regime for purposes that do not serve the interest of the state. Such debts are thus considered by this doctrine to be personal debts of the regime that incurred them and not debts of the state.

Excessive debt accumulation has been blamed for exacerbating economic problems. For example, before the Great Depression, the debt-to-GDP ratio was very high. Economic agents were heavily indebted. This excess of debt, equivalent to excessive expectations on future returns, accompanied asset bubbles on the stock markets. When expectations corrected, deflation and a credit crunch followed. Deflation effectively made debt more expensive and, as Fisher explained, this reinforced deflation again, because, in order to reduce their debt level, economic agents reduced their consumption and investment. The reduction in demand reduced business activity and caused further unemployment. In a more direct sense, more bankruptcies also occurred, due both to increased debt cost caused by deflation and to the reduced demand.

At the household level, debts can also have detrimental effects, in particular when households make spending decisions assuming their income will increase or remain stable in the years to come. When households take on credit based on this assumption, life events can easily turn indebtedness into over-indebtedness. Such life events include unexpected unemployment, relationship break-up, leaving the parental home, business failure, illness, or home repairs. Over-indebtedness has severe social consequences, such as financial hardship, poor physical and mental health,[16] family stress, stigma, difficulty obtaining employment, exclusion from basic financial services (European Commission, 2009), work accidents and industrial disease, a strain on social relations (Carpentier and Van den Bosch, 2008), absenteeism at work and lack of organisational commitment (Kim et al., 2003), feeling of insecurity, and relational tensions.[17]

## Levels and flows

Main article: Debt levels and flows

Global debt underwriting grew 4.3 percent year-over-year to US\$5.19 trillion during 2004. It is expected to rise in the coming years if the spending habits of millions of people worldwide continue as they are.

## References

1. ^ "Debt Definition". Investopedia. Retrieved 16 May 2012.
2. ^ http://www.mhpbooks.com/books/debt/
3. ^ http://www.spectator.co.uk/2012/11/books-of-the-year-12/
4. ^ Graeber, David (September 8, 2011). "9 Things You Didn't Know About The History Of Debt". The Huffington Post. Retrieved April 23, 2016.
5. ^ Graeber, David (July 2011). Debt: The First 5000 Years. Melville House Publishing. ISBN 9781933633862.
6. ^ "What is Debt? – An Interview with Economic Anthropologist David Graeber". Naked Capitalism. Retrieved 23 April 2016.
7. ^ Formally, a discount of $d\%$ results in effective interest of $d/(1-d)\%$.
8. ^ Chatterjee, P., & Rose, R. L. (2012). Do payment mechanisms change the way consumers perceive products? Journal of Consumer Research, 38(6), 1129–1139.
9. ^ Pettit, N. C., & Sivanathan, N. (2011). The plastic trap. Social Psychological and Personality Science, 2(2), 146–153.
10. ^ a b c Prelec, D., & Loewenstein, G. (1998). The red and the black: Mental accounting of savings and debt. Marketing Science, 17(1), 4–28.
11. ^ a b Raghubir, P., & Srivastava, J. (2008). Monopoly money: The effect of payment coupling and form on spending behavior. Journal of Experimental Psychology: Applied, 14(3), 213–225.
12. ^ Soman, D. (2003). The effect of payment transparency on consumption: Quasi experiments from the field. Marketing Letters, 14, 173–183.
13. ^ a b Soman, D. (2003). The effect of payment transparency on consumption: Quasi experiments from the field. Marketing Letters, 14, 173–183.
14. ^ Chatterjee, P., & Rose, R. L. (2012). Do payment mechanisms change the way consumers perceive products? Journal of Consumer Research, 38(6), 1129–1139.
15. ^ "Household over-indebtedness in the EU: The role of informal debts" (PDF). eurofound.europa.eu. Publications Office of the European Union, Luxembourg. 2013. Retrieved 19 April 2016.
16. ^ Fitch; et al. (2011). "The relationship between debt and mental health: a systematic review". Mental Health Review Journal. 16 (4): 153–166. doi:10.1108/13619321111202313.
17. ^ Dubois, Hans; Anderson, Robert (2010). "Managing household debts: Social service provision in the EU. Working paper. Dublin: European Foundation for the Improvement of Living and Working Conditions" (PDF). European Foundation for the Improvement of Living and Working Conditions. Retrieved 20 February 2015.
2016-08-27 10:14:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32556551694869995, "perplexity": 5365.858610424399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298977.21/warc/CC-MAIN-20160823195818-00226-ip-10-153-172-175.ec2.internal.warc.gz"}
https://movies.stackexchange.com/questions/88821/does-thanos-intend-to-exert-his-plan-on-each-planet-individually-or-the-entire-u/88833
# Does Thanos intend to exert his plan on each planet individually or the entire universe as a whole?

According to Thanos's plan in Avengers: Infinity War, which of the two statements is true:

Thanos intends to kill half the population of the whole universe, randomly.

or

Thanos intends to kill half the population of every planet, randomly.

Population varies from planet to planet, so the number of people killed on each planet will differ between the two statements.

• This would be a good exercise on the math exchange. My belief is there wouldn't be near as much deviation as you're thinking. A planet with a billion people, for example, probably would end up in the 400-600 million range as opposed to an even 500 million. – sirjonsnow May 9 '18 at 12:27
• @sirjonsnow Yes, but not even as large a range as 400-600 million. Take Earth (~7 bill. pop.) as an example. If Thanos kills half the universe randomly, about 3.5 billion Earthlings die, with a standard deviation of sqrt(3.5 bill.)=59,000, which is less than a thousandth of a percent of 7 billion. So there is a 99.999% chance that between 49.9% and 50.1% of Earthlings are destroyed. With the inaccuracies of the global census, we'd never know the difference between the two hypotheticals. – Sam May 9 '18 at 13:29
• An important thing to note is that he explicitly tells Tony Stark that because he's impressed with him, he'll leave half of Earth's population alive. The way he delivers the line leaves some ambiguity as to whether that was always what was going to happen, or if he's promising Tony he'll ensure that's the case - implying it's not NECESSARILY the case elsewhere. Just to muddy the waters some more! – CGriffin May 9 '18 at 18:40
• I don't think Thanos would understand the difference. If he understood statistics at all he would have noticed it only took us 40 years to double the population to 7 billion, so he's not going to be able to, as he says, 'rest'. – Julian May 9 '18 at 19:18
• @CGriffin I don't think that's what he said. He did say that he was impressed with Stark and that he hopes that the people will remember him. He also said he'll kill half the population. I don't think these two statements were connected in the film. – Cubic May 13 '18 at 14:15

Based upon how Thanos explains it, planet by planet is most plausible

Thanos explicitly explains his motivations in the film. He's trying to spare other populated worlds from the fate that befell his, which is to say, collapse due to overpopulation and competition for/exhaustion of finite resources. Prior to acquiring the Infinity Stones, his method for accomplishing that goal was to visit every planet and then use conventional measures to eliminate half of its population.

Whether 50% is some ideal number for reducing resource contention doesn't matter; what matters is that Thanos seems to have considered it to be. He could have just as easily killed 20% of the population, or 70%, or anything else, but he settled on 50% as the "correct" proportion for his goal.

So the 50% per planet seems significant, and although killing 50% of all the people on each planet will produce the same body count as killing 50% of all people in the universe, there are statistical differences between the approaches. A demonstration probably works best, so here's a Thanos Murder Simulator (or an alternate blue/orange version).
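As a rough stand-in for the linked simulator, here is a minimal Python sketch of the two culling rules; the planet sizes and the random seed are arbitrary toy choices of ours, not from the original answer:

    import random

    random.seed(1)
    planets = [random.randint(8, 40) for _ in range(12)]  # small toy populations

    # Rule A: each person in the universe independently has a 50% chance to die.
    universe_cull = [sum(random.random() < 0.5 for _ in range(p)) for p in planets]

    # Rule B: exactly half of each planet dies (rounded down).
    per_planet_cull = [p // 2 for p in planets]

    for p, a, b in zip(planets, universe_cull, per_planet_cull):
        print(f"pop {p:2d}: universe-wide kills {a:2d} ({a/p:4.0%}), per-planet kills {b:2d}")

With populations this small, Rule A produces the uneven per-planet kill rates the answer describes, while Rule B is uniform by construction.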
Notably, killing 50% of the people in the entire universe produces a result like this (each box is a person; green is alive, red is dead, groupings of boxes are planets):

Notably, one unlucky planet has only 6 survivors (nearly a 75% kill rate). A couple of others have lost only ~25% of their populations. If you need to kill 50% to curtail overpopulation on a planet, then killing everyone in the universe at random is a bad way to do it. You'll get some planets emerging relatively unscathed, and others catastrophically depopulated (I mean, even more catastrophically than 50% would be). You get a much more uniform result by killing 50% of the population on each planet:

Considering Thanos's stated goals/motivations and the way he went about culling planets before completing the Infinity Gauntlet, the latter scenario seems like what he would go for.

Other characters say 'half the universe'

However, these other characters are not Thanos, and may be paraphrasing for expediency. The only character who knows what Thanos intends is Thanos, and he gets to deliver a fairly lengthy explanation in the film. His explanation leans more towards randomly killing 50% of the population of each planet than it does towards 50% of everyone in the universe. The body count is the same either way, but to Thanos it's the distribution of the bodies that's most important.

• Comments are not for extended discussion; this conversation has been moved to chat. – Napoleon Wilson May 10 '18 at 18:26
• I think it's also worth noting that 50% planet-by-planet leads to 50% of the universe, but 50% of the universe doesn't lead to 50% planet-by-planet -- so you could perfectly accurately say he's going to destroy 50% of the population of the universe either way. – Fund Monica's Lawsuit May 10 '18 at 20:14
• This answer is pretty but mathematically misleading. The variance of 50% probability applied to 7 billion people is tiny compared to the variance when applied to 25 people. – RBarryYoung May 11 '18 at 14:25
• @RBarryYoung - That depends on a number of assumptions. The first is that the infinity stones create 'perfect' randomness, which is actually very hard to do. The second is that there aren't many planets (or planetoids, or space stations, or colony ships) with low carrying capacities. The third is that Thanos, with the power to wish for either outcome equally easily, would choose to wish for the outcome that admits the greater capacity for variance. – aroth May 11 '18 at 23:01
• @aroth: Can you switch to a different color scheme for your (lovely) Thanos Murder Simulator? Red-green color blindness is actually very frequent (about 8 percent of males are affected), and for these, distinguishing the survivors from the victims is more difficult than necessary. Other color combinations work much better, e.g. blue and orange. As a quick note on creating "perfect" randomness: C'mon, these stones twist reality and time and things. There's surely also some built-in mechanism that harnesses quantum stuff to create real random numbers. :) – Schmuddi May 12 '18 at 7:37

Using a binomial distribution with n = 7.5 billion and probability 0.50, the chance of Earth's death count from a universal, unbiased killing falling outside 49.99%–50.01% of the total population is about 3*10^-67, which is less likely than flipping 220 heads in a row with a fair coin.
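A tail probability like this can be checked with the complementary error function from the Python standard library (the 17.32-sigma figure is derived in the update below):

    import math

    n, p = 7.5e9, 0.5
    sigma = math.sqrt(n * p * (1 - p))    # ~43,301 people
    deviation = 0.0001 * n                # 0.01% of the population
    z = deviation / sigma                 # ~17.32 sigma
    # Two-sided normal tail: P(|X - mean| > z*sigma) = erfc(z / sqrt(2))
    print(z, math.erfc(z / math.sqrt(2)))  # ~17.32, ~3e-67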
Updated: I was first using a Poisson distribution, which is a limiting case of a binomial distribution (old claim: the chance of >50.01% (or <49.99%) of Earth's population being killed universally randomly is 3*10^-32! This is less likely than flipping 100 heads in a row with a fair coin). I found this site now, which can handle the very small numbers involved in the use of the error function.

So, it doesn't really matter which interpretation is the true case: planets with realistic populations, even with a very accurate census, won't notice the difference.

Full disclosure: I haven't seen the movie, nor have I ever read a comic book or know who any of these characters are. But I'm familiar with statistics and probabilities and have a habit of browsing the network Hot Questions, so...

Explanation: The binomial distribution is useful to see the results here. It gives the probability of observing integer random outcomes (like the number of people killed) around any expected average (like half of a population). My use depends on the following assumptions.

• The gauntlet will kill exactly half of the universe's (sentient?) lives (rounded if the total is odd).
• The gauntlet will take absolutely nothing into account -- those chosen to die are chosen randomly to meet this quota. This means the gauntlet does not travel planet-to-planet, or choose those to die one at a time, or kill the oldest, the smallest, the smelliest, etc.
• The population of any planet is a small fraction of the universe's population. If Earth comprised all of the universe, obviously exactly half of us would die. But if 10 googolplex people exist, and thus 5 googolplex must die, each of our deaths makes a negligible impact on the quota and our chances of living are unchanged -- 50%.
• The Marvel universe has no more than a trillion trillion trillion times as many galaxies as our own universe, and each of its stars has no more than a trillion populated planets. These very large (and very small) numbers can be hard to wrap your head around, but no scientific estimate of the number of galaxies/stars/planets in this universe comes close to legitimizing the chances of the asymmetric planet-wise killings proposed in the comments/answers.

The Math: From the binomial wiki page linked, the standard deviation is $\sqrt{np(1-p)}$, with p=0.50. For the Earth (n=7.5 billion), this is 43,301 people. In terms of sigma, 0.01% of our population (7.5 billion) is 17.32 sigma. As a rule of thumb for statisticians, there is a 99.7% chance of drawing a number from within three sigma of the mean. Likewise, there is a 99.999999...% chance of drawing from within 17.32 sigma of the mean. Put another way, with the error function tool linked above, the chance of drawing outside of 17.32 sigma from the mean is less than 1 in 10^66! The range of likely death counts from this event is much smaller than Earth's census would even notice.

Moreover, as others have explained, the "kill randomly universally" method also avoids the complication of identifying to which planet (or ship, or rock, or floating naked in space...) each person belongs.

• @ArturoTorresSánchez I've actually been meaning to ask that exact thing as a question, but there are so many questions about this already and I don't know how answerable that is. – F1Krazy May 9 '18 at 16:01
• This is a binomial distribution: mean = $pn$, std = $\sqrt{p(1-p)n}$.
A Poisson distribution is an approximation of the binomial for small p, in which case the square root of the mean is a reasonable approximation of the std (note that with small $p$, the $1-p$ factor can be ignored). Since $n$ is so large, the binomial distribution can be approximated by the normal distribution. Here, $p$ is not small, and you're off by a factor of $\sqrt 2$, although it doesn't have much effect on the ultimate answer. – Acccumulation May 9 '18 at 16:11
• Are you assuming that each person dies with a 50% probability or that out of the total population of the universe exactly 50% will die? Let's assume that the galaxy has 7 billion planets of 3.5 billion people. I think what you are saying is that the mean will be ~1*10^19 and the standard deviation will be 3.5*10^9. That means from your link that by the time you get to the last planet there is a ~30% chance that your current kill count is more than 3.5 billion people from the average, which means that to ensure that you kill exactly 50% you will need to kill all or none of them... – Chris May 10 '18 at 11:48
• @Chris No, I am assuming that exactly half of all people are killed. The caveat I state is that if each planet's population makes up a negligible fraction of the total, this is equivalent to each person having a 50% chance of dying. If you are familiar with probability, this is like drawing colored balls from an urn without replacement; but if there are trillions of blue balls and trillions of red balls, drawing a red ball negligibly changes the odds of the next ball being red. I'm also assuming that this gauntlet is killing everyone all at once, so there is no quota later to be met as you say. – Sam May 10 '18 at 13:24
• I am talking about it as an ordered thing purely as a way of looking at the problem. Your assumption that it is like each person having a 50% chance of dying is wrong though. For the first people it is, but towards the end, once you have drawn all but say 10 of the balls, you have a reasonable chance that those balls might be all the same colour. This is what I was getting at. The last 3.5 billion balls out of 10^19 have a reasonably high chance of all being the same colour, I think (since 3.5 billion is the standard deviation). – Chris May 10 '18 at 13:30

## It was mentioned that Thanos wants to kill half the population of the universe.

Gamora: He won't stop. Until he destroys half the universe. Everything you know. Everything you love. It will all be gone.

However, before acquiring the Infinity Stones, he actually went to each planet and killed half the population there. This part might have created the confusion. From the transcript:

Gamora: No, no, we were happy on my home planet.

Thanos: Going to bed hungry, scrounging for scraps? Your planet was on the brink of collapse. I was the one who stopped that. You know what's happened since then? The children born have known nothing but full bellies and clear skies. It's a paradise.

Gamora: Because you murdered half the planet!

and

Thor: There's six stones out there. Thanos already has the Power Stone because he stole it last week, when he decimated Xandar. He stole the Space Stone from me, when he destroyed my ship and slaughtered half my people. The Time and Mind Stones are safe on Earth. They're with the Avengers.

Also, different sources like this and this mention that he wants to kill half the universe. Killing half the population of the universe and half the population of each planet seem similar if we do the math.
If he kills half the population of each planet, he is still killing half the population of the universe.

• Killing half the population of the universe at random would still leave a lot of overcrowded planets. I think we should assume that he wanted to kill half the population of each planet randomly, thus killing half the population of the universe. Thanos is a smart guy, he would have thought this through. – Yates May 9 '18 at 10:14
• This is exactly what he is doing, as mentioned in the transcripts I linked. – A J 9 May 9 '18 at 10:24
• @Thomas Yates But a lot of people don't even live on planets (take "Knowhere" for example). – Muschkopp May 9 '18 at 11:11
• @Muschkopp then "Knowhere" is counted as a planet. I think the point is that if you wipe out half the universe at random then you're going to end up with overpopulated planets (and/or planet-like space-stations) still. Thanos pretty much explicitly explains that he's killing half of everybody specifically to counter overpopulation. It follows that the 50% is on a planet(oid) by planet(oid) basis. It's randomly ineffective otherwise. – aroth May 9 '18 at 11:37
• @ThomasYates Some overcrowded planets - possibly, but a lot - what is a lot? For a planet with a population of a couple billion you have a 99.7% chance of not missing the target by more than about a hundred thousand people (and really a lot of nines after the 99. for even a million). – Michał Politowski May 9 '18 at 12:20

Thanos tells Tony that when he has accomplished his goal, half of humanity will remain. This is important, because we know that he intends to choose his victims randomly. If Thanos intended to simply draw his victims from one big pool containing the entire universe, then he could not be certain ahead of time that half of humanity would remain. By random chance, he could very well draw more of Earth's population than that, or less. It's even possible, if unlikely, that he could wipe out Earth entirely, or leave it completely unscathed.

Let us suppose that Thanos got a big bag of marbles - one for each person in the universe, half of them red, half of them blue - shook it up, and forced everybody to draw. Everyone who draws a red marble disintegrates. This is guaranteed to kill exactly half the people in the universe, while still being random. However, with trillions if not quadrillions of marbles in the bag, there are going to be runs of the same color that are thousands, millions, maybe even billions of draws long. It's entirely possible that everyone on Earth might draw red marbles (i.e. Earth is wiped out) or blue ones (i.e. Earth is unscathed). Earth probably wouldn't even be the only world this happened to. The universe is a big place.

But that's not how Thanos acts. He knows that half of humanity will survive. The only way he can be sure of that, while still holding to his idea of "fairness through randomness", is if he has another method for choosing his victims. He would need to divide them up according to their worlds, and choose half the people on each: in other words, a bag of marbles for each planet, rather than one for the whole universe. This is similar to the method he used before he had the Stones, though faster and more efficient. More to the point, it's more closely in line with his idea that the resources on each world need to be distributed to half as many people: he needs to be sure that there are actually half as many people on each world, and this requires a finer-grained approach than throwing everyone into one large pool.
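For scale, the full wipe-out scenario entertained above can be bounded directly: under the universe-wide model each person survives independently with probability 1/2, so a quick Python estimate (Earth's population as the only input):

    import math

    n = 7.5e9   # Earth's population
    # P(everyone on Earth dies) = 0.5**n; compute its base-10 logarithm
    log10_p = n * math.log10(0.5)
    print(f"P(Earth wiped out) = 10^{log10_p:.3g}")   # ~10^(-2.26e9)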
• Citation needed for the claim that the Earth could be completely wiped out or left unscathed. – Kat May 9 '18 at 23:09
• (Editing to incorporate the "bag of marbles" explanation into the main text) – The Spooniest May 10 '18 at 19:18
• @Kat - "That'd be 86,602 sigma (!!!)" – Sam (Translated: 'No, but you can try.') – Mazura May 10 '18 at 20:33
• I guess I should have been more clear: you need to check your math, because the chance of any planet in the universe with billions of inhabitants being left untouched or completely wiped out is pretty much zero. See Sam's answer. – Kat May 11 '18 at 2:46
• The chances of being completely unscathed or wiped out are quite small, it's true, but it doesn't have to be complete either. The odds of the kill count for a given world being nonzero, but still rather far off from 50%, aren't that low. – The Spooniest May 14 '18 at 7:41

Thanos intends to kill half the population from the whole universe, randomly.

My impression while seeing the movie was that he meant planet by planet, although this was likely caused by seeing Thanos visit each planet and kill half the population manually before he had the gauntlet. However, if we look at the scene where Thanos snaps his fingers, we can see on Titan that this was not the case.

At the time of the snap, Titan had 7 people: Iron Man, Spider-Man, Dr. Strange, Star Lord, Drax, Mantis, and Nebula. The snap killed 5 people. If his snap killed half of each planet, we would expect to see only 3-4 people dead. My conclusion is that this is done on a universal level, rather than a planetary one.

• It could be done on the basis of home planet. Like the way King Herod required everyone to return to their birthplace for the census. – Gaius May 9 '18 at 20:57
• @Gaius doubtfully, home planet basis or the planet of origin is an extremely subjective approach... Take the "rabbit" (Rocket) as an example: would you consider where he was assembled his home/origin, or where his parts came from? What about Starlord? He was born on Earth, spent most of his life on other planets, and he has part of a living planet in himself (Ego)... – CPHPython May 10 '18 at 11:19
• @Gaius that seems speculative at best. What of those without a home planet? Or those who had their home planet destroyed? It's a possibility for sure, but seems to be a reach. – Mat May 10 '18 at 15:57

I think both of your statements are false. Why? Let me 'splain...

We know that the 'snap' wasn't localized to Earth because characters on Titan also flaked away, so he's not going planet to planet. We also know that, on Titan, about half of the human characters flaked away (Starlord, Dr. Strange and Spider-Man)... There were an odd number of humans, so you couldn't get EXACTLY half. That would just be gross.

Therefore, this is a bit of a leap using available info from only the first IW, but I believe the snap affected the universe as a whole but NOT at random. I believe that half of all species on every planet were hit at the same time. Where there was only 1 of a specific species (Thor, Rocket & Groot on Earth, or Drax & Mantis on Titan), they could stay or could go based on whatever intent was behind the snap. Where there was an odd number, the "odd man out" could stay or go based on the same intent.

I think the previous answers underestimate the power of the Infinity Stones and the ability to control intent across the universe when wielding them all. Thanos WAS doing it on a planet by planet basis, manually and sequentially, since that was what was within his capabilities.
The whole point of gathering the Infinity Stones and wielding them with the Infinity Gauntlet was that he could take care of the whole universe, instantly ("snap of his fingers"). Since your question seems more focused on whether "randomly" could mean an entire population on a particular planet potentially being untouched while an entire other planet gets wiped clean, using the entire universe as a random pool would kind of defeat his (to himself, at least) altruistic intentions. The idea is that the universe's resources are finite, but so are each planet's. In order to save populations from their own excesses, the "pruning" would have to happen both locally and universe-wide to achieve his goals.

What's interesting is that on Titan people were taken out, but that appears to be within the pool of "Earth" population, in practice. It appears that he's going by each sentient species and the planets they inhabit, not just visit. Though I'm not sure that this is canon, because the movie makers might not have considered this at the process-design level.

Thanos intends to kill half the population from every planet, randomly.

Because the motive of the killing was that the living beings would have more available resources and there would be less sharing. If he kills from every planet, then the other half of the population will always have the resources. But if he kills randomly from the universe, then some planets will still have the same number of beings and some might be wiped clean, with all the resources and nobody to use them.

• "some planets will still have same number of beings" -- as we see in other answers, this claim is yet to be substantiated with math and/or lore. Please elaborate. – Sam May 10 '18 at 14:44
• You totally missed the important point! The whole idea of killing half the population was to distribute the resources so that everyone had a fair share. So when half the population is killed randomly, there is just one case - a 1/n probability (assuming 'n' is the number of different ways the killing could be distributed) - in which every planet has exactly half its population eliminated, the only case in which this motive is achieved, which makes no sense. Here, my friend, we don't need calculation but common sense to understand it. – Parul May 14 '18 at 11:10
2020-04-04 10:01:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5301573276519775, "perplexity": 1163.6877031300348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370521574.59/warc/CC-MAIN-20200404073139-20200404103139-00147.warc.gz"}
https://www.physicsforums.com/threads/accepted-value-for-omega-of-radiation-energy-density.17495/
# Accepted value for Omega of radiation energy density

1. Mar 31, 2004

### hellfire

What is the currently accepted value for Omega of radiation energy density (photons and neutrinos)? References would be helpful. Thanks.

2. Apr 1, 2004

### Nereid

Staff Emeritus

units, ugh!

This Hyperphysics page has an overview of the concepts and relationships.

http://instruct1.cit.cornell.edu/courses/astro101/lec31.htm [Broken] gives some information in bite-sized bullets.

Spergel et al present the WMAP team's cosmological parameter analysis and results; this has perhaps the most up-to-date, authoritative estimate of the neutrino omega, as well as the mass omega. From the latter it should be easy to calculate an estimate of the photon omega.

Last edited by a moderator: May 1, 2017
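For reference, the photon contribution can be computed directly from the CMB temperature using standard textbook constants and formulas (not from the thread); a minimal Python sketch:

    import math

    # Physical constants (SI)
    c = 2.998e8          # speed of light, m/s
    a = 7.5657e-16       # radiation constant, J m^-3 K^-4
    G = 6.674e-11        # gravitational constant
    Mpc = 3.0857e22      # meters per megaparsec

    T_cmb = 2.725        # CMB temperature, K
    H0 = 100e3 / Mpc     # Hubble rate for h = 1, in s^-1

    rho_gamma = a * T_cmb**4 / c**2            # photon mass density
    rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density for h = 1

    print(rho_gamma / rho_crit)   # Omega_gamma * h^2 ~ 2.47e-5
    # Adding three massless neutrino species multiplies this by
    # 1 + 3*(7/8)*(4/11)**(4/3) ~ 1.68, giving Omega_r * h^2 ~ 4.15e-5.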
2018-10-16 20:56:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8302873373031616, "perplexity": 4068.9158034340885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510867.6/warc/CC-MAIN-20181016201314-20181016222814-00064.warc.gz"}
https://cs.nyu.edu/pipermail/fom/1999-April/003063.html
# FOM: Reply to Trybulec on MIZAR project

Andrzej Trybulec trybulec at math.uwb.edu.pl
Mon Apr 26 11:59:36 EDT 1999

On Mon, 12 Apr 1999, Joe Shipman wrote:

> >1. I doubt if it can be done with Mizar as it is now. (***)
> > The further development of the language is necessary. However,
> > most of the problems related seems to be rather linguistic ones
> rather
> > than logical.
>
> Can you be more specific?

I mean that for practical formalization we need a richer language. Let us look at what we already have in Mizar, just two examples:

1. Types

The Mizar Mathematical Library is based on set theory, TG (the Grothendieck system), and the root type is "set". However, Mizar allows for introducing new type constructors with which we may construct new types. As examples of type constructors:

    Subset of ...
    Function of ..., ...

So, the type of functions from A to B is written as

    Function of A,B

Perhaps the most important advantages of types are:

i. Hidden arguments

(I believe they are also in LEGO; maybe Robert Pollack will give more exact information.) The composition of two morphisms in a category looks like

    let C be Category;
    let a,b,c be Object of C;
    let f be Morphism of a,b, g be Morphism of b,c;
    func g * f -> Morphism of a,c
    ...... (the definiens and the proof of the correctness is given here)

and the associativity of * may then be written as

    (h * g) * f = h * (g * f)

Actually the introduced constructor has 6 arguments, two of them explicit (we call them "visible") and 4 of them (C,a,b,c) "hidden". This is possible only because, knowing g and f, and supposing that they have the proper types, we may reconstruct the remaining arguments of *. (Actually in Mizar we use extended ASCII, so it is not a star but d249, i.e. \cdot.)

Without types we should write e.g.

    Comp(C,a,b,c,f,g)

and then the associativity looks like

    Comp(C,a,b,d,f,Comp(C,b,d,c,g,h)) = Comp(C,a,c,d,Comp(C,a,b,c,f,g),h)

(h is a morphism from c to d). And C (or some of a,b,c) may be complicated expressions.

ii. Because the types may be used to identify constructors, we may use the same symbol for different functors. E.g. the notation

    a + b

is used now in Mizar for:
- the sum of real numbers
- the sum of complex numbers
- the sum of real sequences
- the sum in vector spaces
- the sum of two subspaces of a vector space
- the coproduct in a category
and so on. All together more than 50 functors.

iii. Reservations

Mizar allows for so-called "reservations"; they correspond to clauses like "In the sequel r,s,t range over real numbers". The actual syntax of it is:

    reserve r,s,t for Real;

Then if in a sentence one of r, s, or t is free, it will be bound by default at the beginning of the sentence. (In Mizar we have no real free variables.) Because the reserved identifiers may occur in the type for which they are reserved, we may shorten (and make more readable) the text. After the reservations:

    reserve C for Category,
      a,b,c,d for Object of C,
      f for Morphism of a,b,
      g for Morphism of b,c,
      h for Morphism of c,d;

we may write just

    (h * g) * f = h * (g * f)

instead of

    for C being Category, a,b,c,d being Object of C,
      f being Morphism of a,b, g being Morphism of b,c,
      h being Morphism of c,d
    holds (h * g) * f = h * (g * f)

which is rather cumbersome. The Mizar language is rather verbose, but even if a universal formula is written as (x!)(...) (like in HOL) instead of "for x holds ...", the repetition of quantifiers is rather tedious.

I guess it is enough about types in Mizar to feel how they are used.
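As a rough cross-language analogy (ours, not the Mizar project's): hidden arguments behave like data carried along with the visible arguments, from which a checker reconstructs the rest. A minimal Python sketch:

    from dataclasses import dataclass

    @dataclass
    class Morphism:
        src: str   # the "hidden" objects travel with the morphism,
        dst: str   # so composition never needs them spelled out
        name: str

    def compose(g: Morphism, f: Morphism) -> Morphism:
        """g * f, defined only when f's codomain matches g's domain."""
        assert f.dst == g.src, "morphisms not composable"
        return Morphism(f.src, g.dst, f"({g.name} o {f.name})")

    f = Morphism("a", "b", "f")
    g = Morphism("b", "c", "g")
    print(compose(g, f))   # Morphism(src='a', dst='c', name='(g o f)')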
I am slightly defensive about types, because the use of types in formal languages was under strong attack. Leslie Lamport, with his papers:

    "Types are harmful"
    "Types are not harmless"

I do not remember the title of the next one; it was something like "It is a pain in ..., to write about types". Also John McCarthy and Bob Boyer are (or were) definitely against types.

2. Attributes

Mizar allows for defining (Boolean valued) attributes, with which we can create two adjectives. Let me cite a real piece of Mizar (the article FINSET_1 by Agata Darmochwal, 1989):

    definition
      let IT be set;
      attr IT is finite means
      :: FINSET_1:def 1
      ex p being Function st rng p = IT & dom p in omega;
      antonym IT is infinite;
    end;

After introducing the attribute we may use two adjectives:

    finite
    non finite (or rather "infinite")

Adjectives are used mostly in clusters at the beginning of a type. Actually, most types are just shorthand for another type preceded by a cluster of attributes, e.g.

    mode AbGroup is add-associative right_zeroed right_complemented Abelian (non empty GroupStr);

I think that I wrote enough about Mizar; let me finish with the remark that attributes were introduced as syntactic sugar. It seems now that the proper processing of attributes is one of the most important problems in the project.

Some other mechanisms are planned (not designed):

1. Modifiers

The construction "Function of A,B" is too rigid. I would rather like

    function from A into B

It is not only syntactic sugar. We have another type of functions, called

    ManySortedSet of I

(a family parametrized by I), but Mizar does not know (automatically) that a "Function of A,B" is a "ManySortedSet of A". We want in the future to write "ManySortedSet of A" as "function from A" and to widen automatically "function from A into B" to "function from A", if necessary.

2. Operators

Mizar allows for only one operator (besides quantifiers), corresponding to replacement. Some of my colleagues argue that we should have a lambda operator. But it would also be nice to have traditional notation for integrals or infinite sums.

3. Multiple dots

What we usually (in mathematics) write as

    a_1 + ... + a_k

one has now (in Mizar) to express by defining a finite sequence f = (a_1,...,a_k) and then using a unary operator for the sum (of elements) of finite sequences. I put it upside down: we have the operator of the sum of elements of a finite sequence, and to use it we should write

    consider f being FinSequence such that
    len f = k and
    for i st i in dom f holds f.i = a_i

and then use f as the argument. We do not have the notation (a_1,...,a_k), either.

But of course, the most important may be the mechanisms that we do not anticipate.

Andrzej Trybulec
2021-08-03 17:52:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857004642486572, "perplexity": 3790.440502912502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00651.warc.gz"}
https://www.doubtnut.com/question-answer-physics/an-electroagnetic-wave-is-propagating-along-x-axis-at-x-1-m-and-t-10-s-its-electric-vector-vece6v-m--642751542
# An electromagnetic wave is propagating along the x-axis. At x = 1 m and t = 10 s, its electric vector has magnitude |E| = 6 V/m. What is the magnitude of its magnetic vector?

Text Solution

2 × 10^(-8) T
3 × 10^(-7) T
6 × 10^(-8) T
5 × 10^(-7) T
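For a plane electromagnetic wave in vacuum the field magnitudes are related by |B| = |E|/c, so the matching option can be checked in one line of Python:

    E = 6.0        # V/m
    c = 3.0e8      # m/s
    print(E / c)   # 2e-08 T -> the 2 x 10^(-8) T option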
2022-12-02 09:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6640043258666992, "perplexity": 8365.147709743685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00817.warc.gz"}
http://tex.stackexchange.com/questions/17094/code-inside-an-equation-environment-is-executed-4-times-or-isnt-it
# Code inside an equation environment is executed 4 times (or isn't it?)

I created a small example:

    \documentclass[a4paper,12pt]{article}
    \usepackage{amsmath}
    \usepackage[pdfstartview=FitH]{hyperref}
    \usepackage{bookmark}
    \begin{document}
    one line
    \newcounter{test}
    $\text{\stepcounter{test}\bookmark[dest=whatever]{my equation}one equation}$
    counter value: \arabic{test}.
    \end{document}

It creates 4 bookmarks but increases the counter only by 1. How is that (even logically) possible? Either this equation environment somehow executes the code 4 times (maybe first measuring the contents or whatever), in which case the counter should be increased by four; or it only executes once, in which case there should only be one bookmark.

How do I get it to display only one bookmark? (Moving it out of the equation is not an option, since this happens inside a macro which at the same time marks the equation as a target.)

Update: alright, egreg's \tbookmark command fixed my minimal example. Turns out it was too minimal. Here is another one which went back to not working:

    \documentclass[a4paper,12pt]{article}
    \usepackage{amsmath}
    \usepackage[pdfstartview=FitH]{hyperref}
    \usepackage[atend]{bookmark}
    \makeatletter\newcommand{\tbookmark}[2][]{\iffirstchoice@\bookmark[#1]{#2}\fi}\makeatother
    \newcounter{nops}
    \newcommand{\nop}{\stepcounter{nops}\raisebox{\ht\strutbox}{\hypertarget{nop\arabic{nops}}{}}\begingroup\edef\x{\endgroup\noexpand\BookmarkAtEnd{\noexpand\tbookmark[dest=nop\arabic{nops}]{page \arabic{nops}}}}\x}
    \begin{document}
    one line
    $\text{\nop one equation}$
    counter value: \arabic{nops}.
    \end{document}

• I get only one bookmark. (I use TeX Live 2010 with latest packages.) – Leo Liu May 1 '11 at 15:14
• I use MiKTeX 2.9. I just started an update and will report back later. What additional information might be helpful to debug this? – peter May 1 '11 at 15:18
• The TeX primitive \mathchoice has four arguments corresponding to display, text, script and script-script style. When TeX encounters it, it constructs all four and then inserts the appropriate one. So it could be a case of this primitive being used somewhere. – jfbu May 1 '11 at 17:37
• Updated MiKTeX; output unchanged. – peter May 1 '11 at 20:56

\text uses internally \mathchoice (so jfbu was right), but amstext.sty redefines \stepcounter and \addtocounter in such a way that they act only once. A cheap solution would be to define a special \bookmark command:

    \makeatletter
    \newcommand{\tbookmark}[2][]{%
      \iffirstchoice@
        \bookmark[#1]{#2}%
      \fi}
    \makeatother

that will be free of the problem. Probably something that Heiko should take care of.
One might try also with letltxmacro (the redefinition must go after \usepackage{bookmark}):

    \usepackage{letltxmacro}
    \makeatletter
    \LetLtxMacro{\orig@bookmark}{\bookmark}
    \renewcommand{\bookmark}[2][]{%
      \iffirstchoice@\orig@bookmark[#1]{#2}\fi}
    \makeatother

--- Added after Peter's edits ---

Any command to be used inside \text which sets bookmarks or hypertargets should be defined in terms of \iffirstchoice@:

    \newcommand{\nop}{%
      \stepcounter{nops}%
      \iffirstchoice@
        \raisebox{\ht\strutbox}{\hypertarget{nop\arabic{nops}}{}}
      \fi}

The AMS packages define \text using the conditional \iffirstchoice@; essentially, \text{...} is defined to become

    \mathchoice{\firstchoice@true\textrm{...}}
      {\firstchoice@false\textrm{...}}
      {\firstchoice@false\textrm{...}}
      {\firstchoice@false\textrm{...}}

One has to remember that \mathchoice typesets all four forms and TeX chooses one depending on the needed math style; with \iffirstchoice@<tokens>\fi we are sure that the <tokens> are found only once.

• How is \LetLtxMacro different from \let? – Aditya May 1 '11 at 20:27
• @Aditya: When a command is defined with an optional argument, using \let to keep its definition is not safe at all. I believe this is a good question to ask. – egreg May 1 '11 at 21:04
• @egreg: Thanks. I looked up the documentation of letltxmacro and it clearly explains why \let fails. – Aditya May 1 '11 at 21:59
• You fixed my minimal example, but my production file still exhibited the faulty behavior, so please look at the new minimal example; I updated the question. – peter May 3 '11 at 6:58
2014-03-09 19:53:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477390050888062, "perplexity": 3825.198875762764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010295336/warc/CC-MAIN-20140305090455-00070-ip-10-183-142-35.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/332483/how-to-solve-this-problem-about-the-surface-integral
# How to solve this problem about the surface integral

The force per unit area acting on a sphere with radius $r$ is $F = r^2\,\vec r + \cos\theta\,\vec\theta$; what is the net force on the sphere?

To solve this problem, I would use a surface integral after converting the force field to the standard Cartesian basis, but doing it that way completely ignores the meaning of the spherical coordinates. My question is how to compute the surface integral directly in spherical coordinates instead of converting to the standard Cartesian basis.

• Please note the converted field force in the body above. Thanks ;-) – Mikasa Mar 17 '13 at 5:37

Well, you can observe that since the integral is additive you can evaluate the $r$ part and the $\theta$ part independently. Now, the $r$ part will give zero because of the spherical symmetry. For a similar symmetry reason, but this time polar, the $\theta$ part will give a vector pointing in the $z$ (i.e. up/down) direction. This should allow you to transform the problem into a $1$-dimensional integral which is easily solved.
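A quick symbolic check of the reduced integral in Python with sympy, using the standard z-components $\hat r\cdot\hat z = \cos\theta$ and $\hat\theta\cdot\hat z = -\sin\theta$ (an assumption about the intended unit vectors):

    import sympy as sp

    r, theta, phi = sp.symbols('r theta phi', positive=True)

    # z-component of F = r^2 r_hat + cos(theta) theta_hat
    Fz = r**2 * sp.cos(theta) + sp.cos(theta) * (-sp.sin(theta))

    # integrate over the sphere, dA = r^2 sin(theta) dtheta dphi
    net_Fz = sp.integrate(Fz * r**2 * sp.sin(theta),
                          (phi, 0, 2*sp.pi), (theta, 0, sp.pi))
    print(sp.simplify(net_Fz))  # 0 -- both contributions cancel by symmetry

The x and y components vanish as well, since their integrands carry a factor of $\cos\phi$ or $\sin\phi$ over a full period.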
2021-07-28 00:42:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9488025903701782, "perplexity": 224.7791678053911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00715.warc.gz"}
https://en.wikipedia.org/wiki/Bipartite_double_cover
# Bipartite double cover

In graph theory, the bipartite double cover of an undirected graph G is a bipartite covering graph of G, with twice as many vertices as G. It can be constructed as the tensor product of graphs, G × K2. It is also called the Kronecker double cover, canonical double cover or simply the bipartite double of G. It should not be confused with a cycle double cover of a graph, a family of cycles that includes each edge twice.

## Construction

The bipartite double cover of G has two vertices ui and wi for each vertex vi of G. Two vertices ui and wj are connected by an edge in the double cover if and only if vi and vj are connected by an edge in G. For instance, below is an illustration of a bipartite double cover of a non-bipartite graph G. In the illustration, each vertex in the tensor product is shown using a color from the first term of the product (G) and a shape from the second term of the product (K2); therefore, the vertices ui in the double cover are shown as circles while the vertices wi are shown as squares.

The bipartite double cover may also be constructed using adjacency matrices (as described below) or as the derived graph of a voltage graph in which each edge of G is labeled by the nonzero element of the two-element group.

## Examples

The bipartite double cover of the Petersen graph is the Desargues graph: K2 × G(5,2) = G(10,3).

The bipartite double cover of a complete graph Kn is a crown graph (a complete bipartite graph Kn,n minus a perfect matching). In particular, the bipartite double cover of the graph of a tetrahedron, K4, is the graph of a cube.

The bipartite double cover of an odd-length cycle graph is a cycle of twice the length, while the bipartite double of any bipartite graph (such as an even-length cycle, shown in the following example) is formed by two disjoint copies of the original graph.

## Matrix interpretation

If an undirected graph G has a matrix A as its adjacency matrix, then the adjacency matrix of the double cover of G is

$\begin{bmatrix} 0 & A \\ A^T & 0 \end{bmatrix},$

and the biadjacency matrix of the double cover of G is just A itself. That is, the conversion from a graph to its double cover can be performed simply by reinterpreting A as a biadjacency matrix instead of as an adjacency matrix. More generally, the reinterpretation of the adjacency matrices of directed graphs as biadjacency matrices provides a combinatorial equivalence between directed graphs and balanced bipartite graphs.[1]

## Properties

The bipartite double cover of any graph G is a bipartite graph; both parts of the bipartite graph have one vertex for each vertex of G. A bipartite double cover is connected if and only if G is connected and non-bipartite.[2]

The bipartite double cover is a special case of a double cover (a 2-fold covering graph). A double cover in graph theory can be viewed as a special case of a topological double cover.

If G is a non-bipartite symmetric graph, the double cover of G is also a symmetric graph; several known cubic symmetric graphs may be obtained in this way. For instance, the double cover of K4 is the graph of a cube; the double cover of the Petersen graph is the Desargues graph; and the double cover of the graph of the dodecahedron is a 40-vertex symmetric cubic graph.[3]

It is possible for two different graphs to have isomorphic bipartite double covers.
For instance, the Desargues graph is not only the bipartite double cover of the Petersen graph, but is also the bipartite double cover of a different graph that is not isomorphic to the Petersen graph.[4] Not every bipartite graph is a bipartite double cover of another graph; for a bipartite graph G to be the bipartite double cover of another graph, it is necessary and sufficient that the automorphisms of G include an involution that maps each vertex to a distinct and non-adjacent vertex.[4] For instance, the graph with two vertices and one edge is bipartite but is not a bipartite double cover, because it has no non-adjacent pairs of vertices to be mapped to each other by such an involution; on the other hand, the graph of the cube is a bipartite double cover, and has an involution that maps each vertex to the diametrically opposite vertex. An alternative characterization of the bipartite graphs that may be formed by the bipartite double cover construction was obtained by Sampathkumar (1975).

## Other double covers

In general, a graph may have multiple double covers that are different from the bipartite double cover.[5] In the following figure, the graph C is a double cover of the graph H:

1. The graph C is a covering graph of H: there is a surjective local isomorphism f from C to H, the one indicated by the colours. For example, f maps both blue nodes in C to the blue node in H. Furthermore, let X be the neighbourhood of a blue node in C and let Y be the neighbourhood of the blue node in H; then the restriction of f to X is a bijection from X to Y. In particular, the degree of each blue node is the same. The same applies to each colour.

2. The graph C is a double cover (or 2-fold cover or 2-lift) of H: the preimage of each node in H has size 2. For example, there are exactly 2 nodes in C that are mapped to the blue node in H.

However, C is not a bipartite double cover of H or any other graph; it is not a bipartite graph. If we replace one triangle by a square in H, the resulting graph has four distinct double covers. Two of them are bipartite, but only one of them is the Kronecker cover.

As another example, the graph of the icosahedron is a double cover of the complete graph K6; to obtain a covering map from the icosahedron to K6, map each pair of opposite vertices of the icosahedron to a single vertex of K6. However, the icosahedron is not bipartite, so it is not the bipartite double cover of K6. Instead, it can be obtained as the orientable double cover of an embedding of K6 on the projective plane.
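The Petersen/Desargues example above is easy to verify computationally; a minimal Python sketch using networkx:

    import networkx as nx

    G = nx.petersen_graph()
    # Bipartite double cover = tensor product with K2
    double = nx.tensor_product(G, nx.complete_graph(2))

    print(nx.is_bipartite(double))                         # True
    print(nx.is_isomorphic(double, nx.desargues_graph()))  # True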
2017-03-24 10:59:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385293483734131, "perplexity": 224.63080659300255}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187792.74/warc/CC-MAIN-20170322212947-00588-ip-10-233-31-227.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/8198/importing-array-with-complex-numbers
# Importing array with complex numbers

This is probably an easy question: I am exporting the results from a longer calculation into a dat-file, then I set the cell to non-evaluatable and put an Import in front, so I can later work with the results, even after closing and reopening the notebook:

results = Import["results.dat"] (non-evaluatable)
results = makeLongCalculation[parameters] (non-evaluatable)
Export["results.dat", results]
doSomethingwithResults[results]

(Is there a more elegant way?)

Now, I have the problem that complex numbers are imported as strings. My results are typically of the form {{0.01, 3*10^-6 + 5*10^-4*I}, ...}. Is there any more straightforward way than to apply something like ToExpression to the Import? I feel that native Mathematica should be able to handle this.

• I think you accidentally the sente... ;) Also, it sounds like you need NumberForm – rm -rf Jul 11 '12 at 8:59

Or in concise notation:

results >> "results.dat";
results = << "results.dat";

You can automate the process slightly, by reading the data in if the file exists and results isn't defined, or writing it out otherwise.

If[Head@results === Symbol && FileExistsQ["complex.dat"], results = << "complex.dat", results >> "complex.dat"]

• As I would like to maintain control (the calculation may take 8 h on 8 cores), I will not use the automation here, but nevertheless a nice idea. Thanks. (Also to Simon.) – mcandril Jul 11 '12 at 10:06
• @mcandril Thanks, it may be that someone here can reduce that computation time for you. – image_doctor Jul 11 '12 at 14:45
• It is basically just adding up A LOT of complex numbers, for the unapproximated Fresnel-Huygens principle. I am not sure how much optimisation potential there is and how to pose the question so it's relevant. – mcandril Jul 12 '12 at 15:54

You could use Put and Get instead of Export and Import.

Put[results, "results.dat"]
results = Get["results.dat"];
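For completeness, here is a hypothetical sketch of the ToExpression workaround the question mentions (the level specification {2} and the file name are my assumptions, not from the thread):

```mathematica
(* Map ToExpression over every entry of the imported table so that
   strings such as "3.*10^-6 + 5.*10^-4*I" become complex numbers again. *)
results = Map[ToExpression, Import["results.dat"], {2}];
```

The Put/Get route in the accepted answers avoids this entirely, since Get reads back full Mathematica expressions rather than a plain-text table.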
2021-03-06 08:06:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5371549129486084, "perplexity": 2148.978238501427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00276.warc.gz"}
https://conference.math.muni.cz/veron70/index.php?id=program
# Program

## Monday June 01, 2020

Morning session (Zoom Link, ID: 854 8880 7575, Password: 024590)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 09:30 - 09:35 | | Opening |
| 09:35 - 10:25 | Juan Luis Vázquez | $s-p$-Laplacian norms and operators. Recent trends (abstract, video) |
| 10:35 - 11:25 | Jesús Ildefonso Díaz | Beyond the unique continuation: "flat solutions" for reactive slow diffusion and the confinement singular potentials for the Schrödinger equation (abstract, video) |

Break

Afternoon session (Zoom Link, ID: 847 0933 5406, Password: 063070)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 14:00 - 14:50 | Giuseppe Mingione | Non-uniformly elliptic problems (abstract, slides, video) |
| 15:00 - 15:50 | Lucio Boccardo | Marie-Françoise Daza, Laurent Ariza: El amor en los tiempos del coronavirus (Two maximum principles for two friends) (abstract, slides, video) |
| 16:00 - 16:50 | Marta García-Huidobro | Some results concerning the nonnegative solutions of nonlinear elliptic equations involving a gradient term (abstract, slides, video) |
| 17:00 - 17:50 | Nguyen Cong Phuc | Weighted and pointwise bounds in measure datum problems with applications (abstract, slides, video) |

## Tuesday June 02, 2020

Morning session (Zoom Link, ID: 816 1093 4124, Password: 072633)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 09:30 - 10:20 | Moshe Marcus | Large solutions for some nonlinear equations with a Hardy type singular term (abstract, slides, video) |
| 10:30 - 11:20 | Patrizia Pucci | On $(p,N)$ problems with critical exponential nonlinearities (abstract, slides, video) |

Break

Afternoon session (Zoom Link, ID: 828 7734 4791, Password: 004465)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 14:00 - 14:50 | Feng Zhou | On isolated singular solutions to Lane-Emden equation (abstract, slides) |
| 15:00 - 15:50 | Alessio Porretta | Diffusive Hamilton-Jacobi equations with super-quadratic growth (abstract, slides, video) |
| 16:00 - 16:50 | Van Tien Nguyen | Singularity formation in Nonlinear Evolution Equations (abstract, slides, video) |
| 17:00 - 17:50 | Igor Verbitsky | Some classes of solutions to quasilinear elliptic equations of p-Laplace type (abstract, slides, video) |

## Wednesday June 03, 2020

Morning session (Zoom Link, ID: 817 0146 0367, Password: 012888)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 09:30 - 10:20 | Chen Huyuan | Semilinear elliptic problems involving Leray-Hardy and measure data (abstract, slides, video) |
| 10:30 - 11:20 | Philippe Souplet | Some recent Liouville type results and their applications (abstract, slides, video) |

Break

Afternoon session (Zoom Link, ID: 826 5754 0868, Password: 009600)

| Time (GMT +2) | Speaker | Title and abstract |
| --- | --- | --- |
| 14:00 - 14:50 | Manuel Del Pino | Infinite time singularity formation for the Keller-Segel system in $\mathbb{R}^2$ (abstract, slides, video) |
| 15:00 - 15:50 | Julián López Gómez | Uniqueness and multiplicity of large positive solutions (abstract, slides, video) |
| 16:00 - 16:50 | Vitaly Moroz | Asymptotic profiles of groundstates for a class of Choquard equations (abstract, slides, video) |
2020-08-12 17:58:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3119784891605377, "perplexity": 12486.27910845866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00274.warc.gz"}
https://www.greencarcongress.com/2017/09/20170924-doe.html
## DOE announces $36M in funding for carbon capture technologies

##### 24 September 2017

The US Department of Energy (DOE) announced approximately $36 million in federally-funded financial assistance to advance carbon capture technologies. Under the Department of Energy's (DOE's) Office of Fossil Energy (FE), the Design and Testing of Advanced Carbon Capture Technologies funding opportunity announcement (DE-FOA-0001791) will support cost-shared research and development projects that will continue the development of carbon capture technologies to either the engineering scale or to a commercial design.

Selected projects for this FOA will fall under two areas of interest:

• Scaling of Carbon Capture Technologies to Engineering Scales Using Existing Host Site Infrastructure. Up to four awards, with combined DOE funding up to $30 million.
• Initial Engineering, Testing, and Design for a Commercial-Scale, Post-Combustion CO2 Capture System. Up to two awards, with combined DOE funding up to $6 million.

These projects will undertake engineering-scale testing of transformational solvent- or membrane-based CO2 capture technologies, and will conduct design work for a commercial-scale, post-combustion CO2-capture system at an existing coal-fueled generating unit.
2023-02-08 04:45:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4295788109302521, "perplexity": 13715.917755399025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00758.warc.gz"}
http://modeltheory.wikia.com/wiki/Omitting_types
Fix some countable theory $T$. Let $S_n$ be the space of complete $n$-types over the empty set. A type $p(x) \in S_n$ is said to be isolated if there is a formula $\phi(x)$ (over the empty set) such that every element satisfying $\phi(x)$ realizes $p(x)$, and vice versa. Equivalently, $p(x)$ is an isolated point in the Stone topology on $S_n$.

The countable omitting types theorem says that if $Z$ is a countable set of non-isolated types (perhaps with varying $n$), then there is a countable model of $T$ in which no type in $Z$ is realized. One proof proceeds by model-theoretic forcing (as described in Hodges' book Building Models by Games). If $Z$ is finite, more elementary proofs exist. For example, one can take a model of $T$ and build up a subset in which no type in $Z$ is realized, using the Tarski-Vaught criterion as a guide to determine what to add to this set, so that the set eventually becomes a model.

The omitting types theorem provides one direction in the Ryll-Nardzewski theorem on countably categorical theories. Specifically, if $T$ is a countable theory in which there is a non-isolated $n$-type $p$, then by the omitting types theorem, there is some countable model of $T$ in which $p$ is not realized. On the other hand, by compactness and Löwenheim-Skolem, there is a countable model in which $p$ is realized. The two countable models just described cannot be isomorphic, so $T$ is not countably categorical.
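A standard concrete example (my illustration, not from the original page): work in the language with constants $c_0, c_1, \dots$ and let $T$ say the constants are pairwise distinct. The type of an element different from every constant is non-isolated, since any formula consistent with $T$ mentions only finitely many constants and can always be realized by an unmentioned one; omitting it yields a countable model in which every element is named by a constant.

```latex
% The non-isolated 1-type in question:
p(x) \;=\; \{\, x \neq c_i \;:\; i \in \mathbb{N} \,\}
% A formula \varphi(x) consistent with T isolates p only if
% T \vdash \forall x\,(\varphi(x) \to x \neq c_i) for every i,
% which fails for any c_m not mentioned in \varphi.
```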
2018-11-14 09:43:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9363456964492798, "perplexity": 134.69367358494918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741764.35/warc/CC-MAIN-20181114082713-20181114104713-00340.warc.gz"}
http://mathhelpforum.com/differential-geometry/104265-convergent-sequence-proof.html
# Math Help - Convergent sequence proof

1. ## Convergent sequence proof

Prove that if $a_n$ converges to $A$, then $|a_n|$ converges to $|A|$. Then prove or disprove the converse of this.

Well... if $a_n$ converges to $A$, then $|a_n - A| < \epsilon$ for $n \ge N$. So do we just approach this first for $a_n$ positive and then negative to show it converges to $|A|$? I don't know how to show this formally... I would say the converse is likely false since it could be negative.

2. This is a well known inequality. $\left| {\left| x \right| - \left| y \right|} \right| \leqslant \left| {x - y} \right|$ From that the proof is trivial.
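To make the final step explicit (my addition, using the reverse triangle inequality quoted in post 2), together with the standard counterexample for the converse:

```latex
% For n \ge N:
\bigl|\,|a_n| - |A|\,\bigr| \;\le\; |a_n - A| \;<\; \epsilon,
\quad\text{hence } |a_n| \to |A|.
% The converse fails: a_n = (-1)^n gives |a_n| \to 1,
% but (a_n) itself does not converge.
```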
2016-06-24 23:02:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9789920449256897, "perplexity": 807.6141722905168}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391519.2/warc/CC-MAIN-20160624154951-00110-ip-10-164-35-72.ec2.internal.warc.gz"}
https://intelligencemission.com/free-electricity-generator-free-energy-from-magnets.html
I might be scrapping my motor and going back to the drawing board. Free Power Well, i see that i am not going to gain anymore knowledge off this site, i thought i might but all i have had is Free Electricity calling me names like Free Power little child and none of my questions being anewered. Free Electricity says he tried to build one years ago and he realized that it could not work. Ok tell me why. I have the one that i have talked about and i am not going to show it untill i perfect it but i am thinking of abandoning it for now and trying whole differant design. Can the expert Free Electricity answer shis? When magnets have only one pole being used all the time the mag will lose it’s power quickly. What will happen if you use both poles in the repel state? Free Electricity that ballance the mag out or drain it twice as fast? How long will Free Power mag last running in the repel state all the time? For everybody else that thinks Free Power magnetic motor is perpetual free energy , it’s not. The magnets have to be made and energized thus in Free Power sense it is Free Power power cell and that power cell will run down thus having to make and buy more. Not free energy. This is still fun to play with though. Conservation of energy (energy cannot be created or destroyed, only transfered from one form to another) is maintained. Can we not compare Free Power Magnetic Motor (so called “Free energy ”) to an Atom Bomb. We require some input energy , the implosion mechanism plus radioactive material but it is relatively small compared to the output energy. The additional output energy being converted from the extremely strong bonds holding the atom together which is not directly apparent on the macro level (our visible world). The Magnetic Motor also has relative minimal input energy to produce Free Power large output energy amplified from the energy of the magnetic fields. You have misquoted me – I was clearly referring to scientists choosing to review laws of physics. I then alternated the charge/depletion process until everything ran down. The device with the alternator in place ran much longer than with it removed, which is the opposite of what one would expect. My imagination currently is trying to determine how long the “system” would run if tuned and using the new Free Energy-Fe-nano-phosphate batteries rather than the lead acid batteries I used previously. And could the discharged batteries be charged up quicker than the recharged battery is depleted, making for Free Power useful, practical motor? Free Energy are claiming to have invented perpetual motion MACHINES. That is my gripe. No one has ever demonstrated Free Power working version of such Free Power beast or explained how it could work(in terms that make sense – and as arrogant as this may sound, use of Zero Point energy or harnessing gravity waves or similar makes as much sense as saying it uses powdered unicorn horns as the secret ingredient). Reality is never going to be accepted by tat section of the community. Thanks for writing all about the phase conjugation stuff. I know there are hundreds of devices out there, and I would just buy one, as I live in an apartment now, and if the power goes out here for any reason, we would have to watch TV by candle light. lol. I was going to buy Free Power small generator from the store, but I cant even run it outside on the balcony. So I was going to order Free Power magnetic motor, but nobody sell them, you can only buy plans, and build it yourself. 
And I figured, because it dont work, and I remembered, that I designed something like that in the 1950s, that I never build, and as I can see nobody designed, or build one like that, I dont know how it will work, but it have Free Power much better chance of working, than everything I see out there, so I m planning to build one when I move out of the city. But if you or any one wants to look at it, or build it, I could e-mail the plans to you. I made one years ago and realised then why they would never work. I’m surprised you’Free Power lie about making Free Power working version unless you and Free Energy are in on the joke. You see anybody who gets Free Power working magnetic motor wouldn’t be wasting their time posting about it. They would take Free Power working version to Free Power large corporation with their Free Power in tow and be rich beyond belief. I just don’t get why you would bother to lie about it. You want to be Free Power hero to the free energy “believers” I imagine. You and Free Energy are truly sad cases. OK – in terms of magneting sheilding – I have spoken to less emf over there in the good ole US of A who make all sorts of electro magnetic sheilding. They also make sheilding for normal magnets. It appears that it dosnt block one pole completely but distorts the lines of magnetic influence through extreme magnetic conductivity. Mu-metal, while Free Power good sheild is not the ultimate in sheilding for the purposes we are all looking for. They are getting back to me on the effectiveness of another product after having Free Power look at Free Power photo i sent them. Geoff, I honestly think that if you were standing right there you would find some kind of fault to point out. But I do think you are doing Free Power good service by pointing them out. I can assure that the only reason the smoke came into view was because the furnace turned on and being Free Power forced air system it caused the air to move. Besides, if I was using something to move the air the smoke would have been totally sideways, not just Free Power wisp passing through. Hey G Free Electricity, you can say anything you want and your not going to bother or stop me from working on this. My question is this, Why are you on this and just cutting every body down? Are you making one your self and don’t want anybody to beat you? Go for it! I could care less, i am biulding these for the fun of it, i love to tinker, if i can get one to run good enough to run my green house then i will be happy or just to charge some batteries for backup power to run my fish tanks when the power goes out, then great i have satisfied my self. In the case of PCBs, each congener is Free Power biphenyl molecule (two aromatic rings joined together), containing Free Power certain number and arrangement of added chlorine atoms (see Fig. Free Electricity. Free Electricity). Historically, there were many commercially marketed products (e. g. , Aroclor) containing varying mixtures of PCB congeners.) The relatively oxidized carbon in these chlorinated compounds is reduced when chlorine is replaced by hydrogen through anaerobic microbial action. 
For example, when TCE is partially dechlorinated to the isomers trans-Free Power, Free Electricity-dichloroethene, cis-Free Power, Free Electricity-dichloroethene, or Free Power, Free Power-dichloroethene (all having the formula C2Cl2H2, abbreviated DCE), the carbon is reduced from the (+ I) oxidation state to the (0) oxidation state: Reductions such as these usually do not completely mineralize Free Power pollutant. Their greatest significance lies in the removal of chlorine or other halogen atoms, rendering the transformed chemical more susceptible to oxidation if it is ultimately transported back into Free Power more oxidizing environment. ###### Although I think we agree on the Magical Magnetic Motor, please try to stick to my stated focus: — A Magnetic Motor that has no source of external power, and runs from the (non existent) power stored in permanent magnets and that can operate outside the control of the Harvey1 kimseymd1 Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! @Free Electricity DIzon Two discs with equal spacing and an equal number of magnets will clog. Free Electricity place two magnets on your discs and try it. Obviously you haven’t. That’s simple understanding. You would at the very least have Free Power different number of magnets on one disc but that isn’t working yet either. Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know. 
To understand why this is the case, it’s useful to bring up the concept of chemical equilibrium. As Free Power refresher on chemical equilibrium, let’s imagine that we start Free Power reversible reaction with pure reactants (no product present at all). At first, the forward reaction will proceed rapidly, as there are lots of reactants that can be converted into products. The reverse reaction, in contrast, will not take place at all, as there are no products to turn back into reactants. As product accumulates, however, the reverse reaction will begin to happen more and more often. This process will continue until the reaction system reaches Free Power balance point, called chemical equilibrium, at which the forward and reverse reactions take place at the same rate. At this point, both reactions continue to occur, but the overall concentrations of products and reactants no longer change. Each reaction has its own unique, characteristic ratio of products to reactants at equilibrium. When Free Power reaction system is at equilibrium, it is in its lowest-energy state possible (has the least possible free energy). Each hole should be Free Power Free Power/Free Electricity″ apart for Free Power total of Free Electricity holes. Next will be setting the magnets in the holes. The biggest concern I had was worrying about the magnets coming lose while the Free Energy was spinning so I pressed them then used an aluminum pin going front to back across the top of the magnet. Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is Free Electricity not still walking about freely, touring with her husband, flying out to India for Free Power lavish wedding celebration, creating Free Power buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her? By the way, do you know what an OHM is? It’s an Englishman’s.. OUSE. @Free energy Lassek There are tons of patents being made from the information on the internet but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only thing not shared are part numbers. except for the electronic parts everything is home made. RPS differ with different parts. Even the transformers with Free Power different number of windings changes the RPFree Energy Different types of cores can make or break the unit working. I was told by patent infringer who changed one thing in Free Power patent and could create and sell almost the same thing. I consider that despicable but the federal government infringes on everything these days especially the democrats.
2021-03-07 11:03:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3720117509365082, "perplexity": 1385.7562262518015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376467.86/warc/CC-MAIN-20210307105633-20210307135633-00375.warc.gz"}
https://brilliant.org/problems/symmetric-polynomial/
# Symmetric Polynomial

Algebra Level 5

$\large z^8+4z^6-10z^4+4z^2+1=0$

How many distinct roots does the above equation have?
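One route to the count (my sketch; the page itself gives no solution): the coefficients are palindromic, so substitute $u = z^2$ and then $t = u + 1/u$.

```latex
% u = z^2:  u^4 + 4u^3 - 10u^2 + 4u + 1 = 0;  divide by u^2, set t = u + 1/u:
(t^2 - 2) + 4t - 10 = 0 \;\Longrightarrow\; t^2 + 4t - 12 = 0
\;\Longrightarrow\; t = 2 \ \text{or}\ t = -6.
% t =  2:  u^2 - 2u + 1 = 0, so u = 1 (double root), giving z = \pm 1;
% t = -6:  u^2 + 6u + 1 = 0, so u = -3 \pm 2\sqrt{2} (both negative),
%          giving four distinct purely imaginary values of z.
% Total: 6 distinct roots (the degree-8 polynomial has z = \pm 1 doubled).
```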
2017-12-14 04:30:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6039143204689026, "perplexity": 4672.298663625893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948539745.30/warc/CC-MAIN-20171214035620-20171214055620-00316.warc.gz"}
http://libros.duhnnae.com/material/2017may2/149408822034-and-mesons-in-the-Dyson-Schwinger-approach-using-a-generalization-of-the-Witten-Venezia.php
# $\eta$ and $\eta'$ mesons in the Dyson-Schwinger approach using a generalization of the Witten-Veneziano relation

The description of the $\eta$ and $\eta^\prime$ mesons in the Dyson-Schwinger approach has relied on the Witten-Veneziano relation. The present paper explores the consequences of using instead its generalization recently proposed by Shore. On the examples of three different model interactions, we find that irrespective of the concrete model dynamic

Authors: D. Horvatic; D. Blaschke; Yu. Kalinovsky; D. Kekez; D. Klabucar

Source: https://archive.org/
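For orientation (my addition; the abstract itself does not state the formula), the Witten-Veneziano relation referred to ties the $\eta$-$\eta'$ masses to the topological susceptibility $\chi_{\mathrm{YM}}$ of pure Yang-Mills theory. In one common normalization (conventions for $f_\pi$ differ between papers):

```latex
m_\eta^2 + m_{\eta'}^2 - 2 m_K^2 \;=\; \frac{2 N_f}{f_\pi^2}\,\chi_{\mathrm{YM}},
\qquad N_f = 3 .
```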
2018-09-21 14:25:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8391474485397339, "perplexity": 5576.185230048357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157203.39/warc/CC-MAIN-20180921131727-20180921152127-00209.warc.gz"}
http://tex.stackexchange.com/questions/62886/poor-leading-in-titles
# Poor leading in titles

Here's an example of terrible (and non-constant) leading in titles that is causing me trouble:

\documentclass[a4paper, 12pt]{memoir}
\usepackage{fontspec}
\setmainfont[Mapping=tex-text,Numbers=OldStyle]{Cardo}
\begin{document}
\begin{titlingpage}
\begin{center}
{
\fontsize{36pt}{72pt}\selectfont \bfseries
A Title To Show Off The Poor Leading In Titles and How the aaaaaaaaaaaaaaaaaaaaa Skip in Fontsize is Ignored
}
\end{center}
\end{titlingpage}
\end{document}

Note how the second argument in \fontsize is completely ignored. I've tried various things to fix this, to no avail. Firstly, why does it do this and, secondly, what can I do to fix it?

NOTE: I would post an example image, but I don't have enough reputation yet. Sorry.

- Welcome to TeX.sx! As a new user without image posting privileges, simply include the image as normal and remove the ! in front of it to turn it into a link. A moderator or another user with edit privileges can then reinsert the ! to turn it into an image again. – N.N. Jul 10 '12 at 19:42
- Fantastic, thanks very much for the useful tip. – rbobbington Jul 10 '12 at 20:16

\begin{titlingpage}
\begin{center}
\fontsize{36pt}{72pt}\selectfont\bfseries
A Title To Show Off The Poor Leading In Titles and How the aaaaaaaaaaaaaaaaaaaaa Skip in Fontsize is Ignored
\end{center}
\end{titlingpage}

The extra braces are not necessary, as the font size declaration will end with the environment (center) it's in. A paragraph can have only one baselineskip; what happens with the extra braces is that when TeX evaluates the end-of-paragraph, the effect of the font size declaration has already ended, so the baseline skip used is the default one. In other cases the braces may be necessary, because the font change is not in some environment; issue \par before the closing brace in those situations. Also, \selectfont is not necessary in this case, because \bfseries executes it anyway. The image shows "before and after" removal of the extra braces.

- Excellent, thanks very much for your help. The brackets were there because in my actual document there are multiple things within the center environment, but I never thought to remove them to see what happens. – rbobbington Jul 10 '12 at 20:17
- @rbobbington Just add \par before the closing brace, as explained. – egreg Jul 10 '12 at 20:20
- @rbobbington: I don't know the {titlingpage} environment but I prefer to use \centering instead of {center} on {titlepage} since the latter doesn't add extra white space, i.e. vertical skips before and after it. – Tobi Jul 10 '12 at 21:07
- @Tobi I too thought about it. But it depends whether all elements are centered. – egreg Jul 10 '12 at 21:09
- Sure, but even to center only some lines I use {\centering text\par} to have a better control of the white space with \vspace. I use {center} in the document body where the extra space is desirable. :) – Tobi Jul 10 '12 at 21:11
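A hedged sketch of the \par fix mentioned in the answer, for the case where the braces must stay because there is no enclosing environment to end the group (the title text here is placeholder):

```latex
{\fontsize{36pt}{72pt}\selectfont\bfseries
A Title With The Requested Baseline Skip\par} % \par ends the paragraph while
% the size declaration is still in force, so the 72pt leading is honored
```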
2014-11-26 16:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870057463645935, "perplexity": 1729.4757536868049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007150.95/warc/CC-MAIN-20141125155647-00185-ip-10-235-23-156.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2563932/independence-of-supremum-of-random-variables
# Independence of supremum of random variables

Consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a countable set of random variables $\{X_n \mid n \in \mathbb{N}\}$ such that each random variable is independent of a fixed sub sigma-algebra $\mathcal{G} \subseteq \mathcal{F}$. Can one conclude that $\sup_{n \in \mathbb{N}}X_n$ is independent of $\mathcal{G}$?

No. Consider the space $\Omega=\{0,1\}^2$ consisting of two coin flips, with $\mathcal{F}=2^\Omega$ and $\mathbb{P}$ the fair coin-flipping measure. Let $\mathcal{G}$ be the $\sigma$-algebra whose informational content is whether these two coin flips agree or not. The canonical projections are independent of $\mathcal{G}$, but their supremum is not: if it is $0$, the coin flips must agree, since both must be $0$.
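Writing $X_1, X_2$ for the two canonical projections (my notation, added for illustration), the failure of independence can be checked numerically:

```latex
\mathbb{P}\bigl(\max(X_1,X_2) = 0\bigr) = \tfrac14, \qquad
\mathbb{P}(X_1 = X_2) = \tfrac12, \qquad
\mathbb{P}\bigl(\max(X_1,X_2) = 0,\; X_1 = X_2\bigr) = \tfrac14
\;\neq\; \tfrac14 \cdot \tfrac12 .
```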
2021-12-04 03:56:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9728890061378479, "perplexity": 148.84173170077398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362930.53/warc/CC-MAIN-20211204033320-20211204063320-00607.warc.gz"}
http://ec.citizendium.org/wiki/Atom_(science)
# Atom (science)

An atom (from the Greek atomos, indivisible)[1][2] is the smallest physical entity (particle) that can represent a chemical element, in the sense that it retains the properties of a sensible, tangible piece of that element. Nature has produced more than 90 different elements (e.g., copper, gold, aluminium), each characterized by a species-specific number of protons in its atoms' nuclei; each element in turn comes in several atomic varieties (called isotopes), each isotope characterized by a specific number of neutrons in its atoms' nuclei. Atoms of the chemical element carbon, for example, each contain six protons in their nuclei but differing numbers of neutrons: most carbon atoms contain six neutrons, about 1% of them contain seven, and tiny fractions of about a dozen other varieties contain fewer or more.[3]

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generations of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or whatever you wish to call it) that all things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.

—Richard P. Feynman[4]

Atoms serve as the building blocks of all the matter that we come in contact with on a daily basis, solids, liquids and gases; matter composed of a single or assorted combinations of chemical elements; matter that can undergo spontaneous chemical change; matter that chemists can, through chemical reactions, manipulate to produce all of the synthetic products that humans use. Those facts and potentialities derive from the atomic hypothesis, first proposed by the ancient Greeks (5th century BCE: Leucippus[5], Democritus[6]) and first established on an experimental and quantitative basis in the early 1800s by the English scientist, John Dalton.

Individually, atoms are astonishingly small (carbon atom diameter: ~2.2·10^-8 cm = 0.22 nm[7], or about 100,000 to 1,000,000 times smaller than a grain of sand; the diameter of a grain of sand is between 2·10^-3 cm = 20,000 nm and 2·10^-2 cm = 200,000 nm[8]). Accordingly, it takes immense numbers of them to form most objects with which we are familiar. An average adult human body, for example, contains an estimated 7·10^27 atoms.[9]

Atoms were once thought to be the fundamental building blocks of the entire universe. We now know, however, that atoms are made up of a few smaller subatomic particles, the same set of which, in differing combinations, make up all the different chemical elements. Many of the chemical elements have non-stable, so-called radioactive isotopes, whose atomic nuclei emit matter and energy.

## Introduction

It would be a poor thing to be an atom in a universe without physicists, and physicists are made up of atoms. A physicist is an atom's way of knowing about atoms.
—George Wald (1906-1997), Nobel Laureate in Physiology/Medicine, 1967[10]

The ancient Greek (5th century BCE[5][6]) idea that all matter in the universe consists of tiny indivisible entities had no empirical support at the time. Much later, the English polymaths Robert Boyle (1627-1691) and Isaac Newton (1643-1727) championed the idea, as "corpuscularism", and added to the argument. The search for the most fundamental entities in the universe has remained one of the most important driving forces in science to this day.

The particles that today bear the name "atom" were first encountered by early chemists, who discovered that they could not reduce them to a more 'elementary' substance by chemical methods. The modern atom serves as the fundamental particle for chemistry and as the smallest stable structure for engineering. Nuclear energy is generated either by breaking an atom into two or more smaller atoms (fission), or by combining two smaller atoms into a larger one (fusion).

## History of the Atom

### The nineteenth century

In the beginning of the nineteenth century, men such as John Dalton, Joseph-Louis Gay-Lussac and Amedeo Avogadro hypothesized that matter consists of minute particles, atoms as Dalton liked to call them, or molecules (a term preferred by Avogadro and already used by Lavoisier). However, the difference between the two concepts was not clear. One of the main topics of the first international chemistry conference, the historic 1860 Karlsruhe conference, was to clear up the confusion about the difference between atoms and molecules. Stanislao Cannizzaro offered, for the first time in the history of science, a very clear definition of atoms as distinguished from molecules. To him the atom was the smallest quantity of each element which enters as a whole into the molecules which contain it. His statements represented a most remarkable contribution to the clarification of the issues debated at the time concerning the relations between atoms and molecules in both organic and inorganic compounds.

### Albert Einstein gives molecules a reality in his explanation of Brownian motion

Employing the kinetic theory of heat and a mathematical approach to changes in velocity characterized by random motion, in 1905 Albert Einstein gave a convincing account of the microscopically observable random motion of pollen particles suspended in a liquid, previously noted by the botanist Robert Brown, attributing the motion to random collisions of the pollen particles with the molecules of the liquid in thermal motion.[11]

### J.J. Thomson's "Plum Pudding" Model

The first widely accepted model of the atom was J.J. Thomson's model of a positively charged cloud with negatively charged "corpuscles," electrons, interspersed throughout. Thomson proposed and searched for configurations in which these particles had normal modes of vibration and were stable. He built his model using only electrostatic forces, which required that the electrons be in constant motion; this led to further difficulties, and he was never able to create a model that matched observed data.

### Rutherford Model

Ernest Rutherford, one of Thomson's students, disproved his theory in his now famous scattering experiment by showing that there was a dense, positively charged core at the center of each atom. Rutherford postulated that the structure of an atom more closely resembled a solar system, with the nucleus at the center and electrons orbiting around it.
Unfortunately, classical mechanics predicts that these orbits would be unstable: they would decay in less than a microsecond while emitting a continuous spectrum of light. However, it was already known that each element emitted a unique spectrum of discrete lines. Rutherford could not explain why these orbits did not decay, nor why atoms did not emit the continuous spectrum that these decays would have caused.

### Bohr's model of the Hydrogen Atom

In 1913 Niels Bohr[12] devised the first quantum mechanical model of a (one-electron) atom.[13] He resolved the difficulties of Rutherford's model by making the non-classical assumption that only discrete and stationary energy states are allowed for the electron. Further, Bohr conjectured that the angular momentum of each electron was constrained to be a positive integer times Planck's constant h divided by 2π. Since in Bohr's model electrons can only make jumps between the stationary states, the atomic spectral lines are explained by the discreteness of the energy differences of the states. Emission of a photon occurs when an electron falls down from a state with energy E2 to a state with lower energy E1. The frequency ν of the emitted photon is given by

$h\nu = E_2 - E_1 .$

Absorption of a photon of energy hν occurs when hν "matches" an atomic energy difference: an atom in a certain state jumps to a state of higher energy such that the energy difference is equal to hν. Clearly, this requirement puts a strict constraint on the frequencies of the photons that may be absorbed by the atom. Bohr's theory was for one-electron atoms only, but he conjectured that many-electron atoms would have a similar structure. The extension of his theory to many-electron systems led a decade later to Heisenberg's matrix mechanics and Schrödinger's wave mechanics.
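As a worked instance (my numbers, the standard hydrogen values, not from the article): with $E_n = -13.6\,\mathrm{eV}/n^2$, the $n = 2 \to n = 1$ jump gives

```latex
h\nu = E_2 - E_1 = (-3.4\,\mathrm{eV}) - (-13.6\,\mathrm{eV}) = 10.2\,\mathrm{eV},
\qquad
\lambda = \frac{hc}{h\nu} \approx \frac{1240\,\mathrm{eV\,nm}}{10.2\,\mathrm{eV}}
\approx 122\,\mathrm{nm},
```

which is the ultraviolet Lyman-alpha line of hydrogen.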
[Image: A common concept of an atom's structure (PD drawing: Lawrence Berkeley National Laboratory)]

### Today's view

For most practical purposes Bohr's model has proven an acceptable approximation. However, the quantum mechanical interpretation is that the electrons are spread out in various probability distributions around the nucleus rather than actually orbiting it.

## Structure of the Atom

Atoms are made of a dense nucleus, formed by combinations of the two nucleons (positively charged protons and zero-charge neutrons) and surrounded by a much larger "cloud" of electrons. The vast majority of the volume of an atom is empty space. The number of protons contained in the nucleus determines the atomic number, and in turn which element the atom is classified as. The number of neutrons further specifies the isotope of that element. The number of electrons surrounding the nucleus is typically assumed to be equal to the number of protons, in order to keep the entire atom electrically neutral. Atoms that are not neutral are called ions; they are designated by their charge in units of elementary charge, which is equal to the negative of the number of surplus electrons present around the atom. Thus an atom with one extra electron is charged -1, and one with a missing electron is charged +1.

### Forces in the atom

The nucleus of the atom contains a high concentration of positively charged particles with no counterbalancing negatively charged particles to keep it stable. In order to explain the existence of the nucleus, scientists introduced the two nuclear forces, the strong force and the weak force. We now understand that the nucleus is held together by the residual strong force despite its significant positive charge. The electrons which surround the atom are electrostatically attracted to the nucleus due to their negative charge.

[Image: Modern concept of the quantum atom]

### Electron Quantum States

Electrons surround the nucleus of an atom, each in a unique quantum mechanical state in the atom's structure. These states of well defined energy are called atomic orbitals and are organized from the lowest energy to the highest. Each orbital can be uniquely defined by four quantum numbers: 'n' the shell, 'l' the angular momentum, 'm' the magnetic angular momentum, and 's' the spin. Shell numbers can only be natural numbers, and no shell number over 7 has ever been observed. Each shell has n² spatial orbitals, which are grouped by angular momentum 'l' into subshells. These subshells are conventionally given the names 's', 'p', 'd', 'f', 'g', and 'h' rather than numerals. Each subshell contains (2l+1) suborbitals 'm', numbered −l, ..., 0, ..., l. Finally, each suborbital can contain two electrons of different spin, $+\tfrac{1}{2}$ and $-\tfrac{1}{2}$, for a total of 2n² electrons per shell. The valence shell, the highest-energy shell which contains electrons, is responsible for most of an element's chemical properties.

An atom with electrons in higher energy states without first filling lower states is considered "excited" and is likely to emit this excess energy in the form of a photon, of energy equal to the difference between the current state and a lower energy state, until it is no longer excited. These emissions are responsible for the emission and absorption spectra that atoms produce. In fact it is this property that allows us to discern the composition of stars and planets from a great distance.

### Nuclei Quantum States

The nucleus of an atom itself also has shell-like behavior, not dissimilar to electron shells. This leads to some extra-stable and semi-stable states when the number of protons or neutrons in the nucleus is equal to one of the "magic numbers" 2, 8, 20, 28, 50, 82, and 126, which are believed to correspond to complete shells.[14] Stable nuclei fall upon a "line" of stability which starts very close to equal numbers of protons and neutrons and moves toward favoring neutrons for heavier nuclei. This is a result of the fact that the electrostatic repulsion of two protons drops in magnitude much more slowly than the residual strong force. Either way, nucleons strongly prefer to bond in even numbers.
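The 2n² bookkeeping from the Electron Quantum States section above is easy to check numerically. A minimal sketch (my code, assuming only the rules as stated there; the subshell letters are the conventional ones):

```python
# For each shell n there are n^2 spatial orbitals, grouped into subshells
# l = 0..n-1 with (2l+1) suborbitals each; two spins per suborbital
# gives 2n^2 electrons per shell.

SUBSHELL_NAMES = "spdfgh"  # conventional letters for l = 0, 1, 2, ...

def shell_capacity(n: int) -> int:
    """Electrons that fit in shell n (sum over subshells, times 2 spins)."""
    return sum(2 * (2 * l + 1) for l in range(n))  # equals 2 * n**2

for n in range(1, 5):
    subshells = ", ".join(
        f"{n}{SUBSHELL_NAMES[l]} ({2 * (2 * l + 1)} e-)" for l in range(n)
    )
    print(f"shell n={n}: {shell_capacity(n)} electrons  [{subshells}]")

# Output:
# shell n=1: 2 electrons   [1s (2 e-)]
# shell n=2: 8 electrons   [2s (2 e-), 2p (6 e-)]
# shell n=3: 18 electrons  [3s (2 e-), 3p (6 e-), 3d (10 e-)]
# shell n=4: 32 electrons  [4s (2 e-), 4p (6 e-), 4d (10 e-), 4f (14 e-)]
```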
### Decay

Most combinations of nucleons are inherently unstable and undergo a number of radioactive decays in order to form more stable nuclei. In all of the common decays a particle is emitted from the nucleus in order to compensate for some instability in the atom.

• Alpha decay most frequently occurs in atoms which are simply too big; atoms are limited in size because the residual strong force which holds them together only acts over very small distances, so that the rate of electrostatic repulsion grows faster than the rate of strong attraction as the nucleus grows. Alpha decay emits an alpha particle, which is denoted with the Greek letter α.
• Beta+ decay occurs in atoms which are proton heavy. In this decay a proton decays into a neutron and emits both a positron and a neutrino. The only natural Beta+ emitter is 40K.
• Beta- decay occurs in atoms which are neutron heavy. In this decay a neutron decays into a proton and emits both an electron and an antineutrino. A lone neutron can beta- decay.
• Gamma decay occurs in atoms where the nucleus is in an excited state. The nucleus turns this excess energy into a high energy photon, typically called a gamma ray. Gamma decays typically follow either an alpha or beta decay which leaves the nucleus with excess energy.
• Electron capture occurs when a proton inside the nucleus captures an electron and becomes a neutron.

## References

1. http://www.webster.com/dictionary/atom
2. Feynman RP. (1963, 1989, 1995, 2011) Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. With Robert B. Leighton and Matthew Sands. Introduction by Paul Davies. Quote from Chapter 1, "Atoms in Motion," section "Matter is made of atoms." Philadelphia: Basic Books, Perseus Books Group. Kindle edition. eISBN 978-0-465-02529-9
3. Leucippus. Stanford Encyclopedia of Philosophy.
4. Democritus. Stanford Encyclopedia of Philosophy.
5. The size of atoms. Hyperphysics.
6. Diameter of a Grain of Sand. The Physics Factbook. See website for source citations.
7. How many atoms are there in the human body? Thomas Jefferson National Accelerator Facility.
8. Warren WS. (2000) The Physical Basis of Chemistry. 2nd edition. Academic Press. George Wald quote, p. xi.
9. Renn J. (2005) Einstein's invention of Brownian motion. Ann. Phys. (Leipzig) 14, Supplement, 23-37. | Google Books preview of Einstein's Annalen papers: the complete collection 1901-1922, Volume 14 of Annalen der Physik. Jürgen Renn (editor).
10. N. Bohr, Philosophical Magazine 26, p. 1 (1913)
11. Fowler M. The Bohr Atom. In: Modern Physics website, Michael Fowler, University of Virginia. Sections: Bohr Comes to Cambridge | What Determines Atomic Size? | Nicolson: a Clever Idea about a Wrong Model | Bohr Returns to Denmark | Bohr Changes his Mind about Spectra | Bohr Finds the Rydberg Constant without Doing an Experiment | Derivation of the Angular Momentum Quantization from the Correspondence Principle.
12. Paul A. Tipler, Ralph A. Llewellyn (2003), Modern Physics ISBN 0716743450
2022-06-27 18:46:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5900391936302185, "perplexity": 989.5832076265064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00267.warc.gz"}
http://www.leancrew.com/all-this/
# Dollars to doughnuts

When Apple first switched its iTunes affiliate program from Linkshare to Performance Horizon Group, I was happy with the change. The Linkshare website was hard to navigate, horribly slow, and presented its data in laughably opaque forms. PHG, in contrast, was snappy, with easy-to-read graphs and tables. But I've come to learn that PHG's payment system is holding onto more of my affiliate fees than is reasonable.

Let me say at the outset that the fees I get from iTunes links don't amount to much and probably never will—I just don't post very many affiliate links. By the end of 2014, for example, I will have collected a grand total of $207.17 for the year. It pays for the web hosting, but not much else. Which is fine. I know my reach is limited, and I'm not trying to make my living as an affiliate marketer. But I am annoyed at PHG's policies for releasing funds.

First, there's the 60-day delay between the sale and the payment of affiliate fees. Given that all these transactions are electronic, I don't see why the delay is more than 30 days.

Second, there's the $30 threshold that has to be met before payment is sent. I understand that PHG doesn't want to accumulate the administrative costs that come with lots of very small payments, but Amazon's threshold is only $10 if you have the money sent directly to a bank account.1

Finally, there's the fact that PHG applies the $30 threshold not to the aggregate of all fees, but to each individual currency used in the various iTunes Stores. It's this last policy that really bothers me. At the moment I have the equivalent of about $100 in fees from non-US purchases sitting at PHG, but because they're spread across several currencies, I can't collect any of it.

Again, this is not much money, but it is 50% of what I made from the US iTS this year. And there's the principle of the thing: I made these sales, so I should get the commission without having to wait years before piling up enough AUD, CAD, CHF, DKK, EUR, GBP, JPY, NZD, ETC. Because I seldom look at my PHG account, I didn't realize until today that my non-USD fees were also non-trivial. I'd suggest a class action lawsuit, but…

1. Yes, Amazon's threshold rises to $100 if you want a physical check cut, but that's because they don't want to be in the check-cutting business.

# What is Drafts for?

This morning, I tweeted about the half-off sale of Drafts 4, which is part of the App Santa promotion.

If someone you care about is still using Apple's Notes app, buy them Drafts 4 before the half-off sale is over. itunes.apple.com/us/app/drafts-…
Dr. Drang (@drdrang) Dec 17 2014 9:10 AM

Although this was generally well received, I did get a little pushback from people who like Notes or use another iOS text editor. The argument in favor of Notes was about what I expected: syncing across devices (including Macs) is seamless and automatic. Jason Snell said essentially the same thing on the most recent episode of Upgrade when he admitted to occasionally using Notes to a shocked and disappointed Myke Hurley.

It's not a bad reason for using Notes, but to accept it I have to suspend my disbelief that iCloud syncing really is seamless and automatic. I'd much rather use Notesy or Editorial, which sync via the more reliable Dropbox and let me access my notes using my editor of choice on the Mac.

So if I have Notesy or Editorial, why bother with Drafts? It comes down to something David Sparks (or was it Merlin Mann?)
said: “Drafts is the best place for text to start on your iOS device.” There are two reasons for that:

1. It launches quickly and puts you right into a blank draft with the keyboard up and the cursor blinking. There’s no messing around with filenames or sorting through what you’ve written before; Drafts is optimized to capture your thought right now.
2. Once you’ve captured it, Drafts can send it pretty much anywhere: Dropbox, Evernote, Google Drive, iCloud Drive, Facebook, Twitter, Mail, Messages, Reminders, Fantastical. It can create new documents or append to old ones. If your note was written in Markdown, it can format it for you. If the name weren’t already being used by an email client, Drafts could just as easily be called Dispatch.

In other words, Drafts isn’t so much a home for text as it is an incubator, a place for creating and processing text to be used elsewhere. In earlier versions, the processing was done primarily through URL schemes that handed the text off to other apps. With Drafts 4, the processing actions have been expanded, and you can now manipulate the text in a draft in either useful or silly ways with JavaScript. If you don’t feel comfortable building your own actions, there’s a directory of actions built by others that you can install with just a tap or two.

There’s no question but that using Drafts effectively takes more effort than using Notes. But it frees your text from the Notes silo and allows it to go almost anywhere. Well worth $5.

# One text editor to rule them all

I ended my post on BBEdit finding with this line: Serious text editors have a depth that rewards their users. Of course, to collect those rewards you need to use the editor enough to understand its ways.

This is what Jeff Hunsberger has decided to do with Sublime Text, forgoing the special-purpose editors he’s been using for this or that and concentrating on one that can do this and that.

Once upon a time, text editors and word processors typically had their own complicated editing commands, commands that weren’t shared with other editors. Switching between editors was hard, and to achieve any kind of efficiency, you had to choose one and stick with it. If possible, you did all your writing in that one program. In the Unix world, the EDITOR and VISUAL environment variables allowed users to choose the editor that would be launched when a command required the entry of more than just a word or two. Email clients like elm and pine came with their own built-in editors but also allowed users to choose whatever they liked—typically vi or Emacs.

This changed in the 80s as operating systems began to impose standards of consistency in text handling. On the Mac, dragging, double-clicking, and shift-clicking worked the same in every program, as did the holy trinity of ⌘X, ⌘C, and ⌘V. Windows followed suit, but with the Control key instead of Command. X Windows, with its three-button mouse, used middle and right mouse clicks to cut, copy, and paste. This made it easy to switch between text editors because the basics were the same in every one. The muscle memory developed in one program worked just as well in all the others. And for a lot of people, the basics were all that mattered. The consistency the Mac and Windows brought to computing made it accessible to an awful lot of people.

But if you do a lot of writing, or a lot of manipulation of textual data, you’ll want to move beyond the basics, and that requires a text editor with depth, an editor you can shape to your own particular needs.
And once you’ve decided to dive deep, you’ll want to focus on one editor so you can learn it well. It’s true that external utilities like Services in OS X (and whatever is the equivalent in Windows) can provide simple text editors with something like the control you can get with a serious editor, but it’s just not the same. There are still good reasons for doing what Jeff Hunsberger is doing: choosing a powerful text editor, learning it well, and doing all your writing in it.

Which editor should be your one text editor? I couldn’t possibly tell you. Every one has its strengths, and every one has its proselytizers. You’ll have to try a few on for size and see which is the best fit.

By the way, if you’ve been wondering how I could put an LOTR-esque title on a post about text editors and not link to this spectacular article by Kieran Healy, you can stop wondering now.

# Atlas insulated vinyl gloves

I don’t believe I’ve ever recommended an item of clothing here before, but there’s a first time for everything. I spent most of the past week outside in the cold, snow, rain, and wind, and these gloves kept my hands warmer and drier than any gloves I’ve ever had before.

They’re Atlas 460 insulated PVC gloves, and they’re neither fashionable nor suitable for detail work. But if you’re going to be out in the weather working for a long time you’ll appreciate them. I used them while moving large pieces of steel and wire rope, and they kept my hands protected and toasty. The PVC stayed flexible in sub-freezing temperatures, was thick enough to keep them from being cut on sharp edges, and the grease they accumulated cleaned off easily with isopropyl alcohol.
2014-12-22 06:13:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19798928499221802, "perplexity": 2235.291091700665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774718.30/warc/CC-MAIN-20141217075254-00152-ip-10-231-17-201.ec2.internal.warc.gz"}
https://academicarchive.snhu.edu/items/b4e09640-e1a3-4719-aaf0-b6ef424d27c8
## Hartford Farms

1983

Winne, Mark

##### Publisher

Southern New Hampshire University

##### Abstract

As stated in the thesis project, "The goal of this project was to secure adequate financing for the construction and start up of Hartford Farms, a commercial food producing greenhouse wholly-owned by two Hartford-based non-profit agencies - the Hartford Food System and Southend Community Services. Due to a variety of unconventional factors that will be examined throughout this paper, the venture was required to seek alternative or non-standard investment capital consisting of grants and low interest loans. The project required almost 15 months to complete and resulted in financing commitments totaling $320,000 ($140,000 of this is available after 1984 for expansion)." (Library-derived description)
2023-03-30 11:43:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.263476699590683, "perplexity": 9210.479209270361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949181.44/warc/CC-MAIN-20230330101355-20230330131355-00129.warc.gz"}
https://boredofstudies.org/threads/what-determines-band-6-e4.397753/
# What determines Band 6 / E4?

#### jxdesml
##### New Member

So I know your HSC mark is your internal assessment mark (moderated and aligned?) + your external exam mark. And I know that the raw mark cutoffs for a band 6 for most of the subjects I do are around 85, but what I don’t understand is: is the band you get solely due to your exam mark? So for Extension 2 Maths for example, if the cutoff this year is 75, and I get 70 in the exam, that means I get an E3, but if my school mark is higher, and my rank is high (2/10 in my case), can I possibly get an E4 still? Idk if that makes any sense.

#### Jojofelyx
##### Well-Known Member

So I know your HSC mark is your internal assessment mark (moderated and aligned?) + your external exam mark. And I know that the raw mark cutoffs for a band 6 for most of the subjects I do are around 85, but what I don’t understand is: is the band you get solely due to your exam mark? So for Extension 2 Maths for example, if the cutoff this year is 75, and I get 70 in the exam, that means I get an E3, but if my school mark is higher, and my rank is high (2/10 in my case), can I possibly get an E4 still? Idk if that makes any sense.

You sure can. A friend of mine managed to get like 91 that way. Exam mark of 89, and internal moderated of 93, which is how they managed to get an E4.

#### jimmysmith560
##### Le Phénix Trilingue
Moderator

Taking, say, an Examination Mark of 70 together with a moderated Assessment Mark of 90, the resulting HSC mark would be $\frac{70+90}{2}=80$, placing the student in the band 5 zone for this particular subject. Regarding your Mathematics Extension 2 question, and considering the example that was given above, I believe a band E4 would be possible. A factor that this will depend on is the performance of whoever achieves the second-highest Examination Mark, as this is the mark that will be used to moderate (adjust) your Assessment Mark. I hope this helps!

#### jxdesml
##### New Member

Taking, say, an Examination Mark of 70 together with a moderated Assessment Mark of 90, the resulting HSC mark would be $\frac{70+90}{2}=80$, placing the student in the band 5 zone for this particular subject. Regarding your Mathematics Extension 2 question, and considering the example that was given above, I believe a band E4 would be possible. A factor that this will depend on is the performance of whoever achieves the second-highest Examination Mark, as this is the mark that will be used to moderate (adjust) your Assessment Mark. I hope this helps!

Jimmy coming in clutch. thank you!

#### johnnys20
##### New Member

Are the "Examination Marks" listed in the HSC results table scaled or raw?

#### jimmysmith560
##### Le Phénix Trilingue
Moderator

Are the "Examination Marks" listed in the HSC results table scaled or raw?

The Examination Marks you receive are aligned and contribute 50% of your respective HSC marks. You are not informed of your raw marks, unless you wish to request them from NESA.

#### johnnys20
##### New Member

The Examination Marks you receive are aligned and contribute 50% of your respective HSC marks. You are not informed of your raw marks, unless you wish to request them from NESA.

I see, thanks a lot

#### akhan324
##### Member

The Examination Marks you receive are aligned and contribute 50% of your respective HSC marks. You are not informed of your raw marks, unless you wish to request them from NESA.

what does the assessment mark mean then?
#### Jojofelyx
##### Well-Known Member

what does the assessment mark mean then?

internal moderated = assessment mark

#### akhan324
##### Member

internal moderated = assessment mark

oh that’s according to your cohort and stuff right?

#### jimmysmith560
##### Le Phénix Trilingue
Moderator

what does the assessment mark mean then?

As mentioned above, the Assessment Mark refers to a student's internal mark. This mark is subject to the moderation process and contributes 50% of a student's HSC mark.

oh that’s according to your cohort and stuff right?

That's correct. The Assessment Mark relies on your rank relative to your cohort in a particular subject, in addition to the performance of your cohort in the HSC exam of that particular subject. This is because the moderation process uses Examination Marks achieved by a cohort in a particular subject to adjust/determine the Assessment Marks for that cohort in that subject.
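To make the 50/50 weighting quoted in this thread concrete, here is the arithmetic behind the friend's result mentioned earlier (Examination Mark 89, moderated Assessment Mark 93):

$$\text{HSC mark} = \frac{\text{Examination Mark} + \text{Assessment Mark}}{2} = \frac{89 + 93}{2} = 91$$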
2021-12-03 09:29:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6731634736061096, "perplexity": 3070.485361643589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00020.warc.gz"}
https://www.skyofwar.net/
# Description

$n,k\leqslant 10^5$

# Description

$n \leqslant 2\times 10^5$

# Description

$N \leqslant 5 \times 10^5, \ T<2, \ K \leqslant 10^9$

# Description

Alice wants to obtain a sequence of length $n$ in which every number is a positive integer not exceeding $m$, and the sum of the $n$ numbers is a multiple of $p$. Alice also hopes that at least one of the $n$ numbers is a prime. Alice wants to know how many sequences satisfy her requirements.

$1 \leq n \leq 10^{9}, 1 \leq m \leq 2 \times 10^{7}, 1 \leq p \leq 100$

(A small brute-force reference sketch appears at the end of this page.)

# Set of Sets

A set of sets $\Bbb S$ is said to be pairwise disjoint if and only if:

$$\forall X, Y \in \Bbb S: X \ne Y \implies X \cap Y = \varnothing$$

Here, $\cap$ denotes intersection, and $\varnothing$ denotes the empty set. Hence we can say that the elements of $\Bbb S$ are pairwise disjoint.

# Family of Sets

An indexed family of sets $\left \langle {S_i} \right \rangle_{i \mathop \in I}$ is said to be pairwise disjoint if and only if:

$$\forall i, j \in I: i \ne j \implies S_i \cap S_j = \varnothing$$

Hence the indexed sets $S_i$ themselves, where $i \in I$, are referred to as being pairwise disjoint.
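Returning to Alice's counting problem above: a tiny brute-force reference in Python may help pin down the statement. It is only feasible for very small $n$, $m$, $p$ (the stated constraints call for a number-theoretic/DP solution), and the function names are my own.

from itertools import product

def is_prime(x):
    return x >= 2 and all(x % d for d in range(2, int(x ** 0.5) + 1))

def count_sequences(n, m, p):
    # Count length-n sequences of positive integers <= m whose sum is
    # divisible by p and which contain at least one prime.
    total = 0
    for seq in product(range(1, m + 1), repeat=n):
        if sum(seq) % p == 0 and any(is_prime(v) for v in seq):
            total += 1
    return total

print(count_sequences(3, 4, 2))  # small sanity check only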
2019-04-25 12:58:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9968286752700806, "perplexity": 1442.4776492524898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00281.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/4/lesson/4.1.2/problem/4-23
### Home > APCALC > Chapter 4 > Lesson 4.1.2 > Problem 4-23

4-23. For parts (a) and (b) below, trace $f\left(x\right)$ on your paper. Then, using a different color, sketch the graph of $y = f^\prime\left(x\right)$ for the function given.

• $f\left(x\right)$ has a cusp at $x = 3$, so $f^\prime\left(x\right)$ should have a jump at $x = 3$. For example, $f(x) = \left|x - 3\right|$ has a cusp at $x = 3$, and its derivative jumps from $-1$ to $1$ there.
• The maxima and minima on a smooth continuous graph are the locations of zeros on the derivative graph.
2021-09-19 13:48:36
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6815526485443115, "perplexity": 550.6540457959957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00584.warc.gz"}
https://mathoverflow.net/questions/171709/predual-of-a-direct-sum-of-banach-spaces/171713
# Predual of a Direct Sum of Banach Spaces

This may be basic, and if it is I apologize, but I have found no references to it in the literature. I would appreciate a reference at least if I am wrong. I have supplied background for those interested, but you are more than welcome to skip it - I have indicated where it begins and ends.

BACKGROUND STARTS HERE

I am currently dealing with certain questions regarding, in very broad terms, the behavior of duals and preduals of Banach spaces, especially trying to establish simple ways to test for the existence of a predual. One class of Banach spaces for which there is a handy iff statement on when they are dual spaces is the family of spaces of the form $C(X)$, $X$ a compact Hausdorff space: a predual exists if and only if $X$ is hyperstonean, i.e. extremally disconnected (the closure of any open set is open) and having sufficiently many normal measures (in a sense which is better left described elsewhere, e.g. vol. I of Takesaki's Theory of Operator Algebras). Cumbersome as this description is, the proof is more cumbersome still, and indeed, it doesn't seem right for a discussion of hyperstonean spaces to be needed to prove that $C([0,1])$ has no predual* (this seems like a silly objection, but bear with me). Fortunately, it is not needed; in fact, it can be proved that for a compact metric space $K$, $C(K)$ never has a predual, by combining the two facts that 1) $c_0$ embeds isometrically into any such space and 2) $c_0$ does not embed isometrically into any separable space which has a predual.**

*Note that the standard proof based on Krein-Milman only works in the real case; here we are dealing with complex-valued continuous functions.

**Not that this fact is so trivial - in fact, I have not found a proof of it which doesn't go through things like points of weak-to-norm continuity and the like.

All well and good; however, these methods outlive their usefulness quickly, as they immediately break down even for spaces of the form $C^k([a,b])$. But we know quite well that the latter space can be identified with the direct sum $\mathbb{C}^k \oplus C([a,b])$ - as such, due to this and other examples, it is useful to know whether the presence of a non-dual factor in a direct sum necessarily makes the direct sum itself non-dual.

BACKGROUND ENDS HERE

Question: Let $X$ and $Y$ be Banach spaces such that the direct sum $X \oplus Y$ has a predual $V$. Is it true that $V$ must be of the form $V_X \oplus V_Y$, where $V_X$ and $V_Y$ are preduals of $X$ and $Y$, respectively? In other words: is $X \oplus Y$ a dual space iff $X$ and $Y$ are dual spaces?

I actually have something of an idea for a proof. Suppose $V$ is a predual for $X \oplus Y$. Define subspaces of $V$:

$V_X = \left\{ v \in V \mid y(v) = 0 \ \forall y \in Y \right\}$

$V_Y = \left\{ v \in V \mid x(v) = 0 \ \forall x \in X \right\}$

These are our candidates for the decomposition, and they at least have the virtue of intersecting trivially. It only remains to show that any element of $V$ is the sum of elements from these subspaces; but $V$ naturally embeds into the direct sum $X^* \oplus Y^*$, so we already have a decomposition for $v \in V$ in that space:

$v = v_x + v_y, \quad v_x \in X^*, \ v_y \in Y^*$

Thus, all that remains to show is that both $v_x$ and $v_y$ are elements of $V$, which is where I am currently stuck.

The answer is no, as $X$ need not have a predual. Indeed, take a space $X$ which is complemented in its bidual but not isomorphic to a dual space (e.g.
$X=L_1$) and consider $X^{**} = X \oplus Y$. Here $Y$ is a complement of $X$ in its bidual; $X^{**}$, being the dual of $X^*$, is certainly a dual space, yet the summand $X$ is not isomorphic to one. You can produce even more weird examples: $\ell_1$ (which has lots of projections) has preduals which are hereditarily indecomposable (so only finite/cofinite-dimensional subspaces are complemented).

It seems that you are interested in von Neumann algebras, so it need not hold even when your sum is isomorphic to $\ell_\infty(\Gamma)$, as there exist injective $C(K)$-spaces which are not isomorphic to dual spaces (Rosenthal). Take such a space and embed it into $\ell_\infty(\Gamma)$ for $\Gamma$ big enough.

Moral: there is absolutely no reason why the projections onto $X$ and $Y$ should be weak*-to-weak* continuous in general.
2021-04-10 12:40:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.915106475353241, "perplexity": 126.15799229419407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00300.warc.gz"}
http://openstudy.com/updates/5285da87e4b0b3376510db2a
• anonymous Can someone please explain to me how we can calculate the Hicksian substitution effect for this utility function? OCW Scholar - Principles of Microeconomics
2017-04-23 21:46:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.830730676651001, "perplexity": 3374.9097537771845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00569-ip-10-145-167-34.ec2.internal.warc.gz"}
https://rpg.stackexchange.com/questions/89262/are-ability-modifiers-sneak-attacks-multiplied-in-a-critical-hit
# Are ability modifiers/sneak attacks multiplied in a critical hit?

I know that in other d20 games, precision damage is not multiplied on critical hits, but I can't find the answer for 13th Age. Maybe, since it doesn't say, I have to assume that it does get multiplied. Also, what about ability modifiers: do they get multiplied in a critical hit?
2021-01-16 00:35:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8246617913246155, "perplexity": 1699.3988046294746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703497681.4/warc/CC-MAIN-20210115224908-20210116014908-00081.warc.gz"}
https://www.proofwiki.org/wiki/General_Fibonacci_Sequence_whose_Terms_are_all_Composite
# General Fibonacci Sequence whose Terms are all Composite

## Theorem

The general Fibonacci sequence $\left\langle{a_n}\right\rangle$ defined as:

$a_n = \begin{cases} r & : n = 0 \\ s & : n = 1 \\ a_{n - 2} + a_{n - 1} & : n > 1 \end{cases}$

where:

$r = 62 \, 638 \, 280 \, 004 \, 239 \, 857$

$s = 49 \, 463 \, 435 \, 743 \, 205 \, 655$

is such that:

$r$ and $s$ are coprime, and all its terms are composite.
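The theorem asserts compositeness of every term; a quick empirical check of the first couple of dozen terms is easy to run (a sketch, not a proof, assuming SymPy is available for primality testing):

from math import gcd
from sympy import isprime

r = 62638280004239857
s = 49463435743205655

terms = [r, s]
while len(terms) < 25:
    terms.append(terms[-2] + terms[-1])

print(gcd(r, s) == 1)                      # True: r and s are coprime
print(all(not isprime(t) for t in terms))  # True: every checked term is composite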
2022-08-15 03:00:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478352665901184, "perplexity": 1478.889299436361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00796.warc.gz"}
https://www.studysmarter.us/textbooks/math/essential-calculus-early-transcendentals-2nd/integrals/q43e-in-example-2-in-section-51-we-showed-that-int01-x2-dx-f/
### Essential Calculus: Early Transcendentals

2nd edition, James Stewart, 830 pages, ISBN 9781133112280. Problem Q43E, found on page 281.

# In Example 2 in Section 5.1 we showed that $$\int_0^1 {{x^2}} dx = \frac{1}{3}$$. Use this fact and the properties of integrals to evaluate $$\int_0^1 {\left( {5 - 6{x^2}} \right)} dx$$.

The value of $$\int_0^1 {\left( {5 - 6{x^2}} \right)} dx$$ is $$3$$.

## Step 1: Properties used for the integral

Suppose all the following integrals exist.

1. $$\int_a^b c \, dx = c(b - a)$$, where $$c$$ is any constant
2. $$\int_a^b {(f(x) + g(x))dx} = \int_a^b {f(x)dx} + \int_a^b {g(x)dx}$$
3. $$\int_a^b c f(x)dx = c\int_a^b f (x)dx$$, where $$c$$ is any constant
4. $$\int_a^b {(f(x) - g(x))dx} = \int_a^b {f(x)dx} - \int_a^b {g(x)dx}$$

## Step 2: Obtain the value of the integral

Obtain the value of the integral $$\int_0^1 {\left( {5 - 6{x^2}} \right)} dx$$ as follows. Apply properties 3 and 4 to the integral and compute.

$$\int_0^1 {\left( {5 - 6{x^2}} \right)} dx = \int_0^1 5 \, dx - \int_0^1 6{x^2} dx$$

$$= \int_0^1 5 \, dx - 6\int_0^1 {{x^2}} dx \quad \dots (1)$$

Apply property 1 to $$\int_0^1 5 \, dx$$ and calculate:

$$\int_0^1 5 \, dx = 5(1 - 0) = 5$$

## Step 3: Substitute the values in the given integral

It is given that $$\int_0^1 {{x^2}} dx = \frac{1}{3}$$. Substitute this in (1) and compute the value of the integral as follows.

\begin{aligned}\int_0^1 {\left( {5 - 6{x^2}} \right)} dx &= \int_0^1 5 \, dx - 6\int_0^1 {{x^2}} dx\\ &= 5 - 6 \times \frac{1}{3}\\ &= 5 - 2\\ &= 3\end{aligned}

Hence, the value of $$\int_0^1 {\left( {5 - 6{x^2}} \right)} dx$$ is 3.
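For a quick independent check of the result, the integral can also be evaluated symbolically (a sketch, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(5 - 6 * x**2, (x, 0, 1)))  # prints 3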
2023-03-23 01:31:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999779462814331, "perplexity": 766.8540717192784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00002.warc.gz"}
https://socratic.org/questions/the-longer-the-half-life-the-more-less-stable-the-nuclide
# The longer the half-life, the more/less stable the nuclide?

Oct 28, 2015

The longer the half-life, the more stable the nuclide.

#### Explanation:

First things first: what is nuclear half-life? Well, you know that nuclear half-life represents the time needed for an initial sample of a radioactive substance to be halved. In other words, the nuclear half-life tells you how much time must pass until half of whatever mass you started with decays to form daughter nuclides.

If this is the case, a long half-life implies that it takes a very long time for the nuclide to decay to half of its initial mass. This of course means that it is very stable. You can expect it to be around for a long period of time.

A short half-life, on the other hand, implies that the radioactive substance decays to half of its initial mass very quickly. Moreover, after another half-life passes, it decays to half of its current mass again. Another half-life passes, and its mass gets halved again. This means that it will be very unstable, since it will decay almost completely in a much shorter time.
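The verbal description above is exactly the standard decay law. With $m_0$ the initial mass and $t_{1/2}$ the half-life,

$$m(t) = m_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}$$

so a large $t_{1/2}$ means the mass shrinks slowly (a stable nuclide), while a small $t_{1/2}$ means it shrinks quickly (an unstable one).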
2019-10-22 14:41:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664469122886658, "perplexity": 524.2143350850114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00060.warc.gz"}
https://stats.stackexchange.com/questions/209418/hierarchical-modelling-partial-pooling-with-correlation
# Hierarchical modelling - partial pooling with correlation

I am doing a Bayesian regression. I have groups of data $(y_1, X_1), (y_2, X_2), \dots$, where each $y$ and $X$ is a vector. The subscript is regarded as the group number. The completely pooled regression would be $(y_1, y_2, \dots) \sim (X_1, X_2, \dots)$, with all groups stacked together. The completely unpooled regression would be to do a separate regression on each $X_i, y_i$. I am interested in a partial pooling scheme.

The thing here is that I believe similar groups, defined by similarity of group number (subscript), will have similar regression coefficients. Say, $\beta_1$ is a coefficient vector for $y_1 \sim \beta_1^TX_1$, $\beta_2$ is a coefficient vector for $y_2 \sim \beta_2^TX_2$, and so on. $\beta_{1j}$ is the $j$th element of the vector $\beta_1$.

What I do is build a covariance matrix using something like an RBF kernel so that

$$K= \begin{bmatrix} k(1,1) & k(1,2) & \dots & k(1,n) \\ k(2,1) & k(2,2) & \dots & k(2,n) \\ \vdots \\ k(n,1) & k(n,2) & \dots & k(n,n) \end{bmatrix},$$

where $k(\cdot,\cdot)$ is the kernel function. Then I can say that

$$\begin{bmatrix} \beta_{1i} \\ \beta_{2i} \\ \vdots \\ \beta_{ni} \end{bmatrix} \sim N(0, K\sigma^2),$$

where $N(\cdot,\cdot)$ is a multivariate normal distribution. This is the prior for the regression coefficients.

The problem is that I have about 100 groups, but when I try to fit the model using MCMC (pymc3), it is extremely slow. I was thinking of using a random walk to express the slow change of regression coefficients as we go from one group to the next. The problem I don't like about this approach is that the variance of the beta coefficients of groups with larger group numbers is much larger. I wonder if someone has some ideas on how to do this kind of thing in practice.

• How did you initialize your problem? Perhaps using a MAP solution (if possible) can then make convergence faster. – Vladislavs Dovgalecs Apr 27 '16 at 6:37
• Also, in PyMC3 the NUTS algorithm is available. If your variables are real-valued, maybe convergence can be achieved faster than with MCMC. – Vladislavs Dovgalecs Apr 27 '16 at 6:40
• I tried that, but the optimizer cannot even find a MAP solution. There is some numerical instability. If I reduce the number of variables to under 10, then it works. I suspect I have too many variables to optimize and do MCMC. Not sure if 1000 variables is too much for a hierarchical regression - each regression has 10 variables and I have 100 groups. So a total of 1,000 variables. I am using NUTS. – Tom Bennett Apr 27 '16 at 6:41
• About those numerical instabilities - do you have data for each individual regression? Is the observed data at least 10 points long? Maybe the MAP problem is ill-posed. – Vladislavs Dovgalecs Apr 27 '16 at 6:46
• What if you use the posterior for the solution with 10 variables as a crude prior for your model with 1,000 variables? – Vladislavs Dovgalecs Apr 27 '16 at 6:53
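A minimal PyMC3 sketch of the kernel-prior idea described above may make the setup concrete. The data arrays (group, x, y) are synthetic placeholders, a single predictor per group is used for brevity (the question has 10), and a non-centered parameterization through the Cholesky factor of $K$ is used, which often helps NUTS on exactly this kind of slow hierarchical model:

import numpy as np
import theano.tensor as tt
import pymc3 as pm

rng = np.random.RandomState(0)
n_groups = 100
group = rng.randint(0, n_groups, size=500)  # hypothetical group label per row
x = rng.randn(500)                          # hypothetical single predictor
y = rng.randn(500)                          # hypothetical responses

# RBF kernel over group numbers: nearby groups get correlated coefficients.
idx = np.arange(n_groups)
length_scale = 5.0
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(n_groups))  # jitter for stability

with pm.Model() as model:
    sigma_b = pm.HalfNormal("sigma_b", sd=1.0)
    # Non-centered parameterization: beta = sigma_b * L @ z with z ~ N(0, I),
    # so beta ~ N(0, sigma_b^2 * K) but the sampler sees iid geometry.
    z = pm.Normal("z", mu=0.0, sd=1.0, shape=n_groups)
    beta = pm.Deterministic("beta", sigma_b * tt.dot(L, z))
    sigma = pm.HalfNormal("sigma", sd=1.0)
    pm.Normal("y_obs", mu=beta[group] * x, sd=sigma, observed=y)
    trace = pm.sample(1000, tune=1000)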
2020-01-21 09:48:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7349114418029785, "perplexity": 390.72472963294666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601628.36/warc/CC-MAIN-20200121074002-20200121103002-00291.warc.gz"}
http://openstudy.com/updates/55809240e4b0636b8cc3e180
## anonymous one year ago

In the year 2535, astronauts Milky and Way are blasting around the solar system. If they could decide to visit 5 of the 9 planets, how many different selections could they make?

1. IrishBoy123 do you know "n choose k": e.g. $$^nC_k$$ or $$\left(\begin{matrix}n \\ k\end{matrix}\right)$$? if you don't, you can draw a diagram and figure it out.
2. anonymous what
3. anonymous eg nCk or (nk)
4. anonymous idk what that is
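The thread stops short of the computation; for reference, since the order of the visits doesn't matter, the count asked for is a binomial coefficient:

$$\binom{9}{5} = \frac{9!}{5!\,4!} = \frac{9 \cdot 8 \cdot 7 \cdot 6}{4 \cdot 3 \cdot 2 \cdot 1} = 126$$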
2017-01-23 05:05:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2788102924823761, "perplexity": 2632.260856928868}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00543-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.emathzone.com/tutorials/general-topology/second-countable-space.html
# Second Countable Space

Let $\left( {X,\tau } \right)$ be a topological space; then $X$ is said to be a second countable space if $\tau$ has a countable base. In other words, a topological space $\left( {X,\tau } \right)$ is said to be a second countable space if it has a countable open base. A second countable space is also said to be a space satisfying the second axiom of countability.

Example: If $X$ is finite, then every topology $\tau$ on $X$ is finite, since $\tau \subseteq P\left( X \right)$ and $P\left( X \right)$ is finite. A finite topology is itself a finite, hence countable, base, so $\left( {X,\tau } \right)$ is a second countable space. And since every second countable space is a first countable space, $\left( {X,\tau } \right)$ is also a first countable space.

Theorems

• Every second countable space is a first countable space, but the converse may not be true.
• Any uncountable set $X$ with the co-finite topology is not a first countable space, and so it is not a second countable space.
• The set of all intervals with rational endpoints is a countable base for the usual topology on $\mathbb{R}$; the real line is therefore a second countable space.
• Any uncountable set $X$ with the co-countable topology is not a first countable space, and so is not a second countable space.
2021-08-03 06:32:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711779952049255, "perplexity": 112.77546173673201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00297.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-2-limits-2-3-basic-limit-laws-preliminary-questions-page-58/3
## Calculus (3rd Edition)

The quotient law $$\lim _{x \rightarrow c} \frac{f(x)}{g(x)}=\frac{\lim _{x \rightarrow c} f(x)}{\lim _{x \rightarrow c} g(x)}$$ holds only when the limit of the denominator is nonzero, so it does not hold if $\lim_{x \rightarrow c} g(x) = 0$. That is, (a) is the correct answer.
2021-04-20 03:54:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986143708229065, "perplexity": 179.62124393391127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00605.warc.gz"}
https://codereview.stackexchange.com/questions/220254/take-n-elements-from-list-of-lists
# Take N elements from List of Lists

I've come up with a solution that is super tricky for a simple requirement. I think I could solve the problem using LINQ, but I'm not seeing it clearly at all. What's sure is that I'm not comfortable with my code. The requirement is the following:

Given a list of lists of strings (IEnumerable<IEnumerable<string>>), take the first N elements from them, but one by one. So given this list of lists and N=10:

{aaa, bb, ccc, ddd}
{eee, fff}
{ggg, hhhhh, iii, jjj, kkk}
{lll}
{1111, 22, 333, 444, 55555, 66666}

This is the output:

{aaa, eee, ggg, lll, 1111, bb, fff, hhhhh, 22, ccc}

And here is the code:

private IEnumerable<string> Extract(IEnumerable<IEnumerable<string>> listOfList, int N)
{
    var result = new List<string>();
    for (int i = 0; i < N; i++)
    {
        foreach (IEnumerable<string> list in listOfList)
        {
            if (list.Count() > i)
            {
                result.Add(list.ElementAt(i));
                if (result.Count >= N) return result;
            }
        }
    }
    return result;
}

The code works, but I don't think this is maintainable or easy to read. Thanks.

• Would you mind clarifying the spec a little? Are you meant to take the first N elements of each list, but taking the first of each, then the second of each, etc. or are you meant to take the first N of the sequences defined by taking the first, then the second, etc. Your code (and the example and all the answers) currently do the second, but the spec sounds like it wants the first. – VisualMelon May 16 at 18:09

Calling ElementAt inside a loop will cause the IEnumerable to be enumerated multiple times. This code isn't deferring execution either. Now if you think the previous code wasn't readable then hold on to your hats. I'm going to post some code in hopes someone can build on it and maybe come up with something better.

public static class IEnumerableExtensions
{
    public static IEnumerable<TSource> Interweave<TSource>(this IEnumerable<TSource> source, params IEnumerable<TSource>[] weavers)
    {
        // Create a list of enumerators, but reverse it since we will be removing
        // from the list and don't want to mess up the indexer
        var enumerators = new[] { source }.Concat(weavers).Select(x => x.GetEnumerator()).Reverse().ToList();
        try
        {
            while (enumerators.Count > 0)
            {
                // index backwards so we can remove from the list and not mess up the index
                for (var i = enumerators.Count - 1; i >= 0; i--)
                {
                    var currentEnumerator = enumerators[i];
                    if (currentEnumerator.MoveNext())
                    {
                        yield return currentEnumerator.Current;
                    }
                    else
                    {
                        currentEnumerator.Dispose();
                        enumerators.Remove(currentEnumerator);
                    }
                }
            }
        }
        finally
        {
            // finally block as we can't use "using": we have multiple disposables and don't know the count ahead of time
            if (enumerators != null)
            {
                enumerators.ForEach(x => x.Dispose());
            }
        }
    }
}

This doesn't do Take but you can just chain on the Take method. Example of it in use:

static void Main(string[] args)
{
    var one = new[] { "aaa", "bb", "ccc", "ddd" };
    var two = new[] { "eee", "fff" };
    var three = new[] { "ggg", "hhhhh", "iii", "jjj", "kkk" };
    var four = new[] { "lll" };
    var five = new[] { "1111", "22", "333", "444", "55555", "66666" };

    foreach (var item in one.Interweave(two, three, four, five).Take(6))
    {
        Console.WriteLine(item);
    }
}

• You can avoid the fiddly stuff with the list indices by using a queue instead. – Peter Taylor May 15 at 8:10
• This code isn't just more efficient, it will handle infinite and non-reusable IEnumerables, which the OP's code won't. I like the lists personally (though not-so-much the reverse order, and it would probably be tidier with a Queue); you might consider using RemoveAt(i) rather than Remove(currentEnumerator) to avoid a linear scan.
There is also no utility in the null check in the finally. – VisualMelon May 15 at 9:14
• @VisualMelon RemoveAt would be better. The null check was from docs.microsoft.com/en-us/dotnet/csharp/language-reference/… where they have code showing what using boils down to. I was wondering about it as well but put it in based on their example. – CharlesNRice May 15 at 13:21
• I think this is a great approach. Here's a fiddle of what it might look like with Dequeue + maybe re-Enqueue in place of Remove or RemoveAt. dotnetfiddle.net/sOYUn4 – benj2240 May 15 at 23:20
• @benj2240 the queue's constructor is eager and will enumerate the main collection entirely. – t3chb0t May 16 at 16:55

• I would rename N to count, which is descriptive and follows typical naming conventions. Your method would benefit from inline documentation (///), which could clarify the behaviour (what does Extract mean?!?) and describe the parameters precisely.
• It's good that you've used the general-purpose IEnumerable as the return type (gives you freedom to use lazy implementations like those provided by the other answers). I would consider removing the count parameter: the consumer can use LINQ's Take if they want. Currently the API means you can't just keep consuming stuff until you get bored (e.g. with TakeWhile or something), and lacks a specification as to what the method should do if it runs out of stuff to return, what to do with invalid inputs (e.g. -1), and all that fun stuff that comes with providing a nontrivial API.
• Note that t3chb0t has provided a generic implementation, so it works with lists of anything, and not just strings. There is basically no reason not to do this, and it means you will have a nice reusable piece of code that works with any type.
• Again, t3chb0t has made the method a static extension method: there is no need for your method to be an instance method unless it is swappable behaviour, which is not implied by the spec. An extension method means it will fit in nicely with the other LINQ methods that most of us use daily.

CharlesNRice's solution still has one flaw. It eagerly enumerates the main collection, and since we don't know its length, we'd better avoid that. Since the other answer already mentions flaws in your code, let me just add this lazy alternative.

It's a little bit tricky to make it deferred because you first need to enumerate the main collection and create enumerators for each sub-collection; this is what I use the isQueuePopulated flag for. Then you need to collect them in the queue for as long as you're enumerating the main collection. When this is done, you need to switch to the queue: you Dequeue the first enumerator, try to MoveNext, and if it succeeded, you return Current and Enqueue the enumerator for later.

public static IEnumerable<T> TakeEach<T>(this IEnumerable<IEnumerable<T>> source, int count)
{
    var counter = 0;
    var queue = new Queue<IEnumerator<T>>();
    var mainEnumerator = source.GetEnumerator();
    var isQueuePopulated = false;

    try
    {
        var e = default(IEnumerator<T>);
        while (!isQueuePopulated || queue.Any())
        {
            if (!isQueuePopulated && mainEnumerator.MoveNext())
            {
                e = mainEnumerator.Current.GetEnumerator();
            }
            else
            {
                isQueuePopulated = true;
            }

            e = isQueuePopulated ?
                queue.Dequeue() : e;

            if (e.MoveNext())
            {
                queue.Enqueue(e);
                yield return e.Current;
                if (++counter == count)
                {
                    yield break;
                }
            }
            else
            {
                e.Dispose();
            }
        }
    }
    finally
    {
        mainEnumerator.Dispose();
        foreach (var e in queue)
        {
            e.Dispose();
        }
    }
}

Another opportunity to solve this problem is to use Transpose() from MoreLINQ together with LINQ itself:

var listoflists = new List<List<string>>() { one, two, three, four, five };
var res = listoflists.Transpose()
    .SelectMany(x => x)
    .Take(10);

Result:

{ "aaa", "eee", "ggg", "lll", "1111", "bb", "fff", "hhhhh", "22", "ccc" }

• The Transpose extension is implemented virtually exactly as we did it here ;) – t3chb0t May 17 at 9:28
• yep ;) An opportunity not to implement it yourself – HelloWorld May 17 at 9:30
• And I find their _() hilarious. This should have never been accepted LOL – t3chb0t May 17 at 9:30
• Although, now I think my solution is better because it's deferred on every collection. Theirs is using Acquire which is eager and enumerates the main collection completely. – t3chb0t May 17 at 9:32
• @t3chb0t it doesn't have much choice, since it yields columns at a time ;) (good point nonetheless) (I also don't like that transpose method's API... transpose ought to be reversible in my book) – VisualMelon May 17 at 9:37
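For comparison across languages, the same round-robin idea has a compact lazy formulation in Python using plain iterators (a sketch; the names are my own):

from itertools import islice

def round_robin(lists):
    # Yield one element from each iterable in turn until all are exhausted.
    iterators = [iter(lst) for lst in lists]
    while iterators:
        alive = []
        for it in iterators:
            try:
                yield next(it)
            except StopIteration:
                continue
            alive.append(it)
        iterators = alive

lists = [["aaa", "bb", "ccc", "ddd"], ["eee", "fff"],
         ["ggg", "hhhhh", "iii", "jjj", "kkk"], ["lll"],
         ["1111", "22", "333", "444", "55555", "66666"]]
print(list(islice(round_robin(lists), 10)))
# ['aaa', 'eee', 'ggg', 'lll', '1111', 'bb', 'fff', 'hhhhh', '22', 'ccc']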
2019-07-23 21:31:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2658059895038605, "perplexity": 3606.09300106937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00494.warc.gz"}
https://www.maths.usyd.edu.au/s/scnitm/billu-ComputationalAlgebraSemin-090
SMS scnews item created by Bill Unger at Mon 12 Mar 2018 1116
Type: Seminar
Distribution: World
Expiry: 16 Mar 2018
Calendar1: 15 Mar 2018 1400-1500
CalLoc1: Carslaw 535A
CalTitle1: Roney-Dougal -- Backtrack search problems in permutation groups
Auth: billu@laplace.maths.usyd.edu.au

Computational Algebra Seminar: Roney-Dougal -- Backtrack search problems in permutation groups

Speaker: Colva Roney-Dougal (St Andrews)
Title: Backtrack search problems in permutation groups
Time & Place: 2-3pm, Thursday 15 March, Carslaw 535

Abstract: When computing with subgroups of a finite symmetric group Sym(n), we generally seek algorithms whose worst-case running time is bounded by a low-degree polynomial in n. However, there are a range of problems where no such algorithms are known, and where the best current methods involve algorithms whose running time may be much worse than this. I’ll give an introduction to this class of problems, present results on various special cases where we can recover polynomial running time, and briefly discuss some ongoing work on computing normalisers in the full symmetric group. The new material is joint work with Mun See Chang and Chris Jefferson at St Andrews.
2022-12-07 06:32:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35165682435035706, "perplexity": 8947.013007588414}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00622.warc.gz"}